
Journal articles on the topic 'Online Ratings'



Consult the top 50 journal articles for your research on the topic 'Online Ratings.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Calixto, Nathaniel E., Whitney Chiao, Megan L. Durr, and Nancy Jiang. "Factors Impacting Online Ratings for Otolaryngologists." Annals of Otology, Rhinology & Laryngology 127, no. 8 (June 8, 2018): 521–26. http://dx.doi.org/10.1177/0003489418778062.

Abstract:
Objective: To identify factors associated with online patient ratings and comments for a nationwide sample of otolaryngologists. Methods: Ratings, demographic information, and written comments were obtained for a random sample of otolaryngologists from HealthGrades.com and Vitals.com. An Online Presence Score (OPS) was based on 10 criteria, including professional website and social media profiles. Regression analyses identified factors associated with increased rating. We evaluated correlations of OPS and other attributes with star rating and used chi-square tests to evaluate content differences between positive and negative comments. Results: On linear regression, increased OPS was associated with higher ratings on HealthGrades and Vitals; higher ratings were also associated with younger age on Vitals and less experience on HealthGrades. However, detailed correlation studies showed weak correlation between OPS and rating; age and graduation year also showed low correlation with ratings. Negative comments more often focused on surgeon-independent factors or poor bedside manner. Conclusion: Though younger otolaryngologists with greater online presence tend to have higher ratings, weak correlations suggest that age and online presence have only a small impact on the content found on ratings websites. While most written comments are positive, deficiencies in bedside manner or other physician-independent factors tend to elicit negative comments.
2

Ackerman, David, and Christina Chung. "Is RateMyProfessors.com Unbiased? A Look at the Impact of Social Modelling on Student Online Reviews of Marketing Classes." Journal of Marketing Education 40, no. 3 (October 16, 2017): 188–95. http://dx.doi.org/10.1177/0273475317735654.

Abstract:
This article looks at how marketing students' ratings of instructors and classes on online rating sites such as RateMyProfessors.com can be biased by prior student ratings of that class. Research has identified potential sources of bias in online student reviews administered by universities. Less has been done on the sources of bias inherent in a ratings site where those doing the rating can see prior ratings. To measure how students' online ratings of a course can be influenced by existing online ratings, the study used five prior-ratings experimental conditions: mildly negative prior ratings, strongly negative prior ratings, mildly positive prior ratings, strongly positive prior ratings, and a control condition with no prior ratings. Results of this study suggest that prior online ratings, both positive and negative, do affect and bias subsequent online ratings. There are several implications. First, both negative and positive ratings can bias subsequent ratings. Second, negative prior ratings must sometimes be strong in valence to bias subsequent ratings, whereas even mildly positive ratings can have an impact. Last, this bias can potentially influence student course selection.
3

Wurm, Lee H., Annmarie Cano, and Diana A. Barenboym. "Ratings gathered online vs. in person." Mental Lexicon 6, no. 2 (August 3, 2011): 325–50. http://dx.doi.org/10.1075/ml.6.2.05wur.

Abstract:
Barenboym, Wurm, and Cano (2010) recently showed that significant differences emerged for ratings gathered online and in person. They also showed that researchers could reach different statistical conclusions in a regression analysis, depending on whether the norms were gathered online or in person. In the current study that research was extended. Familiarity ratings gathered online were significantly higher than those gathered in the lab, for a set of 300 potential stimuli. The in-person ratings correlated significantly better with an existing database of familiarity values. It is also shown that under three different grouping methods, online and in-person familiarity ratings produce different sets of stimuli. Finally, it is demonstrated that in each case, different conclusions are reached about variables that have a significant relationship with familiarity. Simulations show that the effects are driven disproportionately by higher intra-item variability in the online ratings. Studies in which stimuli are grouped on the basis of ratings can be affected by the choice of rating methodology.
4

Zillioux, Jacqueline, C. William Pike, Devang Sharma, and David E. Rapp. "Analysis of Online Urologist Ratings: Are Rating Differences Associated With Subspecialty?" Journal of Patient Experience 7, no. 6 (August 24, 2020): 1062–67. http://dx.doi.org/10.1177/2374373520951901.

Abstract:
Patients are increasingly using online rating websites to obtain information about physicians and to provide feedback. We performed an analysis of urologist online ratings, with specific focus on the relationship between overall rating and urologist subspecialty. We conducted an analysis of urologist ratings on Healthgrades.com. Ratings were sampled across 4 US geographical regions, with focus across 3 practice types (large and small private practice, academic) and 7 urologic subspecialties. Statistical analysis was performed to assess for differences among subgroup ratings. Data were analyzed for 954 urologists with a mean age of 53 (±10) years. The median overall urologist rating was 4.0 [3.4-4.7]. Providers in an academic practice type or robotics/oncology subspecialty had statistically significantly higher ratings when compared to other practice settings or subspecialties (P < 0.001). All other comparisons between practice types, specialties, regions, and sexes failed to demonstrate statistically significant differences. In our study of online urologist ratings, robotics/oncology subspecialty and academic practice setting were associated with higher overall ratings. Further study is needed to assess reasons underlying this difference.
5

Neo, Rachel L. "The Limits of Online Consensus Effects: A Social Affirmation Theory of How Aggregate Online Rating Scores Influence Trust in Factual Corrections." Communication Research 47, no. 5 (June 21, 2018): 771–92. http://dx.doi.org/10.1177/0093650218782823.

Abstract:
Research on bandwagon effects suggests that people will yield to aggregate online rating scores even when forming evaluations of contentious content. However, such findings derive mainly from studying partisan news selection behaviors and are therefore incomplete. How do people use ratings to evaluate whether factual corrections on contentious issues are trustworthy? Through what I term the social affirmation heuristic, I hypothesize, people will first assess rating scores for compatibility with their own beliefs; they will then invest trust only in ratings of factual messages that affirm their beliefs, while distrusting ratings that disaffirm them. I further predict that distrusted ratings will elicit boomerang effects, causing evaluations of message trustworthiness to conflict with rating scores. I use an online experiment (n = 157) and a nationally representative survey experiment (N = 500) to test these ideas. All hypotheses received clear support. Implications of the findings are discussed.
6

Le Mens, Gaël, Balázs Kovács, Judith Avrahami, and Yaakov Kareev. "How Endogenous Crowd Formation Undermines the Wisdom of the Crowd in Online Ratings." Psychological Science 29, no. 9 (July 25, 2018): 1475–90. http://dx.doi.org/10.1177/0956797618775080.

Abstract:
People frequently consult average ratings on online recommendation platforms before making consumption decisions. Research on the wisdom-of-the-crowd phenomenon suggests that average ratings provide unbiased quality estimates. Yet we argue that the process by which average ratings are updated creates a systematic bias. In analyses of more than 80 million online ratings, we found that items with high average ratings tend to attract more additional ratings than items with low average ratings. We call this asymmetry in how average ratings are updated endogenous crowd formation. Using computer simulations, we showed that it implies the emergence of a negative bias in average ratings. This bias affects items with few ratings particularly strongly, which leads to ranking mistakes. The average-rating rankings of items with few ratings are worse than their quality rankings. We found evidence for the predicted pattern of biases in an experiment and in analyses of large online-rating data sets.
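To make the mechanism concrete, here is a minimal simulation in the spirit of the abstract; the arrival rule and all parameters are invented for illustration, not taken from the paper. Items whose running average is high attract more new ratings, and the set of items that end up with few ratings typically shows a negative mean bias.

```python
import random

def simulate_item(quality, steps=50, noise=1.0):
    """New ratings arrive with probability increasing in the current average."""
    ratings = [quality + random.gauss(0, noise)]  # seed rating
    for _ in range(steps):
        avg = sum(ratings) / len(ratings)
        p_new = min(1.0, max(0.05, avg / 5.0))  # assumed arrival rule
        if random.random() < p_new:
            ratings.append(quality + random.gauss(0, noise))
    return ratings

def bias(quality, ratings):
    # Average rating minus true quality
    return sum(ratings) / len(ratings) - quality

random.seed(0)
items = [(q, simulate_item(q)) for q in
         [random.uniform(1, 5) for _ in range(2000)]]
few = [bias(q, r) for q, r in items if len(r) <= 15]
many = [bias(q, r) for q, r in items if len(r) > 15]
print("mean bias (few ratings): %+.3f" % (sum(few) / len(few)))
print("mean bias (many ratings): %+.3f" % (sum(many) / len(many)))
```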
7

Bumpass, David B., and Julie Balch Samora. "Understanding Online Physician Ratings." Journal of the American Academy of Orthopaedic Surgeons 7, no. 9 (September 2013): 21. http://dx.doi.org/10.5435/00124635-201309010-00014.

8

Huber, Sarah A., Jennifer Priestley, Khushabu Kasabwala, Bogdan Gadidov, and Patrick Culligan. "Understanding Your Online Ratings." Female Pelvic Medicine & Reconstructive Surgery 25, no. 2 (2019): 193–97. http://dx.doi.org/10.1097/spv.0000000000000676.

9

Atkinson, Sean. "Current status of online rating of Australian doctors." Australian Journal of Primary Health 20, no. 3 (2014): 222. http://dx.doi.org/10.1071/py14056.

Abstract:
Online rating of patient satisfaction with their doctor is increasingly common worldwide. This study of 4157 ratings of Australian doctors found that patients were extremely satisfied with their doctor. However, this result was limited by a low prevalence of rated doctors and low numbers of ratings per doctor. Further studies are needed to determine how online rating will affect future practice for all doctors.
10

Emmert, Martin, and Stuart McLennan. "One Decade of Online Patient Feedback: Longitudinal Analysis of Data From a German Physician Rating Website." Journal of Medical Internet Research 23, no. 7 (July 26, 2021): e24229. http://dx.doi.org/10.2196/24229.

Abstract:
Background: Feedback from patients is an essential element of a patient-oriented health care system. Physician rating websites (PRWs) are a key way patients can provide feedback online. This study analyzes an entire decade of online ratings for all medical specialties on a German PRW. Objective: The aim of this study was to examine how ratings posted on a German PRW have developed over the past decade. In particular, it aimed to explore (1) the distribution of ratings according to time-related aspects (year, month, day of the week, and hour of the day) between 2010 and 2019, (2) the number of physicians with ratings, (3) the average number of ratings per physician, (4) the average rating, (5) whether differences exist between medical specialties, and (6) the characteristics of the patients rating physicians. Methods: All scaled-survey online ratings that were posted on the German PRW jameda between 2010 and 2019 were obtained. Results: In total, 1,906,146 ratings were posted on jameda between 2010 and 2019 for 127,921 physicians. The number of rated physicians increased constantly from 19,305 in 2010 to 82,511 in 2018. The average number of ratings per rated physician increased from 1.65 (SD 1.56) in 2010 to 3.19 (SD 4.69) in 2019. Overall, 75.2% (1,432,624/1,906,146) of all ratings were in the best rating category of "very good," and 5.7% (107,912/1,906,146) of the ratings were in the lowest category of "insufficient." However, the mean of all ratings was 1.76 (SD 1.53) on the German school-grade 6-point rating scale (1 being the best), with a relatively constant distribution over time. General practitioners, internists, and gynecologists received the highest number of ratings (343,242, 266,899, and 232,914, respectively). Male patients, those of higher age, and those covered by private health insurance gave significantly (P<.001) more favorable evaluations compared to their counterparts. Physicians with a lower number of ratings tended to receive ratings across the rating scale, while physicians with a higher number of ratings tended to have better ratings. Physicians with between 21 and 50 online ratings received the lowest ratings (mean 1.95, SD 0.84), while physicians with >100 ratings received the best ratings (mean 1.34, SD 0.47). Conclusions: This study is one of the most comprehensive analyses of PRW ratings to date. More than half of all German physicians have been rated on jameda each year since 2016, and the overall average number of ratings per rated physician nearly doubled over the decade. Nevertheless, we could also observe a decline in the number of ratings over the last 2 years. Future studies should investigate the most recent development in the number of ratings on both other German and international PRWs as well as reasons for the heterogeneity in online ratings by medical specialty.
11

Hu, Xingbao (Simon), Yang Yang, and Sangwon Park. "A meta-regression on the effect of online ratings on hotel room rates." International Journal of Contemporary Hospitality Management 31, no. 12 (December 9, 2019): 4438–61. http://dx.doi.org/10.1108/ijchm-10-2018-0835.

Abstract:
Purpose: Online ratings (review valence) have been found to exert a strong influence on hotel room prices. This study aims to systematically synthesize research estimating the impact of online ratings on room rates using a meta-analytical method. Design/methodology/approach: From major academic databases, a total of 163 estimates of the effects of online ratings on room rates were coded from 22 studies across different countries through a systematic review of relevant literature. All estimates were converted into elasticity-type effect sizes, and a hierarchical linear meta-regression was used to investigate factors explaining variations in the effect sizes. Findings: The median elasticity of online ratings on hotel room rates was estimated to be 0.851, meaning that a 1% increase in rating is associated with roughly a 0.85% increase in room rate. Meta-regression results highlighted four categories of factors moderating the size of this elasticity: data characteristics, research settings, variable measures, and publication outlet. Among sub-ratings, results revealed value rating and room rating to exert the largest impact on room rates, whereas staff and cleanliness ratings demonstrated non-significant impacts. Practical implications: This study provides practical implications on the relative importance of different types of online ratings for online reputation and revenue management. Originality/value: This study represents the first research effort to understand factors moderating the effects of online ratings on hotel room rates based on a quantitative review of the literature. Moreover, this study provides beneficial insights into the specification of empirical hedonic pricing models and data-collection strategies, such as the selection of price variables and choices of model functional forms.
12

Velasco, Brian T., Bonnie Chien, John Y. Kwon, and Christopher P. Miller. "Online Ratings and Reviews of American Orthopaedic Foot and Ankle Surgeons." Foot & Ankle Specialist 13, no. 1 (February 22, 2019): 43–49. http://dx.doi.org/10.1177/1938640019832363.

Abstract:
Background. Utilization of physician rating websites continues to expand. There is limited information on how these websites function and influence patient perception and physicians' practices. No study to our knowledge has investigated online ratings and comments of orthopaedic foot and ankle surgeons. We identified factors impacting online ratings and comments for this subset of surgeons. Methods. 210 orthopaedic foot and ankle surgeons were selected from the American Orthopaedic Foot and Ankle Society (AOFAS) website. Demographic information, ratings, and comments were reviewed on the 3 most visited public domain physician ratings websites: HealthGrades.com, Vitals.com, and Ratemds.com. Content differences between positive and negative comments were evaluated. Results. The mean review rating was 4.03 ± 0.57 out of 5 stars; 84% of all ratings were either 1 star (17%) or 5 stars (67%). Most positive comments related to outcome, physician personality, and communication, whereas most negative comments related to outcome, bedside manner, and waiting time. Chi-square analyses revealed statistically significant proportions of positive comments pertaining to surgeon-dependent factors (eg, physician personality, knowledge, skills) and negative comments concerning surgeon-independent factors (eg, waiting time, logistics). Conclusion. This study examined online ratings and written comments of orthopaedic foot and ankle surgeons. Surgeons had a generally favorable rating and were likely to have positive comments. Patients were likely to write positive comments about surgeon personality and communication, and negative comments pertaining to bedside manner and waiting time. Knowledge and management of online content may allow surgeons to improve patient satisfaction and online ratings. Level of Evidence: Level IV
13

Hendrikx, Roy Johannus Petrus, Hanneke Wil-Trees Drewes, Marieke Spreeuwenberg, Dirk Ruwaard, and Caroline Baan. "Measuring Regional Quality of Health Care Using Unsolicited Online Data: Text Analysis Study." JMIR Medical Informatics 7, no. 4 (December 16, 2019): e13053. http://dx.doi.org/10.2196/13053.

Abstract:
Background: Regional population management (PM) health initiatives require insight into experienced quality of care at the regional level. Unsolicited online provider ratings have shown potential for this use. This study explored the addition of comments accompanying unsolicited online ratings to regional analyses. Objective: The goal was to create additional insight for each PM initiative as well as overall comparisons between these initiatives by attempting to determine the reasoning and rationale behind a rating. Methods: The Dutch Zorgkaart database provided the unsolicited ratings from 2008 to 2017 for the analyses. All ratings included both quantitative ratings and qualitative text comments. Nine PM regions were used to aggregate ratings geographically. Sentiment analyses were performed by categorizing ratings into negative, neutral, and positive ratings. Per category, as well as per PM initiative, word frequencies (i.e., unigrams and bigrams) were explored. Machine learning (naïve Bayes and random forest models) was applied to identify the most important predictors for rating overall sentiment and for identifying PM initiatives. Results: A total of 449,263 unsolicited ratings were available in the Zorgkaart database: 303,930 positive ratings, 97,739 neutral ratings, and 47,592 negative ratings. Bigrams illustrated that feeling like not being "taken seriously" was the dominant bigram in negative ratings, while bigrams in positive ratings were mostly related to listening, explaining, and perceived knowledge. Comparing bigrams between PM initiatives showed a lot of overlap, but several differences were identified. Machine learning was able to predict the sentiment of comments but was unable to distinguish between specific PM initiatives. Conclusions: Adding information from the text comments that accompany online ratings to regional evaluations provides insight for PM initiatives into the underlying reasons for ratings. Text comments provide useful overarching information for health care policy makers, but due to a lot of overlap, they add little region-specific information. Specific outliers for some PM initiatives are insightful.
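For readers unfamiliar with the pipeline, the rating-based sentiment categorization and classification steps described above can be sketched roughly as follows; this uses scikit-learn and tiny invented English comments in place of the Dutch Zorgkaart corpus, so it illustrates the technique rather than the study's exact setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-ins for unsolicited ratings: (comment, numeric score out of 10)
reviews = [
    ("listens carefully and explains everything", 9),
    ("very knowledgeable, took time for my questions", 10),
    ("did not feel taken seriously at all", 2),
    ("long wait and rushed consultation", 3),
    ("friendly staff, clear explanation", 8),
    ("complaints were dismissed, not taken seriously", 1),
]

def sentiment(score):
    # Categorize ratings into negative / neutral / positive, as in the study
    return "negative" if score <= 4 else ("neutral" if score <= 6 else "positive")

texts = [t for t, _ in reviews]
labels = [sentiment(s) for _, s in reviews]

# Unigram and bigram counts feed a naive Bayes sentiment classifier
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["I was not taken seriously"]))
```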
14

Chen, Yanyan, Yumei Zhong, Sumin Yu, Yan Xiao, and Sining Chen. "Exploring Bidirectional Performance of Hotel Attributes through Online Reviews Based on Sentiment Analysis and Kano-IPA Model." Applied Sciences 12, no. 2 (January 11, 2022): 692. http://dx.doi.org/10.3390/app12020692.

Abstract:
As people increasingly make hotel booking decisions relying on online reviews, how to effectively improve customer ratings has become a major point for hotel managers. Online reviews serve as a promising data source to enhance service attributes in order to improve online bookings. This paper employs online customer ratings and textual reviews to explore the bidirectional performance (good performance in positive reviews and poor performance in negative reviews) of hotel attributes in terms of four hotel star ratings. Sentiment analysis and a combination of the Kano model and importance-performance analysis (IPA) are applied. Feature extraction and sentiment analysis techniques are used to analyze the bidirectional performance of hotel attributes in terms of four hotel star ratings from 1,090,341 online reviews of hotels in London collected from TripAdvisor.com (accessed on 4 January 2022). In particular, a new sentiment lexicon for hospitality domain is built from numerous online reviews using the PolarityRank algorithm to convert textual reviews into sentiment scores. The Kano-IPA model is applied to explain customers’ rating behaviors and prioritize attributes for improvement. The results provide determinants of high/low customer ratings to different star hotels and suggest that hotel attributes contributing to high/low customer ratings vary across hotel star ratings. In addition, this paper analyzed the Kano categories and priority rankings of six hotel attributes for each star rating of hotels to formulate improvement strategies. Theoretical and practical implications of these results are discussed in the end.
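The lexicon built with PolarityRank is ultimately used to convert each textual review into a numeric sentiment score; stripped of the ranking step, that final conversion reduces to a dictionary lookup, as in this illustrative sketch (the mini-lexicon below is invented, not the paper's).

```python
# Invented mini-lexicon: word -> polarity in [-1, 1]
LEXICON = {"clean": 0.8, "friendly": 0.9, "noisy": -0.7,
           "dirty": -0.9, "comfortable": 0.6, "rude": -0.8}

def sentiment_score(review: str) -> float:
    """Average the polarities of known words; 0.0 if none are known."""
    hits = [LEXICON[w] for w in review.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("Spotlessly clean room but rude reception staff"))
# mean of +0.8 ("clean") and -0.8 ("rude") = 0.0, i.e. mixed sentiment
```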
15

Oh, Hyun-Kyo, Sang-Wook Kim, Sunju Park, and Ming Zhou. "Can You Trust Online Ratings? A Mutual Reinforcement Model for Trustworthy Online Rating Systems." IEEE Transactions on Systems, Man, and Cybernetics: Systems 45, no. 12 (December 2015): 1564–76. http://dx.doi.org/10.1109/tsmc.2015.2416126.

16

Zhang, Jun, Adan Omar, and Addisu Mesfin. "Online Ratings of Spine Surgeons." Spine 43, no. 12 (June 2018): E722–E726. http://dx.doi.org/10.1097/brs.0000000000002488.

17

Vu, Alexander F., Gabriela M. Espinoza, Julian D. Perry, and Rao V. Chundury. "Online Ratings of ASOPRS Surgeons." Ophthalmic Plastic and Reconstructive Surgery 33, no. 6 (2017): 466–70. http://dx.doi.org/10.1097/iop.0000000000000829.

18

Benabio, Jeffrey. "Do online doctor ratings matter?" Skin & Allergy News 44, no. 2 (February 2013): 11–12. http://dx.doi.org/10.1016/s0037-6337(13)70037-8.

19

Benabio, Jeffrey. "Do online doctor ratings matter?" Internal Medicine News 46, no. 2 (February 2013): 11. http://dx.doi.org/10.1016/s1097-8690(13)70060-x.

20

Hardy, Nedra. "Online Ratings: Fact and Fiction." New Directions for Teaching and Learning 2003, no. 96 (2003): 31–38. http://dx.doi.org/10.1002/tl.120.

21

Llewellyn, Donna C. "Online Reporting of Results for Online Student Ratings." New Directions for Teaching and Learning 2003, no. 96 (2003): 61–68. http://dx.doi.org/10.1002/tl.123.

22

Velasco, Brian T., Bonnie Chien, John Y. Kwon, and Christopher P. Miller. "Online Ratings and Reviews of American Orthopedic Foot and Ankle Surgeons." Foot & Ankle Orthopaedics 4, no. 4 (October 1, 2019): 2473011419S0042. http://dx.doi.org/10.1177/2473011419s00424.

Abstract:
Category: Practice Management. Introduction/Purpose: Utilization by patients of physician rating websites continues to expand. There is limited information on how these websites function and influence patient perception and physicians' practices. No study has specifically investigated online ratings and comments of orthopedic foot and ankle surgeons. In this study, we identified what factors impact online ratings and comments for this subset of surgeons. Methods: A total of 210 orthopedic foot and ankle surgeons in or near metropolitan areas were randomly selected from the American Orthopaedic Foot & Ankle Society (AOFAS) website. Demographic information, ratings (number of stars), and comments were reviewed on the three most visited public domain physician ratings websites: HealthGrades.com (HealthGrades), Vitals.com (Vitals), and Ratemds.com (Ratemds). Content differences between positive and negative comments were evaluated. Results: Orthopedic foot and ankle surgeons had a mean rating of 4.03 ± 0.57 out of 5 stars. A high percentage (84%) of the total number of ratings were either 1 star (17%) or 5 stars (67%). Most positive comments related to outcome, physician personality, and communication, while most negative comments related to outcome, bedside manner, and waiting time. Chi-square analysis revealed statistically significant proportions of positive comments pertaining to surgeon-dependent factors (eg, physician personality, knowledge, skills) and of negative comments concerning surgeon-independent factors (eg, waiting time, logistics). Conclusion: This is the first study examining online ratings and written comments of orthopedic foot and ankle surgeons. Surgeons had a generally favorable rating and were likely to have positive comments. Patients were likely to write positive comments about surgeon personality and communication, and negative comments pertaining to bedside manner and waiting time. Knowledge and management of online content may allow surgeons to improve patient satisfaction and online ratings.
23

Jia, Susan (Sixue). "Behind the ratings: Text mining of restaurant customers’ online reviews." International Journal of Market Research 60, no. 6 (January 4, 2018): 561–72. http://dx.doi.org/10.1177/1470785317752048.

Abstract:
Establishing the relation between online ratings and reviews provides a potentially inexpensive and effective way for restaurants to capture quality-improvement hints from customers. To this end, this study proposes an integrated approach that leverages text mining and empirical modeling to quantitatively correlate ratings with reviews. From Dianping.com (a Chinese crowd-sourced online review community), 49,080 pairs of restaurant ratings and reviews were examined, with high-frequency words, major topics, and subtopics identified. Multilinear regression was employed to screen out the most impactful factors that influence taste, environment, and service ratings. Managerially, the idea of triggering a synergistic benefit from customer ratings and reviews offers a useful reference for market practitioners both within and beyond the catering industry.
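A compressed sketch of the text-mining-plus-regression idea follows, using scikit-learn with invented English reviews standing in for the Dianping data: term features are regressed on ratings, and large coefficients flag the most impactful factors.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression

# Invented review/rating pairs; the real study used 49,080 from Dianping.com
reviews = ["fresh ingredients and fast service", "bland food, slow service",
           "cozy atmosphere, fresh fish", "slow kitchen and cold noodles"]
ratings = [5.0, 2.0, 4.5, 2.5]  # e.g. taste ratings paired with the reviews

vec = TfidfVectorizer()
X = vec.fit_transform(reviews)
reg = LinearRegression().fit(X.toarray(), ratings)

# Terms with the largest coefficients are the most impactful factors
impact = sorted(zip(vec.get_feature_names_out(), reg.coef_),
                key=lambda p: -abs(p[1]))
for word, coef in impact[:5]:
    print(f"{word:12s} {coef:+.2f}")
```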
24

Donnally, Chester J., Johnathon R. McCormick, Deborah J. Li, James A. Maguire, Grant P. Barker, Augustus J. Rush, and Michael Y. Wang. "How do physician demographics, training, social media usage, online presence, and wait times influence online physician review scores for spine surgeons?" Journal of Neurosurgery: Spine 30, no. 2 (February 2019): 279–88. http://dx.doi.org/10.3171/2018.8.spine18553.

Abstract:
OBJECTIVE: The purpose of this study was to assess the impact of certain demographics, social media usage, and physician review website variables for spine surgeons across Healthgrades.com (Healthgrades), Vitals.com (Vitals), and Google.com (Google). METHODS: Through a directory of registered North American Spine Society (NASS) physicians, we identified spine surgeons practicing in Texas (107 neurosurgery trained, 192 orthopedic trained). Three physician rating websites (Healthgrades, Vitals, Google) were accessed to obtain surgeon demographics, training history, practice setting, number of ratings/reviews, and overall score (January 2, 2018–January 16, 2018). Using only the first 10 search results from Google.com, we then identified whether the surgeon had a website presence or an accessible social media account on Facebook, Twitter, and/or Instagram. RESULTS: Physicians with either a personal or institutional website had a higher overall rating on Healthgrades compared to those who did not have a website (p < 0.01). Nearly all spine surgeons had a personal or institutional website (90.3%), and at least 1 accessible social media account was recorded for 43.5% of the spine surgeons in our study cohort (39.5% Facebook, 10.4% Twitter, 2.7% Instagram). Social media presence was not significantly associated with overall ratings across all 3 sites, but it did significantly correlate with more comments on Healthgrades. In multivariable analysis, increasing surgeon age was significantly associated with a lower overall rating across all 3 review sites (p < 0.05). Neurosurgeons had higher overall ratings on Vitals (p = 0.04). Longer wait times were significantly associated with a lower overall rating on Healthgrades (p < 0.0001). Overall ratings from all 3 websites correlated significantly with each other, indicating agreement between physician ratings across different platforms. CONCLUSIONS: Longer wait times, increasing physician age, and the absence of a website are indicative of lower online review scores for spine surgeons. Neurosurgery training correlated with a higher overall review score on Vitals. Having an accessible social media account does not appear to influence scores, but it is correlated with increased patient feedback on Healthgrades. Identification of ways to optimize patients' perception of care is important in the future of performance-based medicine.
25

Jia, Susan (Sixue). "Understanding Cruise Tourists' Satisfaction by Analysing Their Online Ratings and Reviews." Journal of Management and Strategy 10, no. 4 (June 20, 2019): 1. http://dx.doi.org/10.5430/jms.v10n4p1.

Abstract:
Associating online ratings with reviews provides a potentially cost-effective way for cruise tourism managers to capture satisfaction and quality-improvement information from customers, so as to facilitate the sustainable development of the tourism industry. For this purpose, this study proposes an integrated approach that leverages text mining and empirical modeling to quantitatively correlate online ratings with reviews. From TripAdvisor.com, 4,248 pairs of ratings and reviews regarding Chicago's First Lady Cruises were examined, with major topics identified. Subsequently, multilinear regression was employed to screen out the most impactful factors that influence overall ratings. Managerially, the idea of triggering a synergistic benefit from customer ratings and reviews offers a useful reference for market practitioners both within and beyond the tourism industry.
26

Wahyudi, Taesar. "Pengaruh Online Customer Review dan Online Customer Rating terhadap Kepercayaan Konsumen Remaja Kota Mataram pada Pembelian Produk Fashion Shopee Online Shop" [The effect of online customer reviews and online customer ratings on the trust of adolescent consumers in Mataram when buying fashion products from the Shopee online shop]. Jurnal Riset Manajemen 19, no. 1 (August 10, 2019): 1. http://dx.doi.org/10.29303/jrm.v19i1.33.

Abstract:
This study aims to prove and analyze the effect of online customer reviews and online customer ratings on customer trust in online shopping at Shopee. The research is correlational, as it examines the relationships between two or more variables. Data were collected using a survey method. The population comprises adolescents in the city of Mataram, aged 10-24 years, who had bought fashion products online at Shopee. Purposive sampling was used, with a total sample of 120, and questionnaires were distributed via Google Forms. The data were analyzed with multiple linear regression using SPSS 22 for Windows. The results indicate that online customer reviews and online customer ratings have a significant positive effect on customer trust. Keywords: Online Customer Review, Online Customer Rating, Customer Trust.
27

Romere, Chase M., and Romil F. Shah. "Discordance in online commercial ratings of orthopaedic surgeons: a retrospective review of online rating scores." Current Orthopaedic Practice 34, no. 1 (November 15, 2022): 53–55. http://dx.doi.org/10.1097/bco.0000000000001190.

28

Shin, Daniel, and Denis Darpy. "Rating, review and reputation: how to unlock the hidden value of luxury consumers from digital commerce?" Journal of Business & Industrial Marketing 35, no. 10 (November 2, 2020): 1553–61. http://dx.doi.org/10.1108/jbim-01-2019-0029.

Abstract:
Purpose: Product ratings and reviews are popular tools to support the buying decisions of consumers, and many e-commerce platforms now offer them because they are valuable for online retailers. However, the luxury goods industry has been slow to adapt to the digital terrain. This paper asks how luxury consumers see user-generated product ratings and reviews in their online shopping experience, and what factors or values they perceive as important when they shop online. Design/methodology/approach: To understand how luxury consumers use product ratings and reviews before buying online, a survey with a situational set-up of varying rating, review, and price options for a number of hypothetical luxury goods was conducted among 421 global luxury consumers drawn from over 6,000 people. The study was carried out over six weeks from September to October 2018 as an online and mobile survey. The user population consisted of high-net-worth individuals and luxury consumers from the author's professional and social networks and communities; their geographical coverage was global but concentrated around major cities. Findings: The survey shows that ratings and reviews can be an important source of information for luxury consumers. Online ratings and reviews were rated as helpful by 76.01% of the participants, and 86.94% chose the highly rated item (4.8/5) over the poorly rated one (3.7/5) when all else, such as product category, star rating format, and price range, was about the same. Feedback from the open-question survey indicates that the perceived helpfulness of rating and review systems can vary: comparing user reviews is time-consuming because of the unstructured nature of contextual reviews and the relative nature of human perception on the rating scale. Research limitations/implications: Two aspects of ratings and reviews play an important role in luxury consumers' buying decisions. First is the helpfulness of the collective rating score: luxury consumers see a user-generated rating score and use it when making a choice, even though the rating is a relative rather than absolute figure, unlike the star rating system in the hotel industry. Second, there is a discrepancy between the status of a brand, its price position, and its perceived value, as the industry does not classify its brands in any official star rating system. Practical implications: Consumers need compact and concise information about the products they are considering. When only a few potential products remain on their short list, full user reviews can provide more details and general opinions before a final decision. A primary indicator guiding the luxury consumer's decision-making process would therefore be the trustworthiness of the user rating of each product as an aggregated score, along with potential sub-ratings, visible from the product landing page. Originality/value: Despite the wide use and ubiquity of product ratings and reviews for other consumer products, little previous research has examined how luxury consumers use ratings and reviews in their buying decisions, in spite of the importance of this issue. The luxury goods industry reached €320 billion in 2017, according to Bain & Co., and 25% of trading volume is expected to shift to digital commerce by 2025.
29

Li, Xuan, Shin-Yi Chou, Mary E. Deily, and Mengcen Qian. "Comparing the Impact of Online Ratings and Report Cards on Patient Choice of Cardiac Surgeon: Large Observational Study." Journal of Medical Internet Research 23, no. 10 (October 28, 2021): e28098. http://dx.doi.org/10.2196/28098.

Abstract:
Background: Patients may use two information sources about a health care provider's quality: online physician reviews, which are written by patients to reflect their subjective experience, and report cards, which are based on objective health outcomes. Objective: The aim of this study was to examine the impact of online ratings on patient choice of cardiac surgeon compared to that of report cards. Methods: We obtained ratings from a leading physician review platform, Vitals; report card scores from Pennsylvania Cardiac Surgery Reports; and information about patients' choices of surgeons from inpatient records on coronary artery bypass graft (CABG) surgeries done in Pennsylvania from 2008 to 2017. We scraped all reviews posted on Vitals for surgeons who performed CABG surgeries in Pennsylvania during our study period. We linked the average overall rating and the most recent report card score at the time of a patient's surgery to the patient's record based on the surgeon's name, focusing on fee-for-service patients to avoid impacts of insurance networks on patient choices. We used random coefficient logit models with surgeon fixed effects to examine the impact of receiving a high online rating and a high report card score on patient choice of surgeon for CABG surgeries. Results: We found that a high online rating had positive and significant effects on patient utility, with limited variation in preferences across individuals, while the impact of a high report card score on patient choice was trivial and insignificant. About 70.13% of patients considered no information on Vitals better than a low rating; the corresponding figure was 26.66% for report card scores. The findings were robust to alternative choice set definitions and were not explained by surgeon attrition, referral effect, or admission status. Our results also show that the interaction effect of rating information and a time trend was positive and significant for online ratings, but small and insignificant for report cards. Conclusions: A patient's choice of surgeon is affected by both types of rating information; however, over the past decade, online ratings have become more influential, while the effect of report cards has remained trivial. Our findings call for information provision strategies that incorporate the advantages of both online ratings and report cards.
30

Choi, Dongjun, Hosik Choi, and Changyi Park. "Classification of ratings in online reviews." Journal of the Korean Data and Information Science Society 27, no. 4 (July 31, 2016): 845–54. http://dx.doi.org/10.7465/jkdi.2016.27.4.845.

31

Pike, C. William, Jacqueline Zillioux, and David Rapp. "Online Ratings of Urologists: Comprehensive Analysis." Journal of Medical Internet Research 21, no. 7 (July 2, 2019): e12436. http://dx.doi.org/10.2196/12436.

32

Besbes, Omar, and Marco Scarsini. "On Information Distortions in Online Ratings." Operations Research 66, no. 3 (June 2018): 597–610. http://dx.doi.org/10.1287/opre.2017.1676.

33

Riemer, Christie, Monica Doctor, and Robert P. Dellavalle. "Analysis of Online Ratings of Dermatologists." JAMA Dermatology 152, no. 2 (February 1, 2016): 218. http://dx.doi.org/10.1001/jamadermatol.2015.4991.

34

Trehan, Samir K., Christopher DeFrancesco, Joseph Nguyen, and Aaron Daluiski. "Online Patient Ratings of Hand Surgeons." Journal of Hand Surgery 40, no. 9 (September 2015): e25-e26. http://dx.doi.org/10.1016/j.jhsa.2015.06.046.

35

Trehan, Samir K., Christopher J. DeFrancesco, Joseph T. Nguyen, Resmi A. Charalel, and Aaron Daluiski. "Online Patient Ratings of Hand Surgeons." Journal of Hand Surgery 41, no. 1 (January 2016): 98–103. http://dx.doi.org/10.1016/j.jhsa.2015.10.006.

36

Tassiello, Vito, Giampaolo Viglia, and Anna S. Mattila. "How handwriting reduces negative online ratings." Annals of Tourism Research 73 (November 2018): 171–79. http://dx.doi.org/10.1016/j.annals.2018.05.007.

37

Johnson, Trav D. "Online Student Ratings: Will Students Respond?" New Directions for Teaching and Learning 2003, no. 96 (2003): 49–59. http://dx.doi.org/10.1002/tl.122.

38

Sobin, Lindsay, and Parul Goyal. "Trends of Online Ratings of Otolaryngologists." JAMA Otolaryngology–Head & Neck Surgery 140, no. 7 (July 1, 2014): 635. http://dx.doi.org/10.1001/jamaoto.2014.818.

39

Ivanova, Olga, and Michael Scholz. "How can online marketplaces reduce rating manipulation? A new approach on dynamic aggregation of online ratings." Decision Support Systems 104 (December 2017): 64–78. http://dx.doi.org/10.1016/j.dss.2017.10.003.

40

Komariyah, Desi Intan. "Pengaruh Online Customer Riview dan Rating Terhadap Minat Pembelian Online Shopee (Studi Kasus Pada Santri Putri Pondok Pesantren Salafiyah Syafi'iyah Seblak Jombang)" [The effect of online customer reviews and ratings on Shopee online purchase interest (a case study of female students of the Salafiyah Syafi'iyah Seblak Islamic boarding school, Jombang)]. BIMA: Journal of Business and Innovation Management 4, no. 2 (February 16, 2022): 343–58. http://dx.doi.org/10.33752/bima.v4i2.410.

Abstract:
This research analyzes the influence of online customer reviews and ratings on Shopee online purchase interest among the female students of Pondok Pesantren Salafiyah Syafi'iyah Seblak. The research method used is quantitative, with data collected by distributing questionnaires to 60 respondents. The results show that online customer reviews and ratings each had a significant positive effect on purchase interest: the computed t-value for online customer review (2.419) exceeded the table t-value (1.6715) with a significance of 0.019 < 0.05, and the computed t-value for rating (13.792) exceeded the table t-value (1.6715) with a significance of 0.000 < 0.05. Together, online customer reviews and ratings also had a positive and significant influence on purchase interest: the computed F-value (94.405) exceeded the table F-value (3.16) with a significance of 0.000 < 0.05. The hypotheses of this study are therefore accepted, and online customer reviews and ratings are among the important factors that influence purchase interest. Keywords: Online Customer Review; Rating and Purchase Interests
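The hypothesis-testing logic reported here (per-predictor t-tests plus a joint F-test from a multiple linear regression) can be reproduced in a few lines. The sketch below uses statsmodels on synthetic data, so the variable names mirror the study but the numbers are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 120
review = rng.normal(3.5, 0.8, n)    # perceived review quality (made up)
rating = rng.normal(4.0, 0.6, n)    # star-rating favorability (made up)
interest = 0.3 * review + 0.9 * rating + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([review, rating]))
model = sm.OLS(interest, X).fit()

# Per-predictor t-statistics and the joint F-statistic, as in the study
print(model.tvalues[1:], model.pvalues[1:])  # t for review and rating
print(model.fvalue, model.f_pvalue)          # overall F test
```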
41

Su, Ja-Hwung, Chu-Yu Chin, Yi-Wen Liao, Hsiao-Chuan Yang, Vincent S. Tseng, and Sun-Yuan Hsieh. "A Personalized Music Recommender System Using User Contents, Music Contents and Preference Ratings." Vietnam Journal of Computer Science 07, no. 01 (February 2020): 77–92. http://dx.doi.org/10.1142/s2196888820500049.

Abstract:
Recently, advances in communication technologies have made music retrieval easier. Without downloading the music, users can listen to it through online music websites, which raises the challenging issue of how to provide users with an effective online listening service. Although a number of past studies have paid attention to this issue, the new-user, new-item, and rating-sparsity problems are not easy to solve. To deal with these problems, in this paper, we propose a novel music recommender system that fuses user contents, music contents, and preference ratings to enhance music recommendation. To deal with the new-user problem, user similarities are calculated from user profiles instead of traditional ratings. With these user similarities, unknown ratings can be predicted using user-based Collaborative Filtering (CF). To deal with the rating-sparsity and new-item problems, unknown ratings are initialized by acoustic features and music genre ratings. Because the unknown ratings are initially imputed, the rating data are enriched, and user preference can then be predicted effectively by item-based CF. The evaluation results show that our proposed music recommender system performs better than state-of-the-art methods in terms of Root Mean Squared Error.
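As a rough illustration of the profile-based user similarity and rating prediction the abstract outlines: the profiles and ratings below are invented, and the real system additionally fuses acoustic features and genre ratings.

```python
import numpy as np

# User profiles (e.g. age band, genre preferences) replace rating history
# for cold-start users; rows are users, columns are profile attributes.
profiles = np.array([[1.0, 0.0, 0.8],
                     [0.9, 0.1, 0.7],
                     [0.0, 1.0, 0.2]])
# Known ratings matrix (users x songs), 0 = unknown
R = np.array([[5.0, 0.0, 3.0],
              [4.0, 4.0, 0.0],
              [1.0, 5.0, 2.0]])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def predict(user, item):
    """User-based CF: similarity-weighted average of others' known ratings."""
    sims, vals = [], []
    for other in range(R.shape[0]):
        if other != user and R[other, item] > 0:
            sims.append(cosine(profiles[user], profiles[other]))
            vals.append(R[other, item])
    return float(np.dot(sims, vals) / (sum(sims) + 1e-12)) if sims else 0.0

print(predict(0, 1))  # estimate user 0's rating of song 1
```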
42

Dong, Jian, Yu Chen, Aihua Gu, Jingwei Chen, Lili Li, Qinling Chen, Shujun Li, and Qifeng Xun. "Potential Trend for Online Shopping Data Based on the Linear Regression and Sentiment Analysis." Mathematical Problems in Engineering 2020 (August 26, 2020): 1–11. http://dx.doi.org/10.1155/2020/4591260.

Abstract:
How to reduce the cost of competition in the industry, identify effective customers, and understand the emotional needs and consumer preferences of customers, so as to carry out fast and accurate commercial marketing, is an important research topic. In this paper, we discuss a method for analyzing three product datasets that represent customer-supplied ratings and reviews for microwave ovens, baby pacifiers, and hair dryers sold in the Amazon marketplace over time. Sentiment analysis, linear regression analysis, and descriptive statistics were used to analyze the three datasets. Based on the sentiment analysis given by the naive Bayesian classification algorithm, we found that the star rating is positively correlated with the reviews, while the helpfulness ratings have no specific relationship with the star rating or reviews. We used multiple regression analysis and a clustering algorithm to examine the relationships among four indexes: time, star rating, reviews, and helpfulness rating. We find a positive correlation among the four indexes, and the online reputation of the products improves over time. Based on the analysis of the positive reviews and star ratings, we suggest that positive reviews can indicate a potentially successful or failing product. We also discuss the relation between star ratings and the number of reviews. Finally, we selected words from the Amazon sentiment dictionary as candidate words; by counting the candidate words' appearances in the reviews, we identified keywords that reflect the star rating.
43

Kostyk, Alena, James M. Leonhardt, and Mihai Niculescu. "Simpler online ratings formats increase consumer trust." Journal of Research in Interactive Marketing 11, no. 2 (June 12, 2017): 131–41. http://dx.doi.org/10.1108/jrim-06-2016-0062.

Abstract:
Purpose: Online customer ratings are ubiquitous in e-commerce. However, in presenting these ratings to consumers, e-commerce websites utilize different formats. The purpose of this paper is to investigate the effects of customer ratings formats on consumer trust and processing fluency. Design/methodology/approach: Drawing on the latest behavioral research, two empirical experimental studies test whether the format of online customer ratings affects consumer trust and processing fluency. Findings: The studies offer converging evidence that a simpler ratings format (i.e., mean format) elicits higher processing fluency and, in turn, higher consumer trust than does a more complex ratings format (i.e., distribution format). Research limitations/implications: Future research could include additional factors that might influence the ease of online ratings processing for consumers. Investigation of possible moderators, such as need for cognition, numeracy, and consumer involvement, may also be of value. Practical implications: These findings have timely practical implications for the design and presentation of customer ratings to enhance e-commerce outcomes. Originality/value: This paper extends the effects of processing fluency on consumer trust to the increasingly important context of e-commerce. In doing so, it highlights important interactions between the evolving information environment and consumer judgment. The key takeaway for managers is that simpler online customer ratings formats help to enhance consumer trust.
44

Leung, Rosanna, Norman Au, Jianwei Liu, and Rob Law. "Do customers share the same perspective? A study on online OTAs ratings versus user ratings of Hong Kong hotels." Journal of Vacation Marketing 24, no. 2 (November 25, 2016): 103–17. http://dx.doi.org/10.1177/1356766716679483.

Abstract:
Hotel service levels and pricing range are often denoted by the "star" rating system predominant in that country. This rating system depends considerably on travel agencies to disseminate information to consumers to assist them in their hotel selection. Given the popularity of online travel agencies (OTAs) and review websites, consumers can now compare published star and user ratings of hotels online to obtain a complete idea of hotel service standards from the perspective of other users. This study analyzed the differences among the star and user ratings published on eight popular OTAs. Findings showed that Priceline and Ctrip had the lowest website star ratings, whereas Booking.com and Agoda had the highest for both local chain and independent hotels. A comparison of the star and user ratings indicated that Priceline, TripAdvisor, and Hotels.com showed no statistical difference, but the other five OTAs exhibited statistical differences. The findings also indicated that Ctrip had higher user rating scores among the OTAs, possibly indicating that Chinese users rate hotels higher than other nationalities do.
45

Yap, Ching Seng, Mor Yang Ong, and Rizal Ahmad. "Online Product Review, Product Knowledge, Attitude, and Online Purchase Behavior." International Journal of E-Business Research 13, no. 3 (July 2017): 33–52. http://dx.doi.org/10.4018/ijebr.2017070103.

Abstract:
This case study aims to investigate buyers' post-purchase behavior on feedback ratings. From the data collected from eBay, the statistical analysis shows that the average time length that buyers post their feedback after auctions completion is 15.5 days. New sellers and experienced sellers have different chances to receive feedback. New sellers are more likely to receive negative feedback over positive feedback. The distribution of the feedback types (negative, neutral and positive) does not match that of their associated monetary volumes. This case study also demonstrates that inexperienced eBay buyers are more likely to post negative feedback ratings than experienced ones. New and used products attract different ratings in the three feedback types. With word cloud and word frequency analysis, the authors identify common issues associated with each of the three types of feedback. The paper also discusses the managerial implications and recommendations based on these findings.
46

Li, Wen-Jun, Qiang Dong, and Yan Fu. "Investigating the Temporal Effect of User Preferences with Application in Movie Recommendation." Mobile Information Systems 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/8940709.

Abstract:
With the rapid development of the mobile Internet and smart devices, more and more online content providers have begun to collect the preferences of their customers through various apps on mobile devices. These preferences are largely reflected in ratings of online items with explicit scores. Both positive and negative ratings are helpful for recommender systems to provide relevant items to a target user. Based on an empirical analysis of three real-world movie-rating data sets, we observe that users' rating criteria change over time, and that past positive and negative ratings have different influences on users' future preferences. Given this, we propose a recommendation model on a session-based temporal graph that considers the difference between long- and short-term preferences and the different temporal effects of positive and negative ratings. Extensive experimental results validate the significant accuracy improvement of our proposed model compared with state-of-the-art methods.
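One minimal way to encode "recent ratings matter more, and positive and negative ratings decay differently" is exponential decay with separate half-lives. This is a deliberate simplification of the paper's session-based temporal graph, with assumed half-life values.

```python
import math

def preference_score(events, now, pos_half_life=90.0, neg_half_life=30.0):
    """events: list of (timestamp_days, rating on a 1-5 scale).
    Positive signals persist longer than negative ones (assumed half-lives)."""
    score = 0.0
    for t, rating in events:
        signed = rating - 3.0                    # >0 positive, <0 negative
        hl = pos_half_life if signed > 0 else neg_half_life
        decay = math.exp(-math.log(2) * (now - t) / hl)
        score += signed * decay
    return score

history = [(0, 5), (40, 2), (85, 4)]   # (day rated, stars)
print(preference_score(history, now=90))
```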
47

Ok, Lee, and Kim. "Recommendation Framework Combining User Interests with Fashion Trends in Apparel Online Shopping." Applied Sciences 9, no. 13 (June 28, 2019): 2634. http://dx.doi.org/10.3390/app9132634.

Abstract:
Although fashion-related products account for most of the online shopping categories, it becomes more difficult for users to search and find products matching their taste and needs as the number of items available online increases explosively. Personalized recommendation of items is the best method for both reducing user effort on searching for items and expanding sales opportunity for sellers. Unfortunately, experimental studies and research on fashion item recommendation for online shopping users are lacking. In this paper, we propose a novel recommendation framework suitable for online apparel items. To overcome the rating sparsity problem of online apparel datasets, we derive implicit ratings from user log data and generate predicted ratings for item clusters by user-based collaborative filtering. The ratings are combined with a network constructed by an item click trend, which serves as a personalized recommendation through a random walk. An empirical evaluation on a large-scale real-world dataset obtained from an apparel retailer demonstrates the effectiveness of our method.
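The implicit-rating derivation can be approximated by mapping interaction counts from the click log onto a bounded rating scale; a toy sketch under that assumption (the damping rule is invented, not the paper's):

```python
import math
from collections import Counter

# Toy click log: (user, item) pairs extracted from raw sessions
log = [("u1", "coat"), ("u1", "coat"), ("u1", "coat"),
       ("u1", "scarf"), ("u2", "coat"), ("u2", "boots"), ("u2", "boots")]

clicks = Counter(log)

def implicit_rating(user, item, max_rating=5.0):
    """Log-damped click count mapped onto a 0..max_rating scale."""
    c = clicks[(user, item)]
    return min(max_rating, math.log1p(c) / math.log1p(10) * max_rating)

print(implicit_rating("u1", "coat"))   # ~2.9 of 5 after three clicks
print(implicit_rating("u2", "scarf"))  # 0.0, never clicked
```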
48

Cezar, Asunur, and Hulisi Ögüt. "Analyzing conversion rates in online hotel booking." International Journal of Contemporary Hospitality Management 28, no. 2 (February 8, 2016): 286–304. http://dx.doi.org/10.1108/ijchm-05-2014-0249.

Abstract:
Purpose – The aim of this paper is to examine the impact of three main technologies on converting browsers into customers: impact of review rating (location rating and service rating), recommendation and search listings. Design/methodology/approach – This paper estimates conversion rate model parameters using a quasi-likelihood method with the Bernoulli log-likelihood function and parametric regression model based on the beta distribution. Findings – The results show that a high rank in search listings, a high number of recommendations and location rating have a significant and positive impact on conversion rates. However, service rating and star rating do not have a significant effect on conversion rate. Furthermore, room price and hotel size are negatively associated with conversion rate. It was also found that a high rank in search listings, a high number of recommendations and location rating increase online hotel bookings. Furthermore, it was found that a high number of recommendations increase the conversion rate of hotels with low ranks. Practical implications – The findings show that hotels’ location ratings are more important than both star and service ratings for the conversion of visitors into customers. Thus, hotels that are located in convenient locations can charge higher prices. The results may also help entrepreneurs who are planning to open new hotels to forecast the conversion rates and demand for specific locations. It was found that a high number of recommendations help to increase the conversion rate of hotels with low ranks. This result suggests that a high numbers of recommendations mitigate the adverse effect of a low rank in search listings on the conversion rate. Originality/value – This paper contributes to the understanding of the drivers of conversion rates in online channels for the successful implementation of hotel marketing.
49

State, Bogdan, Bruno Abrahao, and Karen Cook. "Power Imbalance and Rating Systems." Proceedings of the International AAAI Conference on Web and Social Media 10, no. 1 (August 4, 2021): 368–77. http://dx.doi.org/10.1609/icwsm.v10i1.14753.

Abstract:
Ratings are critical to the function and success of services in the emerging sharing economy. They are a means through which users develop trust in one another and in the services themselves. Ratings are designed to give users a proxy for the expected quality and risk of potential online transactions. We expect online ratings to reflect an objective measure of quality, but such evaluations in fact may be systematically distorted by many, complex social-psychological processes. Decoupling these subjective factors from rating systems to correct for biases and to provide neutral assessments of risk and quality has proved extremely challenging. We focus on one of the most prevalent factors in virtually every form of social exchange. Differences in resource ownership affect the balance of power in interpersonal interactions, likely impacting online ratings. We demonstrate how power imbalance affects mutual ratings using a massive dataset from CouchSurfing.org, an international online hospitality exchange network. Our methodology employs a deductive approach to knowledge discovery. Through a series of observational experiments, we find support for a sociological theory dating back to the 1960s, Power-Dependence Theory (PD), as a possible explanation. PD predicts that power-imbalanced relationships induce user behavior that attempts to balance power. We find support for status-giving as a likely mechanism driving the asymmetry of ratings between power-unequal users. Our findings underscore the need for ratings systems to account for the tendency of mutual ratings between users that hold differential resources to be asymmetrical, especially under conditions of resource scarcity.
50

Xie, Hong, Yongkun Li, and John C. S. Lui. "Understanding Persuasion Cascades in Online Product Rating Systems." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5490–97. http://dx.doi.org/10.1609/aaai.v33i01.33015490.

Abstract:
Online product rating systems have become an indispensable component of numerous web services such as Amazon, eBay, the Google Play store, and TripAdvisor. One function of such systems is to uncover product quality via product ratings (or reviews) contributed by consumers. However, a well-known psychological phenomenon called "message-based persuasion" leads to "biased" product ratings in a cascading manner (we call this the persuasion cascade). This paper investigates: (1) How does the persuasion cascade influence the product quality estimation accuracy? (2) Given a real-world product rating dataset, how can one infer the persuasion cascade and analyze it to draw practical insights? We first develop a mathematical model to capture key factors of a persuasion cascade. We formulate a high-order Markov chain to characterize the opinion dynamics of a persuasion cascade and prove the convergence of opinions. We further bound the product quality estimation error for a class of rating aggregation rules, including the average scoring rule, via matrix perturbation theory and the Chernoff bound. We also design a maximum likelihood algorithm to infer parameters of the persuasion cascade. We conduct experiments on data from Amazon and TripAdvisor, and show that persuasion cascades notably exist, but the average scoring rule has a small product quality estimation error under practical scenarios.
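A toy rendering of a persuasion cascade and the average scoring rule, as a first-order simplification of the paper's high-order Markov model with invented parameters: each new rating is pulled toward the displayed average of earlier ratings, yet the running average typically still lands near the true quality.

```python
import random

def cascaded_ratings(quality, n=500, alpha=0.3, noise=0.7):
    """alpha is the assumed persuasion weight on previously displayed ratings."""
    ratings = []
    for _ in range(n):
        honest = quality + random.gauss(0, noise)
        if ratings:
            shown_avg = sum(ratings) / len(ratings)
            honest = (1 - alpha) * honest + alpha * shown_avg  # persuasion pull
        ratings.append(min(5.0, max(1.0, honest)))  # clamp to the 1-5 scale
    return ratings

random.seed(1)
r = cascaded_ratings(quality=4.2)
print(sum(r) / len(r))  # average scoring rule; typically close to 4.2
```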