DOI: 10.1145/2043932.2043961
Research article

Rating: how difficult is it?

Published: 23 October 2011

Abstract

Netflix.com uses star ratings, Digg.com uses up/down votes, and Facebook uses a "like" but not a "dislike" button. Despite the popularity and diversity of these rating scales, research offers little guidance for designers choosing between them.
This paper compares four different rating scales: unary ("like it"), binary (thumbs up / thumbs down), five-star, and a 100-point slider. Our analysis draws upon 12,847 movie and product review ratings collected from 348 users through an online survey. We a) measure the time and cognitive load required by each scale, b) study how rating time varies with the rating value assigned by a user, and c) survey users' satisfaction with each scale.
Overall, users work harder with more granular rating scales, but these effects are moderated by item domain (product reviews or movies). Given a particular scale, users' rating times vary significantly between items they like and items they dislike. Our findings about users' rating effort and satisfaction suggest guidelines for designers choosing between rating scales.
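As an illustration of the kind of comparison the abstract describes, the sketch below shows one way to log per-item rating events across the four scales and summarize mean rating time per scale. The event fields, scale labels, and function names are hypothetical and are not the instrumentation or analysis used by the authors.

```python
# Minimal sketch (assumed data layout, not the paper's actual instrumentation):
# log one event per rating, then compare average rating time across scales.
from dataclasses import dataclass
from statistics import mean
from collections import defaultdict

SCALES = ("unary", "binary", "five_star", "slider_100")  # the four scales compared

@dataclass
class RatingEvent:
    user_id: str
    item_id: str
    scale: str      # one of SCALES
    value: float    # e.g. 1 for "like it", 0/1 thumbs, 1-5 stars, 0-100 slider
    seconds: float  # time from item display to rating submission

def mean_time_by_scale(events):
    """Average rating time per scale -- the kind of effort comparison the study reports."""
    times = defaultdict(list)
    for e in events:
        if e.scale in SCALES:
            times[e.scale].append(e.seconds)
    return {scale: mean(ts) for scale, ts in times.items()}

# Example usage with made-up data:
events = [
    RatingEvent("u1", "movie42", "five_star", 4, 3.1),
    RatingEvent("u1", "movie42", "unary", 1, 1.2),
    RatingEvent("u2", "prod7", "slider_100", 63, 4.8),
]
print(mean_time_by_scale(events))
```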




    Published In

    RecSys '11: Proceedings of the fifth ACM conference on Recommender systems
    October 2011
    414 pages
    ISBN: 9781450306836
    DOI: 10.1145/2043932


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. rating scales
    2. recommender systems
    3. user studies


    Conference

    RecSys '11: Fifth ACM Conference on Recommender Systems
    October 23 - 27, 2011
    Chicago, Illinois, USA

    Acceptance Rates

    Overall Acceptance Rate 254 of 1,295 submissions, 20%


    Cited By

    • (2024) What influences users to provide explicit feedback? A case of food delivery recommenders. User Modeling and User-Adapted Interaction, 34(3):753-796. DOI: 10.1007/s11257-023-09385-8
    • (2024) Practical Use of AI-Based Learning Recommendations in Higher Education. Methodologies and Intelligent Systems for Technology Enhanced Learning, 14th International Conference, 57-66. DOI: 10.1007/978-3-031-73538-7_6
    • (2023) Online Information Filtering: The Role of Contextual Cues in Electronic Networks of Practice. ACM SIGMIS Database: the DATABASE for Advances in Information Systems, 54(4):77-106. DOI: 10.1145/3631341.3631347
    • (2023) The influence of user personality and rating scale features on rating behaviour: an empirical study. Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, 1-8. DOI: 10.1145/3605390.3605410
    • (2022) The Effect of Feedback Granularity on Recommender Systems Performance. Proceedings of the 16th ACM Conference on Recommender Systems, 586-591. DOI: 10.1145/3523227.3551479
    • (2022) TastePaths: Enabling Deeper Exploration and Understanding of Personal Preferences in Recommender Systems. Proceedings of the 27th International Conference on Intelligent User Interfaces, 120-133. DOI: 10.1145/3490099.3511156
    • (2022) Formalization and implementation of credibility dynamics through prioritized multiple revision. International Journal of Approximate Reasoning, 147(C):1-22. DOI: 10.1016/j.ijar.2022.05.001
    • (2022) Willingness to pay for automated taxis: a stated choice experiment to measure the impact of in-vehicle features and customer reviews. Transportation, 51(1):51-72. DOI: 10.1007/s11116-022-10319-3
    • (2021) Human-centered recommender systems. AI Magazine, 42(3):31-42. DOI: 10.1609/aimag.v42i3.18142
    • (2021) By the Crowd and for the Crowd: Perceived Utility and Willingness to Contribute to Trustworthiness Indicators on Social Media. Proceedings of the ACM on Human-Computer Interaction, 5(GROUP):1-24. DOI: 10.1145/3463930
