Exploring potential biases towards blockbuster items in ranking-based recommendations

Abstract

Popularity bias is the intrinsic tendency of recommendation algorithms to feature popular items more prominently than unpopular ones in the ranked lists they produce. When investigating the adverse effects of popularity bias, the literature has usually focused only on the most frequently rated items. However, an item’s popularity does not necessarily mean that it is highly liked by individuals; in fact, the degree of liking may introduce biases that are even more severe than the well-known popularity bias in terms of beyond-accuracy evaluations. In the present study, we therefore consider items that are both popular and highly liked, which we refer to as blockbuster items, and investigate whether recommendation algorithms impose a considerable bias in favor of such items in their ranking-based recommendations. To this end, we first present a practical formulation that measures the blockbuster level of an item by effectively combining its degree of liking and its popularity. Based on this formulation, we then perform a comprehensive set of experiments with ten different algorithms on five datasets with different characteristics to explore potential biases towards blockbuster items in recommendations. The experimental outcomes demonstrate that most recommenders propagate an undesirable bias towards blockbuster items in their recommendations, and that this bias is not, in fact, caused by item popularity. Moreover, the observed biases towards blockbuster items are more harmful and persistent than those towards merely popular ones in terms of beyond-accuracy aspects such as diversity, catalog coverage, and novelty. The results also suggest that conventional popularity-debiasing strategies are not particularly effective at mitigating the adverse effects of the observed blockbuster bias in recommendations.
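The paper's actual formulation is not reproduced on this page, so the sketch below only illustrates the general idea described in the abstract: scoring each item by combining its degree of liking (e.g., mean rating) with its popularity (e.g., log-scaled rating count). The function name `blockbuster_scores`, the normalization, and the weight `alpha` are illustrative assumptions, not the authors' definition.

```python
from collections import defaultdict
import math


def blockbuster_scores(ratings, alpha=0.5):
    """Illustrative blockbuster score (an assumption, not the paper's formula):
    a convex combination of an item's normalized liking degree (mean rating)
    and its normalized popularity (log-scaled rating count).

    `ratings` is an iterable of (user_id, item_id, rating) triples.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for _, item, rating in ratings:
        sums[item] += rating
        counts[item] += 1

    max_mean = max(sums[i] / counts[i] for i in counts)
    max_log_count = max(math.log1p(counts[i]) for i in counts)

    scores = {}
    for i in counts:
        liking = (sums[i] / counts[i]) / max_mean            # liking degree in (0, 1]
        popularity = math.log1p(counts[i]) / max_log_count   # popularity in (0, 1]
        scores[i] = alpha * liking + (1 - alpha) * popularity
    return scores


# Tiny synthetic example: item "b" is both frequently rated and highly rated,
# so it should receive the largest blockbuster score.
data = [(1, "a", 5), (2, "a", 2),
        (1, "b", 5), (2, "b", 5), (3, "b", 4), (4, "b", 5),
        (3, "c", 2), (4, "c", 3)]
print(blockbuster_scores(data))
```

Under these assumptions, a ranked list's exposure of blockbusters could then be audited by comparing the average blockbuster score of the recommended items with the catalog-wide average, alongside beyond-accuracy metrics such as diversity, catalog coverage, and novelty.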




Notes

  1. https://www.netflix.com/.

  2. https://www.spotify.com/.

  3. https://www.booking.com/.

  4. https://www.amazon.com/.

  5. https://www.instagram.com/.

  6. https://www.metacritic.com/game/playstation-4/cyberpunk-2077/.

  7. https://github.com/eMRe5832/BlockbusterBias.

  8. https://grouplens.org/datasets/movielens/100k/.

  9. https://grouplens.org/datasets/movielens/1m/.

  10. https://www.ciao.co.uk/.

  11. https://webscope.sandbox.yahoo.com/catalog.php?datatype=r/.

  12. http://fastml.com/goodbooks-10k.

  13. https://surpriselib.com/.


Acknowledgements

This work is supported by the Scientific Research Project Fund of Sivas Cumhuriyet University under the project number M-2021-811.

Author information

Corresponding author

Correspondence to Emre Yalcin.

Additional information

Responsible editor: Toon Calders.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yalcin, E. Exploring potential biases towards blockbuster items in ranking-based recommendations. Data Min Knowl Disc 36, 2033–2073 (2022). https://doi.org/10.1007/s10618-022-00860-1

