
Utilizing Implicit Feedback for User Mainstreaminess Evaluation and Bias Detection in Recommender Systems

  • Conference paper
Advances in Bias and Fairness in Information Retrieval (BIAS 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1840)


Abstract

Bias and fairness issues have attracted considerable attention in recommender systems. From the user’s perspective, the intention to stay or leave depends heavily on satisfaction with the recommendations received. Mainstream bias refers to the phenomenon that recommendation algorithms favor mainstream users and deliver inferior results to non-mainstream users, which harms user fairness. In recent work, Zhu and Caverlee [24] explored several approaches to evaluating user mainstreaminess and demonstrated the existence of mainstream bias using implicit feedback data. However, they omitted the factor of profile size, which can greatly influence the evaluation. In this paper, we complete the data preprocessing steps missing from the original paper and reproduce the evaluation experiments. In particular, we redesign the setup and present a simple, intuitive, and highly interpretable evaluation approach. Experimental results show that our method measures users’ mainstreaminess more effectively than the alternatives. Finally, we confirm that mainstream bias is widespread and assess its impact on recommendation quality. Our source code and results are available at https://github.com/Xaiver97/mainstream_evaluation.
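For readers less familiar with the notion, the sketch below shows one common similarity-based way to quantify user mainstreaminess from implicit feedback: a user's score is the cosine similarity between their interaction profile and the population-average profile. This is an illustrative assumption for exposition, not necessarily the measure proposed in this paper or in Zhu and Caverlee [24]; the function name mainstreaminess_scores and the binary user-item matrix interactions are hypothetical.

import numpy as np

def mainstreaminess_scores(interactions: np.ndarray) -> np.ndarray:
    """Illustrative similarity-based mainstreaminess sketch (an assumption, not the paper's method).

    interactions: binary user-item matrix (n_users x n_items) built from implicit
    feedback (1 = observed interaction, 0 = none). Returns one score per user:
    the cosine similarity between the user's interaction vector and the
    population-average ("mainstream") vector.
    """
    # The average interaction vector over all users approximates mainstream taste.
    mainstream = interactions.mean(axis=0)

    # Cosine similarity of each user profile with the mainstream vector.
    user_norms = np.linalg.norm(interactions, axis=1)
    denom = user_norms * np.linalg.norm(mainstream)
    denom[denom == 0] = 1.0  # guard against empty profiles
    return (interactions @ mainstream) / denom

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy = (rng.random((5, 20)) > 0.7).astype(float)  # toy implicit-feedback matrix
    print(mainstreaminess_scores(toy))

Under such a formulation, a low score flags a potentially non-mainstream user whose recommendation quality is worth examining separately, which is the kind of per-group analysis used to surface mainstream bias.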


Notes

  1. https://grouplens.org/datasets/movielens/1m/.
  2. http://ocelma.net/MusicRecommendationDataset/lastfm-360K.html.
  3. http://www.trustlet.org/epinions.html.
  4. http://www.yelp.com/dataset_challenge/.
  5. http://www2.informatik.uni-freiburg.de/~cziegler/BX/.
  6. https://grouplens.org/datasets/movielens/20M/.
  7. https://github.com/Zziwei/Measuring-Mitigating-Mainstream-Bias.

References

  1. Abdollahpouri, H., Mansoury, M., Burke, R., Mobasher, B.: The unfairness of popularity bias in recommendation. In: RecSys Workshop on Recommendation in Multistakeholder Environments (RMSE) (2019)

  2. Abdollahpouri, H., Mansoury, M., Burke, R., Mobasher, B., Malthouse, E.: User-centered evaluation of popularity bias in recommender systems. In: Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (UMAP ’21), pp. 119–129. Association for Computing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3450613.3456821

  3. Anderson, A., Kumar, R., Tomkins, A., Vassilvitskii, S.: The dynamics of repeat consumption. In: Proceedings of the 23rd International Conference on World Wide Web (WWW ’14), pp. 419–430. Association for Computing Machinery, New York, NY, USA (2014). https://doi.org/10.1145/2566486.2568018

  4. Borges, R., Stefanidis, K.: On measuring popularity bias in collaborative filtering data. In: Proceedings of the Workshops of the EDBT/ICDT 2020 Joint Conference. CEUR Workshop Proceedings, vol. 2578. CEUR-WS.org (2020)

  5. Breunig, M.M., Kriegel, H.P., Ng, R.T., Sander, J.: LOF: identifying density-based local outliers. SIGMOD Rec. 29(2), 93–104 (2000). https://doi.org/10.1145/335191.335388

  6. Ferrari Dacrema, M., Cremonesi, P., Jannach, D.: Are we really making much progress? A worrying analysis of recent neural recommendation approaches. In: Proceedings of the 13th ACM Conference on Recommender Systems (RecSys ’19), pp. 101–109. Association for Computing Machinery, New York, NY, USA (2019). https://doi.org/10.1145/3298689.3347058

  7. He, X., Liao, L., Zhang, H., Nie, L., Hu, X., Chua, T.S.: Neural collaborative filtering. In: Proceedings of the 26th International Conference on World Wide Web (WWW ’17), pp. 173–182. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE (2017). https://doi.org/10.1145/3038912.3052569

  8. Koren, Y., Bell, R., Volinsky, C.: Matrix factorization techniques for recommender systems. Computer 42(8), 30–37 (2009). https://doi.org/10.1109/MC.2009.263

  9. Kowald, D., Lacic, E.: Popularity bias in collaborative filtering-based multimedia recommender systems. In: Boratto, L., Faralli, S., Marras, M., Stilo, G. (eds.) Advances in Bias and Fairness in Information Retrieval, pp. 1–11. Springer International Publishing, Cham (2022)

  10. Kowald, D., Schedl, M., Lex, E.: The unfairness of popularity bias in music recommendation: a reproducibility study. In: Jose, J.M., Yilmaz, E., Magalhães, J., Castells, P., Ferro, N., Silva, M.J., Martins, F. (eds.) Advances in Information Retrieval, pp. 35–42. Springer International Publishing, Cham (2020)

  11. Li, R.Z., Urbano, J., Hanjalic, A.: Leave no user behind: towards improving the utility of recommender systems for non-mainstream users. In: Proceedings of the 14th ACM International Conference on Web Search and Data Mining (WSDM ’21), pp. 103–111. Association for Computing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3437963.3441769

  12. Liang, D., Krishnan, R.G., Hoffman, M.D., Jebara, T.: Variational autoencoders for collaborative filtering. In: Proceedings of the 2018 World Wide Web Conference (WWW ’18), pp. 689–698. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE (2018). https://doi.org/10.1145/3178876.3186150

  13. Mehrotra, R., McInerney, J., Bouchard, H., Lalmas, M., Diaz, F.: Towards a fair marketplace: counterfactual evaluation of the trade-off between relevance, fairness & satisfaction in recommendation systems. In: Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18), pp. 2243–2251. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3269206.3272027

  14. Naghiaei, M., Rahmani, H.A., Dehghan, M.: The unfairness of popularity bias in book recommendation. In: Boratto, L., Faralli, S., Marras, M., Stilo, G. (eds.) Advances in Bias and Fairness in Information Retrieval, pp. 69–81. Springer International Publishing, Cham (2022)

  15. Neophytou, N., Mitra, B., Stinson, C.: Revisiting popularity and demographic biases in recommender evaluation and effectiveness. In: Hagen, M., et al. (eds.) Advances in Information Retrieval, pp. 641–654. Springer International Publishing, Cham (2022)

  16. Rendle, S., Freudenthaler, C., Gantner, Z., Schmidt-Thieme, L.: BPR: Bayesian personalized ranking from implicit feedback. In: Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI ’09), pp. 452–461. AUAI Press, Arlington, Virginia, USA (2009)

  17. Rony, M.M.U., Hassan, N., Yousuf, M.: Diving deep into clickbaits: who use them to what extents in which topics with what effects? In: Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM ’17), pp. 232–239. Association for Computing Machinery, New York, NY, USA (2017). https://doi.org/10.1145/3110025.3110054

  18. Ruff, L., Vandermeulen, R., et al.: Deep one-class classification. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 80, pp. 4393–4402. PMLR (2018). https://proceedings.mlr.press/v80/ruff18a.html

  19. Sahebi, S., Brusilovsky, P.: Cross-domain collaborative recommendation in a cold-start context: the impact of user profile size on the quality of recommendation. In: Carberry, S., Weibelzahl, S., Micarelli, A., Semeraro, G. (eds.) User Modeling, Adaptation, and Personalization, pp. 289–295. Springer, Berlin, Heidelberg (2013)

  20. Schedl, M., Bauer, C.: Distance- and rank-based music mainstreaminess measurement. In: Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization (UMAP ’17), pp. 364–367. Association for Computing Machinery, New York, NY, USA (2017). https://doi.org/10.1145/3099023.3099098

  21. Schedl, M., Bauer, C.: An analysis of global and regional mainstreaminess for personalized music recommender systems. J. Mobile Multimed. 14, 95–122 (2018)

  22. Schedl, M., Hauger, D.: Tailoring music recommendations to users by considering diversity, mainstreaminess, and novelty. In: Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’15), pp. 947–950. Association for Computing Machinery, New York, NY, USA (2015). https://doi.org/10.1145/2766462.2767763

  23. Yao, S., Huang, B.: Beyond parity: fairness objectives for collaborative filtering. In: Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS ’17), pp. 2925–2934. Curran Associates Inc., Red Hook, NY, USA (2017)

  24. Zhu, Z., Caverlee, J.: Fighting mainstream bias in recommender systems via local fine tuning. In: Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining (WSDM ’22), pp. 1497–1506. Association for Computing Machinery, New York, NY, USA (2022). https://doi.org/10.1145/3488560.3498427

  25. Zou, L., Xia, L., Gu, Y., Zhao, X., Liu, W., Huang, J.X., Yin, D.: Neural interactive collaborative filtering. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’20), pp. 749–758. Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3397271.3401181


Acknowledgement

We thank all reviewers for their insightful comments and our colleagues for their great efforts. This work is supported by the Science and Technology Department of Sichuan Province under Grant No. 2021YFS0399 and the Grid Planning and Research Center of Guangdong Power Grid Co. under Grant No. 037700KK52220042 (GDKJXM20220906).

Author information

Corresponding author

Correspondence to Haixian Zhang.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, K., Xie, M., Zhang, Y., Zhang, H. (2023). Utilizing Implicit Feedback for User Mainstreaminess Evaluation and Bias Detection in Recommender Systems. In: Boratto, L., Faralli, S., Marras, M., Stilo, G. (eds) Advances in Bias and Fairness in Information Retrieval. BIAS 2023. Communications in Computer and Information Science, vol 1840. Springer, Cham. https://doi.org/10.1007/978-3-031-37249-0_4

  • DOI: https://doi.org/10.1007/978-3-031-37249-0_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-37248-3

  • Online ISBN: 978-3-031-37249-0

  • eBook Packages: Computer Science, Computer Science (R0)
