Modular Debiasing of Latent User Representations in Prototype-Based Recommender Systems

  • Conference paper
  • In: Machine Learning and Knowledge Discovery in Databases. Research Track (ECML PKDD 2024)

Abstract

Recommender Systems (RSs) may inadvertently perpetuate biases based on protected attributes like gender, religion, or ethnicity. Left unaddressed, these biases can lead to unfair system behavior and privacy concerns. Interpretable RS models provide a promising avenue for understanding and mitigating such biases. In this work, we propose a novel approach to debias interpretable RS models by introducing user-specific scaling weights to the interpretable user representations of prototype-based RSs. This reduces the influence of the protected attributes on the RS's prediction while preserving recommendation utility. By decoupling the scaling weights from the original representations, users can control the degree of invariance of recommendations to their protected characteristics. Moreover, by defining distinct sets of weights for each attribute, the user can further specify which attributes the recommendations should be agnostic to. We apply our method to ProtoMF, a state-of-the-art prototype-based RS model that represents users by their similarities to prototypes. We employ two debiasing strategies to learn the scaling weights and conduct experiments on the ML-1M and LFM2B-DB datasets, aiming to make the user representations agnostic to age and gender. The results show that our approach effectively reduces the influence of the protected attributes on the representations in both datasets, demonstrating flexibility in bias mitigation, while only marginally affecting recommendation quality. Finally, we assess the effects of the debiasing weights and provide qualitative evidence, focusing on movie recommendations, of genre patterns identified by ProtoMF that correlate with specific genders.
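To make the mechanism described in the abstract concrete, below is a minimal sketch of how user-specific scaling weights could sit on top of a ProtoMF-style user representation. This is an illustration under assumptions, not the authors' implementation: the class names (ProtoUserEncoder, AttributeScalers), the cosine-similarity choice, and the per-attribute embedding tables are hypothetical, and the actual ProtoMF similarity function and training procedure may differ.

```python
# A minimal sketch of the scaling-weight idea from the abstract, NOT the
# authors' code. Names, the cosine similarity, and the training strategy
# suggested in comments are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProtoUserEncoder(nn.Module):
    """ProtoMF-style user tower: represents each user by their
    similarities to a set of learned prototypes."""

    def __init__(self, n_users: int, dim: int, n_protos: int):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.prototypes = nn.Parameter(torch.randn(n_protos, dim))

    def forward(self, user_ids: torch.Tensor) -> torch.Tensor:
        u = self.user_emb(user_ids)                      # (B, dim)
        # Similarity of each user to each prototype -> (B, n_protos)
        return F.cosine_similarity(
            u.unsqueeze(1), self.prototypes.unsqueeze(0), dim=-1
        )


class AttributeScalers(nn.Module):
    """One set of user-specific scaling weights per protected attribute.
    Because the weights are decoupled from the base representation, each
    user can choose at inference time which attributes the
    recommendations should be agnostic to."""

    def __init__(self, n_users: int, n_protos: int,
                 attributes=("gender", "age")):
        super().__init__()
        self.weights = nn.ModuleDict(
            {a: nn.Embedding(n_users, n_protos) for a in attributes}
        )
        for emb in self.weights.values():
            nn.init.ones_(emb.weight)  # identity scaling before training

    def forward(self, proto_sims, user_ids, active=()):
        out = proto_sims
        for a in active:               # attributes selected by the user
            out = out * self.weights[a](user_ids)
        return out


# Usage: scale away gender-related structure for a batch of users
# (6040 is the ML-1M user count; other sizes are arbitrary).
enc = ProtoUserEncoder(n_users=6040, dim=64, n_protos=32)
scalers = AttributeScalers(n_users=6040, n_protos=32)
user_ids = torch.tensor([0, 1, 2])
sims = enc(user_ids)                   # interpretable representation
debiased = scalers(sims, user_ids, active=("gender",))
```

In training, the scaled representation would feed both the recommendation loss and a debiasing objective on the protected attribute; the paper mentions two debiasing strategies for learning the weights, and an adversarial attribute classifier trained through gradient reversal (assumed here, not confirmed by this page) is one common instantiation.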


Notes

  1. We consider majority vs. all others subsets for non-binary attributes.

  2. https://grouplens.org/datasets/movielens/1m/.

  3. http://www.cp.jku.at/datasets/LFM-2b/.

  4. Both datasets provide gender in binary form, neglecting nuanced gender definitions.

  5. \(\lambda = 1\) on LFM2B-DB; \(\lambda = 5\) and \(\lambda = 10\) on ML-1M for gender and age, respectively (see the sketch below this list).
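This page never defines \(\lambda\); a plausible reading, consistent with common debiasing setups and stated here as an assumption rather than the paper's confirmed formulation, is that it weights the debiasing term against the recommendation loss:

\[
\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{rec}} + \lambda \, \mathcal{L}_{\text{debias}}
\]

Under this reading, larger \(\lambda\) enforces stronger invariance to the protected attribute, potentially at some cost in recommendation utility.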


Acknowledgments

This research was funded in whole or in part by the Austrian Science Fund (FWF): P36413, P33526, and DFH-23, and by the State of Upper Austria and the Federal Ministry of Education, Science, and Research through grants LIT-2021-YOU-215 and LIT-2020-9-SEE-113.

Author information

Corresponding author

Correspondence to Alessandro B. Melchiorre.


Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 112 KB)


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Melchiorre, A.B., Masoudian, S., Kumar, D., Schedl, M. (2024). Modular Debiasing of Latent User Representations in Prototype-Based Recommender Systems. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol. 14941. Springer, Cham. https://doi.org/10.1007/978-3-031-70341-6_4

  • DOI: https://doi.org/10.1007/978-3-031-70341-6_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70340-9

  • Online ISBN: 978-3-031-70341-6

  • eBook Packages: Computer Science (R0)
