Abstract
Recommender Systems (RSs) may inadvertently perpetuate biases based on protected attributes such as gender, religion, or ethnicity. Left unaddressed, these biases can lead to unfair system behavior and privacy concerns. Interpretable RS models provide a promising avenue for understanding and mitigating such biases. In this work, we propose a novel approach to debiasing interpretable RS models by introducing user-specific scaling weights applied to the interpretable user representations of prototype-based RSs. These weights reduce the influence of the protected attributes on the RS's predictions while preserving recommendation utility. Because the scaling weights are decoupled from the original representations, users can control the degree to which their recommendations are invariant to their protected characteristics. Moreover, by defining a distinct set of weights for each attribute, users can further specify which attributes the recommendations should be agnostic to. We apply our method to ProtoMF, a state-of-the-art prototype-based RS model that represents users by their similarities to a set of prototypes. We employ two debiasing strategies to learn the scaling weights and conduct experiments on the ML-1M and LFM2B-DB datasets, aiming to make the user representations agnostic to age and gender. The results show that our approach effectively reduces the influence of the protected attributes on the user representations in both datasets, demonstrating flexibility in bias mitigation while only marginally affecting recommendation quality. Finally, we assess the effects of the debiasing weights and provide qualitative evidence, focusing on movie recommendations, of genre patterns identified by ProtoMF that correlate with specific genders.
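To make the mechanism described above concrete, the following is a minimal PyTorch sketch of one plausible instantiation, based only on the abstract. All class and variable names (`GradientReversal`, `DebiasedProtoUser`, `scaling`, `lam`) are illustrative rather than the authors' code; the dot-product similarity stands in for ProtoMF's actual similarity function, and only a single protected attribute and one of the two debiasing strategies (adversarial training via gradient reversal) are shown.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lam in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class DebiasedProtoUser(nn.Module):
    """Illustrative sketch, not the authors' implementation.

    The user representation is the vector of similarities between a user
    embedding and a set of prototypes (as in ProtoMF). A learnable
    scaling-weight vector multiplies this representation elementwise to
    dampen prototype dimensions that leak the protected attribute; an
    adversarial classifier trained through gradient reversal provides
    the debiasing signal.
    """

    def __init__(self, n_users, emb_dim, n_prototypes, n_attr_classes, lam=1.0):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, emb_dim))
        # One scaling vector per protected attribute; a single attribute is shown here.
        self.scaling = nn.Parameter(torch.ones(n_prototypes))
        self.adversary = nn.Linear(n_prototypes, n_attr_classes)
        self.lam = lam  # weight of the adversarial gradient (cf. note 5 below)

    def forward(self, user_ids, debias=True):
        u = self.user_emb(user_ids)                    # (batch, emb_dim)
        # Dot-product similarity stands in for ProtoMF's similarity function.
        sims = u @ self.prototypes.T                   # (batch, n_prototypes)
        rep = sims * self.scaling if debias else sims  # modular on/off switch
        adv_logits = self.adversary(GradientReversal.apply(rep, self.lam))
        return rep, adv_logits
```

Training would minimize the recommendation loss computed from `rep` plus the cross-entropy of `adv_logits` against the protected attribute; the reversed gradient drives the scaling weights to make the representation uninformative of that attribute. In the paper's modular setup, the base ProtoMF parameters would plausibly be frozen while only the scaling weights are optimized; the sketch leaves all parameters trainable for brevity.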
Notes
- 1. We consider majority vs. all others subsets for non-binary attributes.
- 2.
- 3.
- 4. Both datasets provide gender in binary form, neglecting nuanced gender definitions.
- 5. \(\lambda =1\) on LFM2B-DB, \(\lambda =5\) and \(\lambda =10\) on ML-1M for gender and age, respectively.
Acknowledgments
This research was funded in whole or in part by the Austrian Science Fund (FWF): P36413, P33526, and DFH-23, and by the State of Upper Austria and the Federal Ministry of Education, Science, and Research, through grant LIT-2021-YOU-215 and LIT-2020-9-SEE-113.
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Melchiorre, A.B., Masoudian, S., Kumar, D., Schedl, M. (2024). Modular Debiasing of Latent User Representations in Prototype-Based Recommender Systems. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol 14941. Springer, Cham. https://doi.org/10.1007/978-3-031-70341-6_4
DOI: https://doi.org/10.1007/978-3-031-70341-6_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-70340-9
Online ISBN: 978-3-031-70341-6