DOI: 10.1145/3640457.3688020
Extended Abstract

Fairness Explanations in Recommender Systems

Published: 08 October 2024

Abstract

Fairness in recommender systems is an emerging area that studies and mitigates discrimination against individuals and/or groups of individuals in recommendation engines. Mitigation strategies rely on bias detection, a non-trivial task that requires complex analysis and interventions to ensure fairness in these engines. Furthermore, fairness interventions in recommender systems involve a trade-off between fairness and the performance of the recommendation lists, impacting the user experience with potentially less accurate lists. In this context, fairness interventions with explanations have recently been proposed in the literature; they mitigate discrimination in recommendation lists while providing explainability about the recommendation process and about the impact of the fairness interventions on the outcomes. However, despite the variety of approaches, it is still not clear how these proposals compare with each other, even those that aim to mitigate the same kind of bias. In addition, the contribution of these different explainable algorithmic fairness approaches to users' fairness perceptions has not yet been explored. Given these gaps, our doctoral project aims to investigate how these explainable fairness proposals compare to each other and how they are perceived by users, in order to identify which fairness interventions and explanation strategies are most promising for increasing the transparency and fairness perceptions of recommendation lists.
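To make the fairness/accuracy trade-off mentioned above concrete, the sketch below shows a minimal greedy re-ranking that mixes predicted relevance with a bonus for under-exposed (long-tail) items. This is an illustrative toy example, not the method studied in the paper; the function name, the `lam` trade-off weight, and all scores are hypothetical.

```python
# Illustrative sketch (not the paper's method): a greedy re-ranking that
# trades off predicted relevance against exposure of long-tail items,
# the kind of fairness intervention whose accuracy cost is discussed above.

def fair_rerank(candidates, k, lam):
    """Pick k items greedily, mixing relevance with a fairness bonus.

    candidates: list of (item_id, relevance, is_long_tail) tuples.
    lam: trade-off weight in [0, 1]; lam=0 reproduces the pure
    relevance ranking, larger lam promotes long-tail items.
    """
    remaining = list(candidates)
    ranked = []
    tail_count = 0
    for _ in range(k):
        def score(c):
            _, rel, tail = c
            # Reward long-tail items while they are still
            # under-represented in the list built so far.
            bonus = 1.0 if (tail and tail_count < k // 2) else 0.0
            return (1 - lam) * rel + lam * bonus
        best = max(remaining, key=score)
        remaining.remove(best)
        ranked.append(best[0])
        if best[2]:
            tail_count += 1
    return ranked

items = [("a", 0.9, False), ("b", 0.8, False),
         ("c", 0.5, True), ("d", 0.4, True)]
print(fair_rerank(items, k=2, lam=0.0))  # pure accuracy: ['a', 'b']
print(fair_rerank(items, k=2, lam=0.6))  # fairness-weighted: ['c', 'a']
```

With `lam=0` the list is the accuracy-optimal one; raising `lam` swaps in a long-tail item at the cost of relevance, which is exactly the kind of outcome an explanation strategy would need to justify to the user.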



Published In

RecSys '24: Proceedings of the 18th ACM Conference on Recommender Systems, October 2024, 1438 pages.


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Explanations
    2. Fairness
    3. Recommender Systems

    Qualifiers

    • Extended-abstract
    • Research
    • Refereed limited


    Acceptance Rates

    Overall Acceptance Rate 254 of 1,295 submissions, 20%
