ABSTRACT
Rankings are at the core of countless modern applications and thus play a major role in many decision-making scenarios. When such rankings are produced by data-driven, machine-learning-based algorithms, the potentially harmful biases contained in the data and algorithms are likely to be reproduced and even exacerbated. This has motivated recent research on fair ranking methodologies that aim to correct such biases. Current approaches to fair ranking assume that the protected groups, i.e., the partition of the population potentially impacted by the biases, are known in advance. In a realistic scenario, however, this assumption may not hold, as different biases may induce different partitions into protected groups. Accounting for only one such partition (i.e., grouping) can still leave a ranking unfair with respect to the other possible groupings. In this paper, we therefore study the problem of designing fair ranking algorithms without knowing in advance the groupings that will later be used to assess their fairness. Our approach relies on a carefully chosen set of groupings when deriving the ranked lists, and we empirically investigate which selection strategies are the most effective. We also propose an efficient two-step greedy brute-force method that implements our strategy. As the benchmark for this study, we adopt the dataset and setting of the TREC 2019 Fair Ranking track.
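To make the idea of ranking under several candidate groupings concrete, the sketch below greedily builds a ranked list that trades off relevance against the exposure gap within each grouping, using a DCG-style position-bias model. This is a minimal illustration of the general technique, not the paper's actual method: the function name `greedy_fair_rank`, the penalty term, and the weight `lam` are all hypothetical choices made for this example.

```python
import math


def greedy_fair_rank(relevance, groupings, lam=1.0):
    """Greedily build a ranking balancing relevance against exposure
    imbalance across several candidate groupings (illustrative sketch).

    relevance: dict mapping item -> relevance score
    groupings: list of dicts, each mapping item -> group label
    lam:       weight of the fairness penalty (hypothetical knob)
    """
    items = list(relevance)
    ranking = []
    # exposure accumulated so far, per grouping and per group label
    exposure = [dict.fromkeys(set(g.values()), 0.0) for g in groupings]

    while len(ranking) < len(items):
        pos = len(ranking) + 1
        w = 1.0 / math.log2(pos + 1)  # position bias (DCG-style)
        best, best_score = None, -math.inf
        for it in items:
            if it in ranking:
                continue
            # penalty: how much placing `it` here widens the gap between
            # the most- and least-exposed group, summed over all groupings
            penalty = 0.0
            for gi, g in enumerate(groupings):
                exp = dict(exposure[gi])
                exp[g[it]] += w
                penalty += max(exp.values()) - min(exp.values())
            score = relevance[it] - lam * penalty
            if score > best_score:
                best, best_score = it, score
        ranking.append(best)
        for gi, g in enumerate(groupings):
            exposure[gi][g[best]] += w
    return ranking
```

With `lam=0` the function degenerates to sorting by relevance; as `lam` grows, items from under-exposed groups (under every supplied grouping simultaneously) are promoted, which is the robustness property the multi-grouping setting asks for.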
REFERENCES
- David J. Aldous. 1985. Exchangeability and Related Topics. In École d'Été de Probabilités de Saint-Flour XIII -- 1983. 1--198.
- Asia J. Biega, Fernando Diaz, and Michael D. Ekstrand. 2019. TREC 2019 Fair Ranking Track Overview. In TREC '19.
- Asia J. Biega, Krishna P. Gummadi, and Gerhard Weikum. 2018. Equity of Attention: Amortizing Individual Fairness in Rankings. In SIGIR '18. 405--414.
- L. Elisa Celis, Damian Straszak, and Nisheeth K. Vishnoi. 2018. Ranking with Fairness Constraints. In ICALP '18. 28:1--28:15.
- Ashudeep Singh and Thorsten Joachims. 2018. Fairness of Exposure in Rankings. In KDD '18. 2219--2228.
- Ashudeep Singh and Thorsten Joachims. 2019. Policy Learning for Fairness in Ranking. arXiv:1902.04056 (2019).
- Ke Yang and Julia Stoyanovich. 2017. Measuring Fairness in Ranked Outputs. In SSDBM '17. Article 22, 6 pages.
- Meike Zehlike, Francesco Bonchi, Carlos Castillo, Sara Hajian, Mohamed Megahed, and Ricardo Baeza-Yates. 2017. FA*IR: A Fair Top-k Ranking Algorithm. In CIKM '17. 1569--1578.
Index Terms
- Multi-grouping Robust Fair Ranking