
Multi-grouping Robust Fair Ranking

Published: 25 July 2020

ABSTRACT

Rankings are at the core of countless modern applications and thus play a major role in many decision-making scenarios. When such rankings are produced by data-informed, machine learning-based algorithms, the potentially harmful biases contained in the data and algorithms are likely to be reproduced and even exacerbated. This has motivated recent research into fair ranking methodologies as a way to correct such biases. Current approaches to fair ranking assume that the protected groups, i.e., the partition of the population potentially impacted by the biases, are known. In a realistic scenario, however, this assumption may not hold, as different biases may lead to different partitions into protected groups. Accounting for only one such partition (i.e., grouping) would still leave potential unfairness with respect to the other possible groupings. In this paper, we therefore study the problem of designing fair ranking algorithms without knowing in advance the groupings that will later be used to assess their fairness. Our approach is to rely on a carefully chosen set of groupings when deriving the ranked lists, and we empirically investigate which selection strategies are the most effective. We also propose an efficient two-step greedy brute-force method in which our strategy is embedded. As a benchmark for this study, we adopt the dataset and setting of the TREC 2019 Fair Ranking track.
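To make the robustness notion in the abstract concrete, the sketch below scores a ranked list's group unfairness under several alternative groupings and reports the worst case, then builds a ranking greedily against that worst case. This is an illustrative sketch only, not the authors' algorithm: the logarithmic exposure model, the exposure-gap measure, the toy greedy construction, and the grouping labels are assumptions made for illustration, and relevance/utility (which any real fair-ranking method would trade off against fairness) is ignored.

```python
import math

# Position-based exposure, as commonly used in fair-ranking work (an assumption here).
def exposure(position):
    """Logarithmically discounted exposure for a 0-indexed rank position."""
    return 1.0 / math.log2(position + 2)

def group_exposure_gap(ranking, grouping):
    """Largest difference in average exposure between the groups of one grouping.

    `ranking` is an ordered list of item ids; `grouping` maps item id -> group label.
    """
    totals, counts = {}, {}
    for pos, item in enumerate(ranking):
        g = grouping[item]
        totals[g] = totals.get(g, 0.0) + exposure(pos)
        counts[g] = counts.get(g, 0) + 1
    averages = [totals[g] / counts[g] for g in totals]
    return max(averages) - min(averages)

def worst_case_unfairness(ranking, groupings):
    """Multi-grouping (robust) view: unfairness under the least favourable grouping."""
    return max(group_exposure_gap(ranking, g) for g in groupings)

def greedy_rerank(items, groupings):
    """Toy greedy construction: at each position, add the item that keeps the
    worst-case unfairness of the partial ranking lowest (illustration only)."""
    ranking, remaining = [], list(items)
    while remaining:
        best = min(remaining,
                   key=lambda it: worst_case_unfairness(ranking + [it], groupings))
        ranking.append(best)
        remaining.remove(best)
    return ranking

if __name__ == "__main__":
    items = ["a", "b", "c", "d"]
    # Two hypothetical groupings of the same items, e.g. induced by different biases.
    groupings = [
        {"a": "g1", "b": "g1", "c": "g2", "d": "g2"},
        {"a": "h1", "b": "h2", "c": "h1", "d": "h2"},
    ]
    ranking = greedy_rerank(items, groupings)
    print(ranking, worst_case_unfairness(ranking, groupings))
```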


Supplemental Material

3397271.3401292.mp4 (mp4, 15.7 MB)


Published in

SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2020, 2548 pages
ISBN: 9781450380164
DOI: 10.1145/3397271
Copyright © 2020 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Qualifiers

• Short paper

