
A Study of Pre-processing Fairness Intervention Methods for Ranking People

  • Conference paper
  • Part of: Advances in Information Retrieval (ECIR 2024)

Abstract

Fairness interventions are hard to use in practice when ranking people, due to legal constraints that limit access to sensitive information. Pre-processing fairness interventions, however, can be used in practice to create fairer training data that encourages the model to generate fair predictions without access to sensitive information during inference. Little is known about the performance of pre-processing fairness interventions in a recruitment setting. To simulate a real scenario, we train a ranking model on pre-processed representations, while access to sensitive information is limited during inference. We evaluate pre-processing fairness intervention methods in terms of both individual fairness and group fairness. On two real-world datasets, the pre-processing methods improve the diversity of rankings with respect to gender, while individual fairness is not affected. Finally, we discuss the advantages and disadvantages of using pre-processing fairness interventions in practice when ranking people.
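The pipeline described in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's actual method: the data is synthetic, and the linear residualization in `preprocess` is a crude stand-in for learned fair representations. It shows the key property of the setup, that the sensitive attribute is used only to transform the training data, never at inference time, and closes with a simple group-exposure check on the resulting ranking.

```python
import numpy as np

# Hypothetical toy data: candidate features correlated with a binary
# sensitive attribute (e.g., gender) that is available at training time only.
rng = np.random.default_rng(0)
n, d = 200, 3
s = rng.integers(0, 2, size=n)                    # sensitive attribute (train only)
X = rng.normal(size=(n, d)) + 0.8 * s[:, None]    # features that leak the attribute
y = X @ np.array([1.0, 0.5, 0.2]) + rng.normal(scale=0.1, size=n)  # relevance


def preprocess(X, s):
    """Strip the linear component of each feature explained by s,
    keeping each feature's overall mean (a crude stand-in for learned
    fair representations)."""
    S = np.column_stack([np.ones(len(s)), s.astype(float)])
    beta, *_ = np.linalg.lstsq(S, X, rcond=None)
    return X - S @ beta + S.mean(axis=0) @ beta


X_fair = preprocess(X, s)

# Train a simple linear scorer on the debiased representations; at inference
# only the features are needed, never the sensitive attribute.
w, *_ = np.linalg.lstsq(X_fair, y, rcond=None)
ranking = np.argsort(-(X_fair @ w))               # item indices, best first

# Group fairness check: mean log-discounted exposure per group.
pos = np.empty(n, dtype=int)
pos[ranking] = np.arange(n)                       # rank position of each item
exposure = 1.0 / np.log2(pos + 2)
mean_exposure = {g: exposure[s == g].mean() for g in (0, 1)}
print(mean_exposure)
```

After residualization the features carry no linear signal about the sensitive attribute, so a model trained on them cannot reconstruct it linearly; comparing `mean_exposure` across groups is one way to quantify the ranking diversity the abstract refers to.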


Notes

  1. https://www.bls.gov/soc/.
  2. https://www.xing.com/.
  3. https://github.com/MilkaLichtblau/xing_dataset/.
  4. https://github.com/ClaraRus/A-Study-of-Pre-processing-Fairness-Intervention-Methods-for-Ranking-People.


Acknowledgments

We thank our reviewers for valuable feedback. This research was supported by the FINDHR (Fairness and Intersectional Non-Discrimination in Human Recommendation) project, which received funding from the European Union’s Horizon Europe research and innovation program under grant agreement No. 101070212; by the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research (https://hybrid-intelligence-centre.nl); and by project LESSEN (project number NWA.1389.20.183) of the research program NWA ORC 2020/21, which is (partly) financed by the Dutch Research Council (NWO).

All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.

Author information

Correspondence to Clara Rus.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Rus, C., Yates, A., de Rijke, M. (2024). A Study of Pre-processing Fairness Intervention Methods for Ranking People. In: Goharian, N., et al. Advances in Information Retrieval. ECIR 2024. Lecture Notes in Computer Science, vol 14611. Springer, Cham. https://doi.org/10.1007/978-3-031-56066-8_26


  • DOI: https://doi.org/10.1007/978-3-031-56066-8_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-56065-1

  • Online ISBN: 978-3-031-56066-8

  • eBook Packages: Computer Science (R0)
