DOI: 10.1145/3351095.3372858

Interventions for ranking in the presence of implicit bias

Published: 27 January 2020

Abstract

Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member of a particular social group (e.g., defined by gender or race). Studies on implicit bias have shown that these unconscious stereotypes can lead to adverse outcomes in various social contexts, such as job screening, teaching, or policing. Recently, [34] considered a mathematical model for implicit bias and showed the effectiveness of the Rooney Rule as a constraint for improving the utility of the outcome in certain cases of the subset selection problem. Here we study the problem of designing interventions for the generalization of subset selection, ranking, which requires outputting an ordered set and is a central primitive in various social and computational contexts. We present a family of simple and interpretable constraints and show that they can optimally mitigate implicit bias for a generalization of the model studied in [34]. Subsequently, we prove that under natural distributional assumptions on the utilities of items, simple, Rooney Rule-like constraints can, surprisingly, recover almost all of the utility lost due to implicit bias. Finally, we augment our theoretical results with empirical findings on real-world distributions from the IIT-JEE (2009) dataset and the Semantic Scholar Research corpus.
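
To make the setup concrete, the sketch below (hypothetical, not the algorithm from the paper) illustrates the kind of intervention the abstract describes: observed utilities of one group are scaled down by a multiplicative implicit-bias factor, in the spirit of the model of [34], and a Rooney Rule-like lower bound on the number of group-B items in every top-k prefix is enforced while ranking by observed utility; the true utility of the resulting ranking is then compared via DCG. All names and parameters here (beta, alpha, dcg, greedy_constrained_ranking) are illustrative assumptions, not the paper's notation.

import math
import random

def dcg(utilities):
    # Discounted cumulative gain of a list of true utilities in ranked order.
    return sum(u / math.log2(pos + 2) for pos, u in enumerate(utilities))

def greedy_constrained_ranking(items, alpha, n_positions):
    # Rank by *observed* (bias-affected) utility, but require at least
    # floor(alpha * k) group-B items in every top-k prefix
    # (a Rooney Rule-like lower-bound constraint).
    pool_a = sorted((x for x in items if x["group"] == "A"), key=lambda x: -x["observed"])
    pool_b = sorted((x for x in items if x["group"] == "B"), key=lambda x: -x["observed"])
    ranking, b_count = [], 0
    for k in range(1, n_positions + 1):
        deficit = math.floor(alpha * k) - b_count  # B items still needed in this prefix
        if (deficit > 0 and pool_b) or not pool_a:
            ranking.append(pool_b.pop(0)); b_count += 1
        elif pool_b and pool_b[0]["observed"] >= pool_a[0]["observed"]:
            ranking.append(pool_b.pop(0)); b_count += 1
        else:
            ranking.append(pool_a.pop(0))
    return ranking

random.seed(0)
beta = 0.5  # hypothetical implicit-bias factor: group-B utilities are observed scaled by beta
items = []
for j in range(50):
    group = "A" if j % 2 == 0 else "B"
    true_u = random.random()
    items.append({"group": group, "true": true_u,
                  "observed": true_u if group == "A" else beta * true_u})

top = 10
unconstrained = sorted(items, key=lambda x: -x["observed"])[:top]
constrained = greedy_constrained_ranking(items, alpha=0.5, n_positions=top)
print("true DCG without intervention:   %.3f" % dcg([x["true"] for x in unconstrained]))
print("true DCG with prefix constraint: %.3f" % dcg([x["true"] for x in constrained]))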

Supplementary Material

PDF File (p369-celis-supp.pdf)
Supplemental material.

References

[1]
ACM. 2017. Statement on Algorithmic Transparency and Accountability. https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf.
[2]
Social Security Administration. 2018. Beyond the Top 1000 Names. https://www.ssa.gov/oact/babynames/limits.html.
[3]
Harold Alderman and Elizabeth M. King. 1998. Gender differences in parental investment in education. Structural Change and Economic Dynamics 9, 4 (1998), 453--468.
[4]
Abolfazl Asudeh, H. V. Jagadish, Julia Stoyanovich, and Gautam Das. 2019. Designing Fair Ranking Schemes. In SIGMOD Conference. ACM, 1259--1276.
[5]
Surender Baswana, P. P. Chakrabarti, Yashodhan Kanoria, Utkarsh Patange, and Sharat Chandran. 2019. Joint Seat Allocation 2018: An algorithmic perspective. CoRR (2019). http://arxiv.org/abs/1904.06698
[6]
Marc Bendick Jr. and Ana P. Nunes. 2012. Developing the research basis for controlling bias in hiring. Journal of Social Issues 68, 2 (2012), 238--262.
[7]
Marianne Bertrand and Sendhil Mullainathan. 2004. Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American economic review 94, 4 (2004), 991--1013.
[8]
Miranda Bogen and Aaron Rieke. 2018. Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias. https://www.upturn.org/reports/2018/hiring-algorithms/.
[9]
Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. 2013. Concentration inequalities: A nonasymptotic theory of independence. Oxford University Press.
[10]
Carlos Castillo. 2019. Fairness and Transparency in Ranking. In ACM SIGIR Forum, Vol. 52. ACM, 64--71.
[11]
Marilyn Cavicchia. 2017. How to fight implicit bias? With conscious thought, diversity expert tells NABE. (June 2017). https://www.americanbar.org/groups/bar_services/publications/bar_leader/2015-16/september-october/how-fight-implicit-bias-conscious-thought-diversity-expert-tells-nabe/
[12]
L. Elisa Celis, Amit Deshpande, Tarun Kathuria, and Nisheeth K. Vishnoi. 2016. How to be Fair and Diverse?. In Fairness, Accountability, and Transparency in Machine Learning.
[13]
L. Elisa Celis, Lingxiao Huang, Vijay Keswani, and Nisheeth K. Vishnoi. 2019. Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). ACM, New York, NY, USA, 319--328.
[14]
L. Elisa Celis, Lingxiao Huang, and Nisheeth K. Vishnoi. 2018. Multiwinner Voting with Fairness Constraints. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18. International Joint Conferences on Artificial Intelligence Organization, 144--151.
[15]
L. Elisa Celis, Sayash Kapoor, Farnood Salehi, and Nisheeth K. Vishnoi. 2019. Controlling Polarization in Personalization: An Algorithmic Framework. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). ACM, New York, NY, USA, 160--169.
[16]
L. Elisa Celis, Vijay Keswani, Damian Straszak, Amit Deshpande, Tarun Kathuria, and Nisheeth K. Vishnoi. 2018. Fair and Diverse DPP-Based Data Summarization. In ICML (Proceedings of Machine Learning Research), Vol. 80. PMLR, 715--724.
[17]
L. Elisa Celis, Anay Mehrotra, and Nisheeth K. Vishnoi. 2019. Toward Controlling Discrimination in Online Ad Auctions. In ICML (Proceedings of Machine Learning Research), Vol. 97. PMLR, 4456--4465.
[18]
L. Elisa Celis, Damian Straszak, and Nisheeth K. Vishnoi. 2018. Ranking with Fairness Constraints. In ICALP (LIPIcs), Vol. 107. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 28:1--28:15.
[19]
Brian W. Collins. 2007. Tackling unconscious bias in hiring practices: The plight of the Rooney rule. NYUL Rev. 82 (2007), 870.
[20]
Joshua Correll, Bernadette Park, Charles M. Judd, and Bernd Wittenbrink. 2007. The influence of stereotypes on decisions to shoot. European Journal of Social Psychology 37, 6 (2007), 1102--1117.
[21]
Jeffrey Dastin. 2019. Amazon scraps secret AI recruiting tool that showed bias against women. https://reut.rs/2N1dzRJ.
[22]
Jennifer L. Eberhardt and Sandy Banks. 2019. Implicit bias puts lives in jeopardy. Can mandatory training reduce the risk? https://www.latimes.com/opinion/op-ed/la-oe-eberhardt-banks-implicit-bias-training-20190712-story.html.
[23]
Robert Epstein and Ronald E. Robertson. 2015. The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. Proceedings of the National Academy of Sciences 112, 33 (2015), E4512--E4521. arXiv:http://www.pnas.org/content/112/33/E4512.full.pdf
[24]
Facebook. 2019. Managing Unconscious Bias. https://managingbias.fb.com.
[25]
Sahin Cem Geyik, Stuart Ambler, and Krishnaram Kenthapadi. 2019. Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search. In KDD. ACM, 2221--2231.
[26]
Anthony G. Greenwald and Mahzarin R Banaji. 1995. Implicit social cognition: attitudes, self-esteem, and stereotypes. Psychological review 102, 1 (1995), 4.
[27]
Anthony G. Greenwald and Linda Hamilton Krieger. 2006. Implicit bias: Scientific foundations. California Law Review 94, 4 (2006), 945--967.
[28]
The White House. 2015. Fact Sheet: President Obama Announces New Commitments from Investors, Companies, Universities, and Cities to Advance Inclusive Entrepreneurship at First-Ever White House Demo Day. (August 2015). https://obamawhitehouse.archives.gov/the-press-office/2015/08/04/fact-sheet-president-obama-announces-new-commitments-investors-companies
[29]
Lingxiao Huang, Shaofeng H.-C. Jiang, and Nisheeth K. Vishnoi. 2019. Coresets for Clustering with Fairness Constraints. In NeurIPS.
[30]
Don Hush and Clint Scovel. 2005. Concentration of the hypergeometric distribution. Statistics & probability letters 75, 2 (2005), 127--132.
[31]
Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst. 20, 4 (2002), 422--446.
[32]
JEE Team. 2011. Joint Entrance Examination (2011) Report. http://bit.do/e5iEZ.
[33]
Evangelos Kanoulas and Javed A Aslam. 2009. Empirical justification of the gain and discount function for nDCG. In Proceedings of the 18th ACM conference on Information and knowledge management. ACM, 611--620.
[34]
Jon M. Kleinberg and Manish Raghavan. 2018. Selection Problems in the Presence of Implicit Bias. In ITCS (LIPIcs), Vol. 94. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 33:1--33:17.
[35]
Caitlin Kuhlman, MaryAnn Van Valkenburg, and Elke A. Rundensteiner. 2019. FARE: Diagnostics for Fair Ranking using Pairwise Error Metrics. In WWW. ACM, 2936--2942.
[36]
Mr. Rajeev Kumar. 2009. RTI Complaint. Decision No. CIC/SG/C/2009/001088/5392, Complaint No. CIC/SG/C/2009/001088.
[37]
Karen S Lyness and Madeline E Heilman. 2006. When fit is fundamental: performance evaluations and promotions of upper-level female and male managers. Journal of Applied Psychology 91, 4 (2006), 777.
[38]
Kay Manning. 2018. As Starbucks gears up for training, here's why 'implicit bias' can be good, bad or very bad. https://www.chicagotribune.com/lifestyles/sc-fam-implicit-bias-0529-story.html.
[39]
R. McGregor-Smith. 2017. Race in the Workplace: The McGregor-Smith Review. (2017). https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/594336/race-in-workplace-mcgregor-smith-review.pdf.
[40]
Government of India Ministry of Home Affairs. 2011. CensusInfo India 2011: Final Population Totals. http://censusindia.gov.in/2011census/censusinfodashboard/index.html.
[41]
Corinne A Moss-Racusin, John F Dovidio, Victoria L Brescoll, Mark J Graham, and Jo Handelsman. 2012. Science faculty's subtle gender biases favor male students. Proceedings of the National Academy of Sciences 109, 41 (2012), 16474--16479.
[42]
Cecilia Munoz, Megan Smith, and D.J. Patil. 2016. Big data: A report on algorithmic systems, opportunity, and civil rights. Executive Office of the President. The White House (2016).
[43]
Harikrishna Narasimhan, Andrew Cotter, Maya R. Gupta, and Serena Wang. 2019. Pairwise Fairness for Ranking and Regression. CoRR abs/1906.05330 (2019). arXiv:1906.05330 http://arxiv.org/abs/1906.05330
[44]
Mike Noon. 2018. Pointless diversity training: unconscious bias, new racism and agency. Work, Employment and Society 32, 1 (2018), 198--209.
[45]
Jason A Okonofua and Jennifer L Eberhardt. 2015. Two strikes: Race and the disciplining of young students. Psychological science 26, 5 (2015), 617--624.
[46]
Christina Passariello. 2016. Tech Firms Borrow Football Play to Increase Hiring of Women. (September 2016). https://www.wsj.com/articles/tech-firms-borrow-football-play-to-increase-hiring-of-women-1474963562
[47]
B. Keith Payne, Heidi A. Vuletich, and Jazmin L. Brown-Iannuzzi. 2019. Historical roots of implicit bias in slavery. Proceedings of the National Academy of Sciences 116, 24 (2019), 11693--11698. arXiv:https://www.pnas.org/content/116/24/11693.full.pdf
[48]
Melody S Sadler, Joshua Correll, Bernadette Park, and Charles M Judd. 2012. The world is not black and white: Racial bias in the decision to shoot in a multiethnic context. Journal of Social Issues 68, 2 (2012), 286--313.
[49]
Piotr Sapiezynski, Wesley Zeng, Ronald E. Robertson, Alan Mislove, and Christo Wilson. 2019. Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists. CoRR abs/1901.10437 (2019). arXiv:1901.10437 http://arxiv.org/abs/1901.10437
[50]
Deepa Seetharaman. 2015. Facebook Is Testing the 'Rooney Rule' Approach to Hiring. The Wall Street Journal (June 2015). https://blogs.wsj.com/digits/2015/06/17/facebook-testing-rooney-rule-approach-to-hiring/
[51]
Ashudeep Singh and Thorsten Joachims. 2018. Fairness of Exposure in Rankings. In KDD. ACM, 2219--2228.
[52]
Eric Luis Uhlmann and Geoffrey L Cohen. 2005. Constructed criteria: Redefining merit to justify discrimination. Psychological Science 16, 6 (2005), 474--480.
[53]
Linda Van den Bergh, Eddie Denessen, Lisette Hornstra, Marinus Voeten, and Rob W Holland. 2010. The implicit prejudiced attitudes of teachers: Relations to teacher expectations and the ethnic achievement gap. American Educational Research Journal 47, 2 (2010), 497--527.
[54]
Joseph Walker. 2012. Meet the New Boss: Big Data. https://www.wsj.com/articles/SB10000872396390443890304578006252019616768.
[55]
Gregory M. Walton and Steven J. Spencer. 2009. Latent ability: Grades and test scores systematically underestimate the intellectual ability of negatively stereotyped students. Psychological Science 20, 9 (2009), 1132--1139.
[56]
Yining Wang, Liwei Wang, Yuanzhi Li, Di He, Wei Chen, and Tie-Yan Liu. 2013. A theoretical analysis of NDCG ranking measures. In Proceedings of the 26th annual conference on learning theory (COLT 2013), Vol. 8. 6.
[57]
Christine Wenneras and Agnes Wold. 2001. Nepotism and sexism in peer-review. Women, science and technology: A reader in feminist science studies (2001), 46--52.
[58]
Bernard E. Whitley Jr. and Mary E. Kite. 2016. Psychology of prejudice and discrimination. Routledge.
[59]
Joan C Williams. 2014. Double jeopardy? An empirical study with implications for the debates over implicit bias and intersectionality. Harvard Journal of Law & Gender 37 (2014), 185.
[60]
Rachel Williams. 2013. Why girls in India are still missing out on the education they need. https://www.theguardian.com/education/2013/mar/11/indian-children-education-opportunities.
[61]
Kathleen Woodhouse. 2017. Implicit Bias - Is It Really? Forbes (December 2017). http://bit.do/implicitbiasforbes.
[62]
Ke Yang and Julia Stoyanovich. 2017. Measuring Fairness in Ranked Outputs. In Proceedings of the 29th International Conference on Scientific and Statistical Database Management, Chicago, IL, USA, June 27--29, 2017. ACM, 22:1--22:6.
[63]
Meike Zehlike, Francesco Bonchi, Carlos Castillo, Sara Hajian, Mohamed Megahed, and Ricardo A. Baeza-Yates. 2017. FA*IR: A Fair Top-k Ranking Algorithm. In CIKM. ACM, 1569--1578.
[64]
Colin A Zestcott, Irene V Blair, and Jeff Stone. 2016. Examining the presence, consequences, and reduction of implicit bias in health care: A narrative review. Group Processes & Intergroup Relations 19, 4 (2016), 528--542.


Published In

FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
January 2020
895 pages
ISBN:9781450369367
DOI:10.1145/3351095
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. algorithmic fairness
  2. implicit bias
  3. interventions
  4. ranking

Qualifiers

  • Research-article

Conference

FAT* '20
