Research Article
DOI: 10.1145/3465416.3483291

Breaking Taboos in Fair Machine Learning: An Experimental Study

Published: 04 November 2021

Abstract

Many scholars, engineers, and policymakers believe that algorithmic fairness requires disregarding information about certain characteristics of individuals, such as their race or gender. Often, the mandate to “blind” algorithms in this way is conveyed as an unconditional ethical imperative—a minimal requirement of fair treatment—and any contrary practice is assumed to be morally and politically untenable. However, in some circumstances, prohibiting algorithms from considering information about race or gender can in fact lead to worse outcomes for racial minorities and women, complicating the rationale for blinding. In this paper, we conduct a series of randomized studies to investigate attitudes toward blinding algorithms among the general public as well as among computer scientists and professional lawyers. We find, first, that people are generally averse to the use of race and gender in algorithmic determinations of “pretrial risk”—the risk that criminal defendants pose to the public if released while awaiting trial. We find, however, that this preference for blinding shifts in response to a relatively mild intervention. In particular, we show that support for the use of race and gender in algorithmic decision-making increases substantially after respondents read a short passage about the possibility that blinding could lead to higher detention rates for Black and female defendants, respectively. Similar effect sizes are observed among the general public, computer scientists, and professional lawyers. These findings suggest that, while many respondents attest that they prefer blind algorithms, their preference is not based on an absolute principle. Rather, blinding is perceived as a way to ensure better outcomes for members of marginalized groups. Accordingly, in circumstances where blinding serves to disadvantage marginalized groups, respondents no longer view the exclusion of protected characteristics as a moral imperative, and the use of such information may become politically viable.
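To make the mechanism behind this concern concrete, the following is a minimal, hypothetical sketch (in Python; it is not code from the paper) of how blinding a pretrial risk model can raise detention rates for the very group the blinding is meant to protect. Suppose that, conditional on an observable risk factor such as prior arrests, members of one group reoffend at a lower rate; a blinded model must then assign both groups the pooled risk score. All quantities below (the synthetic data, the group effect, the top-30% detention rule) are illustrative assumptions.

```python
# A minimal, self-contained sketch of how "blinding" a risk model can backfire.
# This is NOT the paper's code; the data, coefficients, and the top-30%
# detention rule are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic defendants: one legitimate risk factor (e.g., prior arrests) and a
# protected attribute. Assume the protected group (group == 1) reoffends at a
# lower rate conditional on priors.
priors = rng.poisson(2.0, size=n)
group = rng.integers(0, 2, size=n)           # 1 = protected group
p_reoffend = 1 / (1 + np.exp(-(-1.5 + 0.5 * priors - 1.0 * group)))
reoffend = rng.random(n) < p_reoffend

X_aware = np.column_stack([priors, group])   # this model may use the attribute
X_blind = priors.reshape(-1, 1)              # the blinded model may not

aware = LogisticRegression().fit(X_aware, reoffend)
blind = LogisticRegression().fit(X_blind, reoffend)

# Detain (roughly) the highest-risk 30% of defendants under each model and
# compare group-level detention rates.
for name, model, X in [("aware", aware, X_aware), ("blind", blind, X_blind)]:
    risk = model.predict_proba(X)[:, 1]
    detained = risk >= np.quantile(risk, 0.70)
    print(f"{name}: protected group {detained[group == 1].mean():.1%} detained, "
          f"others {detained[group == 0].mean():.1%}")
```

Under these assumptions, the attribute-aware model detains protected-group defendants at a markedly lower rate than the blind model, which cannot distinguish a lower-risk defendant from a higher-risk one with the same record. This illustrates the possibility, described above, that blinding could lead to higher detention rates for Black and female defendants.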


Published In

EAAMO '21: Proceedings of the 1st ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization
October 2021, 207 pages
ISBN: 9781450385534
DOI: 10.1145/3465416

Publisher

Association for Computing Machinery

New York, NY, United States




Cited By

  • (2024) Procedural Fairness as Stepping Stone for Successful Implementation of Algorithmic Decision-Making in Public Administration: Review and Outlook. AUC IURIDICA 70(2), 85–99. DOI: 10.14712/23366478.2024.24. Online publication date: 23-May-2024.
  • (2024) A Critical Survey on Fairness Benefits of Explainable AI. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1579–1595. DOI: 10.1145/3630106.3658990. Online publication date: 3-Jun-2024.
  • (2024) Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–18. DOI: 10.1145/3613904.3642621. Online publication date: 11-May-2024.
  • (2023) Designing equitable algorithms. Nature Computational Science 3(7), 601–610. DOI: 10.1038/s43588-023-00485-4. Online publication date: 24-Jul-2023.
  • (2023) Fairness of Academic Performance Prediction for the Distribution of Support Measures for Students: Differences in Perceived Fairness of Distributive Justice Norms. Technology, Knowledge and Learning 29(2), 1079–1107. DOI: 10.1007/s10758-023-09698-y. Online publication date: 11-Nov-2023.
  • (2022) Enhancing Fairness Perception – Towards Human-Centred AI and Personalized Explanations: Understanding the Factors Influencing Laypeople’s Fairness Perceptions of Algorithmic Decisions. International Journal of Human–Computer Interaction 39(7), 1455–1482. DOI: 10.1080/10447318.2022.2095705. Online publication date: 19-Jul-2022.
  • (2022) An Algorithmic Assessment of Parole Decisions. Journal of Quantitative Criminology 40(1), 151–188. DOI: 10.1007/s10940-022-09563-8. Online publication date: 14-Dec-2022.
  • (2021) Bias, awareness, and ignorance in deep-learning-based face recognition. AI and Ethics 2(3), 509–522. DOI: 10.1007/s43681-021-00108-6. Online publication date: 27-Oct-2021.
