ABSTRACT
Many scholars, engineers, and policymakers believe that algorithmic fairness requires disregarding information about certain characteristics of individuals, such as their race or gender. Often, the mandate to “blind” algorithms in this way is conveyed as an unconditional ethical imperative—a minimal requirement of fair treatment—and any contrary practice is assumed to be morally and politically untenable. However, in some circumstances, prohibiting algorithms from considering information about race or gender can in fact lead to worse outcomes for racial minorities and women, complicating the rationale for blinding. In this paper, we conduct a series of randomized studies to investigate attitudes toward blinding algorithms among the general public as well as among computer scientists and professional lawyers. We find, first, that people are generally averse to the use of race and gender in algorithmic determinations of “pretrial risk”—the risk that criminal defendants pose to the public if released while awaiting trial. We find, however, that this preference for blinding shifts in response to a relatively mild intervention. In particular, we show that support for the use of race and gender in algorithmic decision-making increases substantially after respondents read a short passage about the possibility that blinding could lead to higher detention rates for Black and female defendants, respectively. Similar effect sizes are observed among the general public, computer scientists, and professional lawyers. These findings suggest that, while many respondents attest that they prefer blind algorithms, their preference is not based on an absolute principle. Rather, blinding is perceived as a way to ensure better outcomes for members of marginalized groups.
Accordingly, in circumstances where blinding serves to disadvantage marginalized groups, respondents no longer view the exclusion of protected characteristics as a moral imperative, and the use of such information may become politically viable.
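The mechanism described above, that blinding can raise detention rates for the very groups it is meant to protect, can be illustrated with a toy calculation. All numbers below are hypothetical and chosen only for illustration, not drawn from the paper: when one group poses less risk than another despite identical observed features, a blind score must pool the two groups, which can push the lower-risk group above a detention threshold that a group-aware score would keep them under.

```python
# Toy sketch (hypothetical numbers): how blinding a pretrial risk score to
# gender can increase detention rates for women.

DETAIN_THRESHOLD = 0.5  # detain if estimated misconduct risk >= 0.5

# Hypothetical base rates of pretrial misconduct for defendants who are
# identical on all observed features other than gender, and each group's
# share of that defendant pool.
true_rate = {"men": 0.6, "women": 0.3}
share = {"men": 0.8, "women": 0.2}

# Gender-aware score: may condition on gender, so it can recover each
# group's own rate.
aware_score = dict(true_rate)

# Blind score: cannot condition on gender, so the best it can do for these
# otherwise-identical defendants is the pooled rate across both groups.
pooled = sum(true_rate[g] * share[g] for g in true_rate)  # 0.54 here
blind_score = {g: pooled for g in true_rate}

for g in ("men", "women"):
    aware = "detain" if aware_score[g] >= DETAIN_THRESHOLD else "release"
    blind = "detain" if blind_score[g] >= DETAIN_THRESHOLD else "release"
    print(f"{g}: aware -> {aware}, blind -> {blind}")
```

Under the aware score, the low-risk group (women, in this sketch) falls below the threshold and is released; under the blind score, both groups inherit the pooled rate of 0.54 and are detained. The direction of the effect depends on the base rates and group shares assumed, which is precisely why the paper treats blinding as an empirical trade-off rather than an absolute principle.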