
Breaking Taboos in Fair Machine Learning: An Experimental Study

Published: 04 November 2021

ABSTRACT

Many scholars, engineers, and policymakers believe that algorithmic fairness requires disregarding information about certain characteristics of individuals, such as their race or gender. Often, the mandate to “blind” algorithms in this way is conveyed as an unconditional ethical imperative—a minimal requirement of fair treatment—and any contrary practice is assumed to be morally and politically untenable. However, in some circumstances, prohibiting algorithms from considering information about race or gender can in fact lead to worse outcomes for racial minorities and women, complicating the rationale for blinding. In this paper, we conduct a series of randomized studies to investigate attitudes toward blinding algorithms among both the general public and computer scientists and professional lawyers. We find, first, that people are generally averse to the use of race and gender in algorithmic determinations of “pretrial risk”—the risk that criminal defendants pose to the public if released while awaiting trial. We find, however, that this preference for blinding shifts in response to a relatively mild intervention. In particular, we show that support for the use of race and gender in algorithmic decision-making increases substantially after respondents read a short passage about the possibility that blinding could lead to higher detention rates for Black and female defendants, respectively. Similar effect sizes are observed among the general public, computer scientists, and professional lawyers. These findings suggest that, while many respondents attest that they prefer blind algorithms, their preference is not based on an absolute principle. Rather, blinding is perceived as a way to ensure better outcomes for members of marginalized groups. Accordingly, in circumstances where blinding serves to disadvantage marginalized groups, respondents no longer view the exclusion of protected characteristics as a moral imperative, and the use of such information may become politically viable.


Published in
    EAAMO '21: Proceedings of the 1st ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization
    October 2021
    207 pages
    ISBN:9781450385534
    DOI:10.1145/3465416

    Copyright © 2021 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

