Research article
DOI: 10.1145/3456529.3456537

Algorithmic Fairness in Applied Machine Learning Contexts

Published: 13 July 2021

ABSTRACT

Fairness is a common standard in machine learning principles, ethics declarations, and best-practices statements. Fairness, however, has no single definition in machine learning. One common theme among fairness concepts is equal treatment by a machine learning model across groups. In some cases, though, it may be more fair for a model to produce different predictions or classifications for each group, in accordance with differences in their outcome rates. Equal treatment may be an appropriate standard of fairness in some circumstances but not in others. Codes of ethics and standards for machine learning offer many different suggestions about how machine learning ought to be fair. Not only is there a diversity of fairness concepts, but standards also often offer little or no guidance on how these fairness axioms should guide the real-world practice of developing and deploying machine learning models in applied settings. The context in which machine learning is applied may determine which aspects of fairness are expected or upheld. Machine learning used to shape, for example, (a) consumer loan approvals or rates, (b) job recommendations, (c) text translations, (d) credit decisions, and (e) criminal justice decisions may each call for a different conception of fairness. Going beyond an expectation of “equal treatment”, practitioners in each of these areas might think about fairness differently.
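The tension the abstract describes, between equal treatment across groups and treatment that tracks differing outcome rates, can be made concrete with two standard group-fairness measures. The sketch below uses entirely hypothetical data and helper names (nothing here comes from the article): a demographic-parity gap captures "equal treatment" by comparing positive-prediction rates, while an equal-opportunity gap compares true-positive rates, a criterion that permits different prediction rates when groups have different base rates.

```python
# Minimal sketch (hypothetical data and function names): contrasting two
# group-fairness notions on toy binary predictions for two groups, A and B.

def positive_rate(preds):
    """Fraction of predictions that are positive (1)."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Among truly positive cases, the fraction predicted positive."""
    hits = [p for p, y in zip(preds, labels) if y == 1]
    return sum(hits) / len(hits)

# Hypothetical model outputs and ground-truth outcomes for each group.
preds_a, labels_a = [1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 0, 0]
preds_b, labels_b = [1, 0, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]

# "Equal treatment" (demographic parity): positive-prediction rates match.
parity_gap = positive_rate(preds_a) - positive_rate(preds_b)

# Outcome-rate-aware fairness (equal opportunity): true-positive rates match,
# which allows different positive rates when the groups' base rates differ.
tpr_gap = (true_positive_rate(preds_a, labels_a)
           - true_positive_rate(preds_b, labels_b))

print(f"demographic parity gap: {parity_gap:+.3f}")
print(f"equal opportunity gap:  {tpr_gap:+.3f}")
```

A model can close one gap while widening the other; which gap a practitioner should minimize is exactly the context-dependent choice the abstract argues standards rarely resolve.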


Published in

ICCDA '21: Proceedings of the 2021 5th International Conference on Compute and Data Analysis
February 2021, 194 pages
ISBN: 9781450389112
DOI: 10.1145/3456529

    Copyright © 2021 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States



Qualifiers: research article; refereed limited
