research-article

Context-conscious fairness in using machine learning to make decisions

Published: 05 August 2019

Abstract

The increasing adoption of machine learning to inform decisions in employment, pricing, and criminal justice has raised concerns that algorithms may perpetuate historical and societal discrimination. Academics have responded by introducing numerous definitions of "fairness" with corresponding mathematical formalisations, proposed as one-size-fits-all, universal conditions. This paper explores three of these definitions and demonstrates their embedded ethical values and contextual limitations, using credit risk evaluation as an example use case. I propose a new approach - context-conscious fairness - that takes into account two main trade-offs: between aggregate benefit and inequity, and between accuracy and interpretability. Fairness is not a notion with absolute and binary measurement; the target outcomes and their trade-offs must be specified with respect to the relevant domain context.
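To make the idea of competing fairness formalisations concrete, here is a minimal sketch of two widely used group-fairness criteria, demographic parity and equal opportunity, applied to credit approval. These two definitions and the tiny synthetic dataset are illustrative assumptions for this page; they are not necessarily the three definitions examined in the paper.

```python
# Illustrative check of two common group-fairness definitions on
# synthetic credit-approval decisions. Groups, labels, and approvals
# below are made up purely for demonstration.

def rate(flags):
    return sum(flags) / len(flags)

# (group, actually_creditworthy, approved) triples -- hypothetical data
records = [
    ("a", 1, 1), ("a", 1, 1), ("a", 0, 0), ("a", 0, 1),
    ("b", 1, 1), ("b", 1, 0), ("b", 0, 0), ("b", 0, 0),
]

def approvals(group, records, only_creditworthy=False):
    # Approval flags for one group, optionally restricted to the
    # applicants who are actually creditworthy (label y == 1).
    return [approved for g, y, approved in records
            if g == group and (y == 1 or not only_creditworthy)]

# Demographic parity: overall approval rates should match across groups.
dp_gap = abs(rate(approvals("a", records)) - rate(approvals("b", records)))

# Equal opportunity: approval rates among creditworthy applicants
# (true-positive rates) should match across groups.
eo_gap = abs(rate(approvals("a", records, only_creditworthy=True))
             - rate(approvals("b", records, only_creditworthy=True)))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.50 on this toy data
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.50 on this toy data
```

On this toy data both gaps are nonzero, and in general the two criteria cannot be driven to zero simultaneously; which gap matters, and how much of it is tolerable, is exactly the kind of domain-specific judgement the abstract's trade-off framing points to.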



Published In

AI Matters, Volume 5, Issue 2 (June 2019)
44 pages
EISSN: 2372-3483
DOI: 10.1145/3340470

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 August 2019
Published in SIGAI-AIMATTERS Volume 5, Issue 2


Qualifiers

  • Research-article


Cited By

  • (2024) A comprehensive survey and classification of evaluation criteria for trustworthy artificial intelligence. AI and Ethics. DOI: 10.1007/s43681-024-00590-8. Online publication date: 21-Oct-2024
  • (2023) Fairness-Enhancing Deep Learning for Ride-Hailing Demand Prediction. IEEE Open Journal of Intelligent Transportation Systems, 4, 551-569. DOI: 10.1109/OJITS.2023.3297517
  • (2022) Critical Tools for Machine Learning: Working with Intersectional Critical Concepts in Machine Learning Systems Design. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1528-1541. DOI: 10.1145/3531146.3533207. Online publication date: 21-Jun-2022
  • (2021) Equality of opportunity in travel behavior prediction with deep neural networks and discrete choice models. Transportation Research Part C: Emerging Technologies, 132, 103410. DOI: 10.1016/j.trc.2021.103410
  • (2021) Algorithmic Fairness in Mortgage Lending: from Absolute Conditions to Relational Trade-offs. Minds and Machines, 31(1), 165-191. DOI: 10.1007/s11023-020-09529-4
  • (2021) Algorithmic Fairness in Mortgage Lending: From Absolute Conditions to Relational Trade-offs. The 2020 Yearbook of the Digital Ethics Lab, 145-171. DOI: 10.1007/978-3-030-80083-3_12
  • Algorithmic Fairness in Mortgage Lending: From Absolute Conditions to Relational Trade-Offs. SSRN Electronic Journal. DOI: 10.2139/ssrn.3559407
