
On the Applicability of Machine Learning Fairness Notions

Published: 29 May 2021

Abstract

Machine Learning (ML) based predictive systems are increasingly used to support decisions with a critical impact on individuals' lives, such as college admission, job hiring, child custody, and criminal risk assessment. As a result, fairness has emerged as an important requirement to guarantee that ML predictive systems do not discriminate against specific individuals or entire sub-populations, in particular minorities. Given the inherent subjectivity of the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey of fairness notions that, unlike other surveys in the literature, addresses the question of "which notion of fairness is most suited to a given real-world scenario, and why?". Our attempt to answer this question consists of (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and (3) fitting these two elements together to recommend the most suitable fairness notion for each specific setup. The results are summarized in a decision diagram that practitioners and policy makers can use to navigate the relatively large catalogue of ML fairness notions.
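To make the idea of a "fairness notion" concrete, the sketch below (not from the paper; the toy data and helper names are invented for illustration) computes two widely used group-fairness measures on hand-made binary predictions: statistical parity difference, which compares positive-prediction rates across groups, and equal opportunity difference, which compares true-positive rates.

```python
def selection_rate(y_pred, group, g):
    """P(y_pred = 1 | group = g): fraction of group g receiving a positive prediction."""
    preds = [p for p, s in zip(y_pred, group) if s == g]
    return sum(preds) / len(preds)

def statistical_parity_diff(y_pred, group):
    """Statistical parity: positive-prediction rates should match across groups."""
    return selection_rate(y_pred, group, 0) - selection_rate(y_pred, group, 1)

def true_positive_rate(y_true, y_pred, group, g):
    """P(y_pred = 1 | y_true = 1, group = g): true-positive rate within group g."""
    preds = [p for t, p, s in zip(y_true, y_pred, group) if s == g and t == 1]
    return sum(preds) / len(preds)

def equal_opportunity_diff(y_true, y_pred, group):
    """Equal opportunity: true-positive rates should match across groups."""
    return (true_positive_rate(y_true, y_pred, group, 0)
            - true_positive_rate(y_true, y_pred, group, 1))

# Invented toy data: 8 individuals, a binary sensitive attribute (group),
# true outcomes, and a classifier's predictions.
group  = [0, 0, 0, 0, 1, 1, 1, 1]
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]

print(statistical_parity_diff(y_pred, group))        # 0.75 - 0.25 = 0.5
print(equal_opportunity_diff(y_true, y_pred, group)) # 1.0 - 0.5 = 0.5
```

Statistical parity ignores the ground truth while equal opportunity conditions on it, so the two notions can disagree on the same classifier; the survey's central point is that choosing among such notions depends on the characteristics of the real-world scenario.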




Published In

ACM SIGKDD Explorations Newsletter  Volume 23, Issue 1
June 2021
99 pages
ISSN:1931-0145
EISSN:1931-0153
DOI:10.1145/3468507
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Qualifiers

  • Research-article


