Research Article
DOI: 10.1145/3514094.3534137

An Ontology for Fairness Metrics

Published: 27 July 2022

Abstract

Recent research has revealed that many machine-learning models, and the datasets they are trained on, suffer from various forms of bias, and a large number of fairness metrics have been created to measure it. However, determining which metrics to use, and interpreting their results, is difficult for non-experts because of a lack of clear guidance and because of ambiguous or inconsistent naming across research papers. To address this knowledge gap, we present the Fairness Metrics Ontology (FMO), a comprehensive and extensible knowledge resource that defines each fairness metric, describes its use cases, and details the relationships between metrics. We also include related concepts for fairness and machine-learning models, enabling specific fairness information to be represented within a Resource Description Framework (RDF) knowledge graph. We evaluate the ontology by examining how reasoning-based queries to it guided the fairness-metric evaluation of a synthetic-data model.
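As an illustrative sketch of how such reasoning-based queries might be issued in practice, the Python fragment below uses the rdflib library to load a local copy of the ontology and list the fairness metrics of a given class. The file name fmo.ttl, the placeholder namespace IRI, and the class name fmo:GroupFairnessMetric are assumptions made for illustration, not the published FMO vocabulary.

    # Hypothetical sketch: query a local copy of the Fairness Metrics
    # Ontology (FMO) for group fairness metrics and print their labels.
    # The file name, namespace IRI, and class name below are illustrative
    # assumptions; consult the published ontology for its actual terms.
    from rdflib import Graph

    g = Graph()
    g.parse("fmo.ttl", format="turtle")  # assumed local copy of the ontology

    query = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX fmo:  <http://example.org/fmo#>

        SELECT ?metric ?label WHERE {
            ?metric a fmo:GroupFairnessMetric ;
                    rdfs:label ?label .
        }
    """

    for row in g.query(query):
        print(row.metric, row.label)

A query along these lines could back the metric-selection step the abstract describes, narrowing the full set of metrics to a subclass relevant to the model under evaluation.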


Published In

AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
July 2022
939 pages
ISBN:9781450392471
DOI:10.1145/3514094
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 27 July 2022


Author Tags

  1. bias
  2. fairness metric
  3. machine learning evaluation
  4. rdf knowledge graph

Qualifiers

  • Research-article

Conference

AIES '22: AAAI/ACM Conference on AI, Ethics, and Society
August 1 - 3, 2022
Oxford, United Kingdom

Acceptance Rates

Overall acceptance rate: 61 of 162 submissions (38%)

Article Metrics

  • Downloads (last 12 months): 175
  • Downloads (last 6 weeks): 32

Reflects downloads up to 23 Jan 2025
Cited By

  • (2024) Fairness in Machine Learning: A Survey. ACM Computing Surveys 56(7), 1-38. DOI: 10.1145/3616865. Online publication date: 9-Apr-2024.
  • (2024) FairDeDup: Detecting and Mitigating Vision-Language Fairness Disparities in Semantic Dataset Deduplication. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13905-13916. DOI: 10.1109/CVPR52733.2024.01319. Online publication date: 16-Jun-2024.
  • (2024) Analysing and organising human communications for AI fairness assessment. AI & SOCIETY. DOI: 10.1007/s00146-024-01974-4. Online publication date: 10-Jun-2024.
  • (2023) Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 70-83. DOI: 10.1145/3600211.3604685. Online publication date: 8-Aug-2023.
  • (2023) Unmasking Nationality Bias: A Study of Human Perception of Nationalities in AI-Generated Articles. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 554-565. DOI: 10.1145/3600211.3604667. Online publication date: 8-Aug-2023.
  • (2023) From Plane Crashes to Algorithmic Harm: Applicability of Safety Engineering Frameworks for Responsible ML. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-18. DOI: 10.1145/3544548.3581407. Online publication date: 19-Apr-2023.
  • (2023) An Ontology for Reasoning About Fairness in Regression and Machine Learning. Knowledge Graphs and Semantic Web, 243-261. DOI: 10.1007/978-3-031-47745-4_18. Online publication date: 13-Nov-2023.
  • (2023) Leveraging Group Contrastive Explanations for Handling Fairness. Explainable Artificial Intelligence, 332-345. DOI: 10.1007/978-3-031-44070-0_17. Online publication date: 21-Oct-2023.
