ABSTRACT
The research addresses bias and inequity in binary classification problems in machine learning. Although ethical frameworks for artificial intelligence exist, they offer little detailed guidance on the practices and techniques needed to address these issues. The main objective is to identify and analyze the theoretical and practical components involved in detecting and mitigating bias and inequality in machine learning. The proposed approach combines best practices, ethics, and technology to promote the responsible use of artificial intelligence in Colombia. The methodology covers the definition of the performance and fairness metrics of interest, interventions at the pre-processing, in-processing, and post-processing stages, and the generation of recommendations and model explainability.
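To make the metric-definition step concrete, the group disparities of a binary classifier can be quantified before choosing an intervention stage. The sketch below is illustrative only: the two metrics shown (demographic parity difference and equal opportunity difference) and the toy data are assumptions, not details taken from the paper.

```python
# Minimal sketch: measuring group fairness of a binary classifier.
# Metric choices and the example data are illustrative assumptions.

def rate(preds, cond):
    """Fraction of positive predictions among the positions where cond holds."""
    sel = [p for p, c in zip(preds, cond) if c]
    return sum(sel) / len(sel) if sel else 0.0

def demographic_parity_diff(y_pred, group):
    """P(y_hat = 1 | group A) - P(y_hat = 1 | group B)."""
    return (rate(y_pred, [g == "A" for g in group])
            - rate(y_pred, [g == "B" for g in group]))

def equal_opportunity_diff(y_true, y_pred, group):
    """True-positive-rate difference between groups A and B."""
    a = [g == "A" and t == 1 for g, t in zip(group, y_true)]
    b = [g == "B" and t == 1 for g, t in zip(group, y_true)]
    return rate(y_pred, a) - rate(y_pred, b)

# Toy labels and predictions for two demographic groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_diff(y_pred, group))           # 0.0: equal positive rates
print(equal_opportunity_diff(y_true, y_pred, group))    # negative: lower TPR for group A
```

A value near zero on both metrics would suggest no intervention is needed; otherwise, the sign and magnitude indicate which group is disadvantaged and how strongly, informing the choice among pre-, in-, and post-processing mitigations.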
Framework for Bias Detection in Machine Learning Models: A Fairness Approach