Abstract
The prevalent methodology for constructing fair machine learning (ML) systems is to enforce a strict equality metric across demographic groups defined by protected attributes such as race and gender. While philosophical definitions of fairness vary widely, mitigating bias in ML classifiers often relies on demographic-parity-based constraints across sub-populations. However, blindly enforcing such constraints can force undesirable trade-offs between group-level accuracies when groups differ in their underlying sampled population statistics, a situation that is surprisingly common in real-world applications such as credit risk and income classification. Similarly, relaxing hard constraints may unintentionally degrade classification performance without benefiting any demographic group. In these increasingly likely scenarios, we make the case for transparent human intervention in trading off the accuracies of demographic groups, and we propose that transparency in such trade-offs should be a key tenet of ML design and implementation. Our evaluation demonstrates that a transparent human-in-the-loop trade-off technique based on the Pareto principle increases overall and group-level accuracy by 9.5% and 9.6%, respectively, on two commonly studied UCI datasets for credit risk and income classification.
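The Pareto-based trade-off the abstract describes can be illustrated with a minimal sketch: enumerate candidate classifiers, score each one per demographic group, and surface only the non-dominated (Pareto-optimal) accuracy pairs so a human reviewer makes the trade-off explicitly. The function name, candidate list, and accuracy figures below are invented for illustration and are not the paper's implementation.

```python
def pareto_front(points):
    """Return indices of points not dominated by any other point.

    A point p dominates q if p is >= q in every coordinate and
    strictly greater in at least one. Coordinates here are the
    per-group accuracies of a candidate classifier.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(a >= b for a, b in zip(q, p))
            and any(a > b for a, b in zip(q, p))
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front


# Hypothetical (accuracy on group A, accuracy on group B) per candidate model.
candidates = [(0.91, 0.70), (0.88, 0.80), (0.85, 0.83), (0.84, 0.79)]
print(pareto_front(candidates))  # → [0, 1, 2]: the non-dominated trade-offs
```

The last candidate is dominated (another model is at least as accurate on both groups), so it is filtered out; a human then picks transparently among the remaining points rather than having a constraint pick implicitly.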
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Balashankar, A., Lees, A. (2022). The Need for Transparent Demographic Group Trade-Offs in Credit Risk and Income Classification. In: Smits, M. (ed.) Information for a Better World: Shaping the Global Future. iConference 2022. Lecture Notes in Computer Science, vol 13192. Springer, Cham. https://doi.org/10.1007/978-3-030-96957-8_30
Print ISBN: 978-3-030-96956-1
Online ISBN: 978-3-030-96957-8