
The Need for Transparent Demographic Group Trade-Offs in Credit Risk and Income Classification

  • Conference paper
  • First Online:
Information for a Better World: Shaping the Global Future (iConference 2022)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 13192)


Abstract

The prevalent methodology for constructing fair machine learning (ML) systems is to enforce a strict equality metric across demographic groups defined by protected attributes such as race and gender. While philosophical definitions of fairness vary, mitigating bias in ML classifiers often relies on demographic-parity-based constraints across sub-populations. However, blindly enforcing such constraints can produce undesirable trade-offs in group-level accuracy when groups differ in their underlying sampled population statistics, a situation that is surprisingly common in real-world applications such as credit risk and income classification. Conversely, relaxing hard constraints may unintentionally degrade classification performance without benefiting any demographic group. In these increasingly likely scenarios, we make the case for transparent human intervention in trading off the accuracies of demographic groups, and we propose that transparency in such trade-offs should be a key tenet of ML design and implementation. Our evaluation demonstrates that a transparent, human-in-the-loop trade-off technique based on the Pareto principle increases overall and group-level accuracy by 9.5% and 9.6%, respectively, on two commonly studied UCI datasets for credit risk and income classification.
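The core idea in the abstract, surfacing only the Pareto-optimal classifiers so a human can inspect the surviving group-accuracy trade-offs explicitly, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: the candidate model names and per-group accuracy figures are invented for the example.

```python
# Minimal sketch of a transparent, Pareto-based trade-off step:
# given candidate classifiers with per-group accuracies, keep only
# the non-dominated ones (no other candidate is at least as good for
# every group and strictly better for some), then present that small
# set to a human decision-maker instead of silently picking one.

def pareto_front(candidates):
    """Return names of candidates not dominated on every group accuracy."""
    front = []
    for i, (name_i, accs_i) in enumerate(candidates):
        dominated = any(
            all(a_j >= a_i for a_i, a_j in zip(accs_i, accs_j))
            and any(a_j > a_i for a_i, a_j in zip(accs_i, accs_j))
            for j, (_, accs_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append(name_i)
    return front

# Hypothetical candidates: (group_A_accuracy, group_B_accuracy)
models = [
    ("strict-parity", (0.71, 0.71)),
    ("unconstrained", (0.88, 0.64)),
    ("relaxed",       (0.82, 0.70)),
    ("dominated",     (0.70, 0.63)),  # worse than strict-parity on both groups
]

print(pareto_front(models))  # → ['strict-parity', 'unconstrained', 'relaxed']
```

The "dominated" candidate is filtered out because strict parity beats it for both groups; the remaining three embody genuinely different trade-offs, which is exactly the set a transparent human-in-the-loop process would deliberate over.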



Author information

Corresponding author

Correspondence to Ananth Balashankar.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Balashankar, A., Lees, A. (2022). The Need for Transparent Demographic Group Trade-Offs in Credit Risk and Income Classification. In: Smits, M. (eds) Information for a Better World: Shaping the Global Future. iConference 2022. Lecture Notes in Computer Science, vol 13192. Springer, Cham. https://doi.org/10.1007/978-3-030-96957-8_30


  • DOI: https://doi.org/10.1007/978-3-030-96957-8_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-96956-1

  • Online ISBN: 978-3-030-96957-8

  • eBook Packages: Computer Science (R0)
