
Responsible AI in automated credit scoring systems

  • Original Research
  • Published in: AI and Ethics

Abstract

In the last few years, Artificial Intelligence (AI) has gained notable momentum and may deliver on its expectations across many application sectors. For this to occur, expert systems and rule-based models need to overcome their limitations in fairness and interpretability; the paradigms addressing these problems fall within the so-called explainable AI (XAI) field. This paper presents work on the German credit dataset that tackles the challenges of fairness and bias so that the resulting machine learning models can be trusted to behave responsibly, which is what we mean by responsible AI in practice. Since the task is to classify a user's credit score as good or bad with a fair ML modelling approach, the key metric of interest is the F1-score, chosen to reduce the share of misclassifications. We observe that a hyperparameter-tuned XGBoost model (GC2) gives the best performance in terms of F1-score, accuracy and fairness, for both gender and age as protected variables, when combined with the Disparate Impact Remover, a pre-processing bias mitigation technique. This model (with age as the protected variable) is deployed on Heroku through a Flask API. A Disparate Impact Analysis (DIA) using H2O.AI identifies the classification thresholds at which the fairness metrics reach legally acceptable/permissible levels for both age and gender. Overall, fairness, bias mitigation, responsibility and explainability are established for the dataset considered.
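
As a concrete illustration of this pipeline, the following is a minimal sketch, assuming AIF360's GermanDataset loader (which binarizes age at 25 by default) and placeholder hyperparameters rather than the paper's tuned GC2 configuration: the features are repaired with the Disparate Impact Remover, the protected attribute is dropped, and an XGBoost classifier is scored by F1.

```python
# Minimal sketch of the pre-processing bias mitigation described above;
# hyperparameter values are placeholders, not the tuned GC2 settings.
import numpy as np
from aif360.datasets import GermanDataset
from aif360.algorithms.preprocessing import DisparateImpactRemover
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

# AIF360's defaults binarize age (privileged group: age > 25).
dataset = GermanDataset()
train, test = dataset.split([0.7], shuffle=True)

# repair_level=1.0 fully repairs the per-group feature distributions;
# values between 0 and 1 trade fairness against predictive signal.
dir_age = DisparateImpactRemover(repair_level=1.0, sensitive_attribute='age')
train_rep = dir_age.fit_transform(train)
test_rep = dir_age.fit_transform(test)

# Drop the protected attribute itself before modelling.
idx = train.feature_names.index('age')
X_train = np.delete(train_rep.features, idx, axis=1)
X_test = np.delete(test_rep.features, idx, axis=1)

# Map AIF360's favorable/unfavorable credit labels to 1/0.
y_train = (train_rep.labels.ravel() == dataset.favorable_label).astype(int)
y_test = (test_rep.labels.ravel() == dataset.favorable_label).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print('F1-score:', f1_score(y_test, model.predict(X_test)))
```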

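The threshold search that the paper performs with H2O.AI's Disparate Impact Analysis can be approximated in plain NumPy. The sketch below continues the names from the previous one (model, X_test, test_rep and idx are assumptions carried over) and flags thresholds where the disparate-impact ratio leaves the commonly cited four-fifths band, i.e. the "legally permissible" operating points:

```python
# Plain-NumPy stand-in for the DIA threshold sweep (not H2O.AI's own tool);
# `model`, `X_test` and `test_rep` carry over from the sketch above.
import numpy as np

def disparate_impact(y_pred, protected, privileged=1.0):
    """P(favorable | unprivileged) / P(favorable | privileged)."""
    rate_priv = y_pred[protected == privileged].mean()
    rate_unpriv = y_pred[protected != privileged].mean()
    return rate_unpriv / rate_priv

# Binary age group (1.0 = privileged 'Old' under AIF360's default mapping).
age = test_rep.protected_attributes[
    :, test_rep.protected_attribute_names.index('age')]
scores = model.predict_proba(X_test)[:, 1]

for t in np.arange(0.10, 0.95, 0.05):
    di = disparate_impact((scores >= t).astype(int), age)
    band = 'within 0.8-1.25' if 0.8 <= di <= 1.25 else 'flagged'
    print(f'threshold={t:.2f}  DI={di:.3f}  {band}')
```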

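For the deployment step, a minimal Flask app along the following lines would expose the trained model as a scoring API; the route name, payload shape and model.pkl filename are illustrative assumptions, not the authors' deployment code. On Heroku such an app is typically run by gunicorn via a Procfile.

```python
# Minimal Flask scoring service; endpoint details are assumptions.
import pickle

import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
with open('model.pkl', 'rb') as f:  # hypothetical pickled GC2 model
    model = pickle.load(f)

@app.route('/predict', methods=['POST'])
def predict():
    # Expects JSON like {"features": [[...one numeric row per applicant...]]}
    features = np.array(request.get_json()['features'])
    preds = model.predict(features).tolist()
    return jsonify({'credit_class': preds})  # 1 = good credit, 0 = bad

if __name__ == '__main__':
    app.run()  # on Heroku this is served by gunicorn via a Procfile
```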



Author information


Corresponding author

Correspondence to Srikanth Itapu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Jammalamadaka, K.R., Itapu, S. Responsible AI in automated credit scoring systems. AI Ethics 3, 485–495 (2023). https://doi.org/10.1007/s43681-022-00175-3

