Abstract
In the last few years, Artificial Intelligence (AI) has gained notable momentum and may deliver on its promise across many application sectors. For this to occur, expert systems and rule-based models need to overcome limitations in fairness and interpretability. Paradigms addressing this problem fall within the so-called explainable AI (XAI) field. This report presents work on the German credit dataset aimed at overcoming the challenges of fairness and bias so that the resulting machine learning models meet responsible expectations; this is what we define as responsible AI in practice. Since the task is to classify a user's credit risk as good or bad using a fair ML modelling approach, the key metric of interest is the F1-score, which keeps the share of misclassifications low. We observe that a hyperparameter-tuned XGBoost model (GC2) gives optimal performance in terms of F1-score, accuracy and fairness, with either gender or age as the protected variable, when combined with the Disparate Impact Remover, a pre-processing bias mitigation technique. This model is deployed on Heroku through a Flask API (for age). Disparate Impact Analysis (DIA) using H2O.ai helped identify the optimal threshold levels at which the fairness metrics reach legally acceptable/permissible levels for both age and gender. Overall, fairness, bias mitigation, responsibility and explainability have been established for the dataset considered.
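As a rough illustration of the pipeline the abstract describes, the sketch below applies AIF360's Disparate Impact Remover to the German credit dataset, trains an XGBoost classifier on the repaired features, and reports F1-score, accuracy and disparate impact. The split ratio, repair level and hyperparameter values are illustrative assumptions, not the paper's tuned GC2 configuration.

```python
import numpy as np
from aif360.datasets import GermanDataset
from aif360.algorithms.preprocessing import DisparateImpactRemover
from aif360.metrics import BinaryLabelDatasetMetric
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier

# Load the German credit data (AIF360 expects the raw german.data file in
# its data directory). Protected attributes default to 'sex' and 'age',
# with age > 25 treated as the privileged group.
dataset = GermanDataset()
train, test = dataset.split([0.7], shuffle=True, seed=42)

# Pre-processing mitigation: repair the feature distributions so they carry
# less information about 'age' (requires the BlackBoxAuditing package)
dir_ = DisparateImpactRemover(repair_level=1.0, sensitive_attribute='age')
train_rep = dir_.fit_transform(train)
test_rep = dir_.fit_transform(test)

# Train on the repaired features, excluding the protected column itself;
# map the dataset's favorable/unfavorable labels (1.0/2.0) to 1/0
idx = train.feature_names.index('age')
X_tr = np.delete(train_rep.features, idx, axis=1)
X_te = np.delete(test_rep.features, idx, axis=1)
y_tr = (train_rep.labels.ravel() == train_rep.favorable_label).astype(int)
y_te = (test_rep.labels.ravel() == test_rep.favorable_label).astype(int)

# Hyperparameter values are placeholders, not the paper's tuned GC2 model
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print('F1:', f1_score(y_te, pred), 'accuracy:', accuracy_score(y_te, pred))

# Disparate impact of the predictions: Pr(favorable | unprivileged) /
# Pr(favorable | privileged); values near 1.0 (at least 0.8 under the
# four-fifths rule) are conventionally treated as legally acceptable
test_pred = test.copy(deepcopy=True)
test_pred.labels = np.where(pred == 1, test.favorable_label,
                            test.unfavorable_label).reshape(-1, 1)
metric = BinaryLabelDatasetMetric(test_pred,
                                  unprivileged_groups=[{'age': 0}],
                                  privileged_groups=[{'age': 1}])
print('Disparate impact:', metric.disparate_impact())
```

Varying `repair_level` between 0 and 1 trades predictive performance against the disparate-impact score, which is the kind of threshold exploration the DIA step in the paper performs.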
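The deployment step can be sketched in a similar spirit. The minimal Flask service below exposes a trained classifier behind a `/predict` endpoint of the kind the abstract mentions; the route name, JSON payload schema and `model.pkl` artifact are hypothetical stand-ins for the authors' actual Heroku app.

```python
import os
import pickle

import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the serialized classifier trained on the repaired data
# ('model.pkl' is a hypothetical artifact name)
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)

@app.route('/predict', methods=['POST'])
def predict():
    # Expect a JSON body such as {"features": [0.4, 1.0, ...]} holding the
    # numeric feature vector in the order used at training time
    features = np.array(request.json['features'], dtype=float).reshape(1, -1)
    pred = int(model.predict(features)[0])
    return jsonify({'credit_risk': 'good' if pred == 1 else 'bad'})

if __name__ == '__main__':
    # Heroku injects the port via $PORT; fall back to 5000 for local runs
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 5000)))
```

On Heroku such an app would typically be started from a Procfile entry such as `web: gunicorn app:app`.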
Cite this article
Jammalamadaka, K.R., Itapu, S. Responsible AI in automated credit scoring systems. AI Ethics 3, 485–495 (2023). https://doi.org/10.1007/s43681-022-00175-3