Abstract
Fairness is a crucial concept in artificial intelligence ethics and policy. It is an integral component of existing ethical principle frameworks, especially for algorithm-enabled decision systems. Yet translating fairness principles into context-specific practices can be undermined by multiple unintended organisational risks. This paper argues that there is a gap between the potential and the actually realised value of AI. This research therefore asks how organisations can mitigate AI risks that relate to unfair decision outcomes. We take a holistic view, analysing the challenges throughout a typical AI product life cycle while focusing on the critical question of how broadly defined fairness principles may be translated into day-to-day practical solutions at the organisational level. We report on an exploratory case study of a social-impact microfinance organisation that uses AI-enabled credit scoring to support the screening of applications, particularly from financially marginalised entrepreneurs. The paper highlights the importance of considering the strategic role of the organisation when developing and evaluating fair algorithm-enabled decision systems. The proposed framework and the results of this study can help inspire the right questions for the context an organisation is situated in when implementing fair AI.
Notes
- 3. The oversight body is meant to prevent the opaqueness of AI systems and should provide regulation to prevent the common traps mentioned in [41]. It should be positioned independently within the organisation, free of interests that conflict with its role.
- 5. There is no clear consensus on the priorities and subdivisions in the ethics of AI, but several overarching frameworks are mentioned in Sect. 2.4.
References
Ethics & algorithms toolkit. https://ethicstoolkit.ai/. Accessed 15 Jan 2022
Regulation (EU) 2016/679 of the European parliament and of the council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing directive 95/46/EC (general data protection regulation) (text with EEA relevance). http://data.europa.eu/eli/reg/2016/679/2016-05-04
Algorithmic accountability policy toolkit. Technical report, AI Now Institute at New York University (2018)
Consequence scanning: An agile event for responsible innovators (2019). https://doteveryone.org.uk/project/consequence-scanning/. Accessed 15 Jan 2022
Examining the black box: Tools for assessing algorithmic systems. Technical report, Ada Lovelace Institute (2020)
Report of the social and human sciences commission (SHS). Technical report 41 C/73, UNESCO (2021)
Alshammari, M., Simpson, A.: Towards a principled approach for engineering privacy by design. In: Schweighofer, E., Leitold, H., Mitrakas, A., Rannenberg, K. (eds.) APF 2017. LNCS, vol. 10518, pp. 161–177. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-67280-9_9
Barocas, S., Selbst, A.D.: Big data’s disparate impact. Calif. Law Rev., 671–732 (2016)
Canales, R., Greenberg, J.: A matter of (relational) style: loan officer consistency and exchange continuity in microfinance. Manage. Sci. 62(4), 1202–1224 (2016)
Chouldechova, A., Benavides-Prado, D., Fialko, O., Vaithianathan, R.: A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. In: Conference on Fairness, Accountability and Transparency, pp. 134–148. PMLR (2018)
Corbett-Davies, S., Goel, S.: The measure and mismeasure of fairness: a critical review of fair machine learning. arXiv preprint arXiv:1808.00023 (2018)
Edwards, L., Veale, M.: Slave to the algorithm: why a right to an explanation is probably not the remedy you are looking for. Duke L. Tech. Rev. 16, 18 (2017)
Finlay, S.: Credit Scoring, Response Modeling, and Insurance Rating: A Practical Guide to Forecasting Consumer Behavior. Palgrave Macmillan (2012)
Floridi, L.: Establishing the rules for building trustworthy AI. Nat. Mach. Intell. 1(6), 261–262 (2019)
Floridi, L.: Translating principles into practices of digital ethics: five risks of being unethical. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. PSS, vol. 144, pp. 81–90. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_6
Floridi, L., Cowls, J.: A unified framework of five principles for AI in society. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. PSS, vol. 144, pp. 5–17. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_2
Fu, R., Aseri, M., Singh, P., Srinivasan, K.: "Un"fair machine learning algorithms. Manage. Sci. 68, 4173–4195 (2022)
Fu, R., Huang, Y., Singh, P.V.: AI and algorithmic bias: source, detection, mitigation and implications. Working paper (July 26, 2020)
Fu, R., Huang, Y., Singh, P.V.: Crowds, lending, machine, and bias. Inf. Syst. Res. 32(1), 72–92 (2021)
Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T., Walther, A.: Predictably unequal? The effects of machine learning on credit markets. J. Financ. 77(1), 5–47 (2022)
Green, B., Viljoen, S.: Algorithmic realism: expanding the boundaries of algorithmic thought. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 19–31 (2020)
Greene, D., Hoffmann, A.L., Stark, L.: Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning. In: Proceedings of the 52nd Hawaii International Conference on System Sciences (2019)
Groenevelt, E.: Qredits: a data-driven, high-touch approach to European microfinance. A ten-year perspective (2019). https://cdn.qredits.nl/shared/files/documents/qredits-a-data-driven-high-touch-approach-to-european-microfinance.pdf
Gunnarsson, B.R., vanden Broucke, S., Baesens, B., Óskarsdóttir, M., Lemahieu, W.: Deep learning for credit scoring: do or don’t? Eur. J. Oper. Res. 295(1), 292–305 (2021). https://www.sciencedirect.com/science/article/pii/S037722172100196X
Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. Adv. Neural Inf. Process. Syst. 29, 3315–3323 (2016)
High-Level Expert Group on Artificial Intelligence: Ethics guidelines for trustworthy AI (2019)
High-Level Expert Group on Artificial Intelligence: Assessment list for trustworthy AI (ALTAI) (2020)
Ienca, M.: Democratizing cognitive technology: a proactive approach. Ethics Inf. Technol. 21(4), 267–280 (2019)
Johnson, K., Pasquale, F., Chapman, J.: Artificial intelligence, machine learning, and bias in finance: toward responsible innovation. Fordham L. Rev. 88, 499 (2019)
Katell, M., et al.: Toward situated interventions for algorithmic equity: lessons from the field. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 45–55 (2020)
Lee, M.S.A., Floridi, L., Denev, A.: Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. PSS, vol. 144, pp. 353–371. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_20
Lo Piano, S.: Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward. Humanit. Soc. Sci. Commun. 7(1), 1–7 (2020)
Madaio, M.A., Stark, L., Wortman Vaughan, J., Wallach, H.: Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, New York (2020)
Malgieri, G.: The concept of fairness in the GDPR: a linguistic and contextual interpretation. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* 2020, pp. 154–166. Association for Computing Machinery, New York (2020)
Miller, C., Coldicott, R.: People, power and technology: the tech workers’ view (2019). https://doteveryone.org.uk/report/workersview
MIT Media Lab: AI blindspot: a discovery process for preventing, detecting, and mitigating bias in AI systems (2019). https://aiblindspot.media.mit.edu/. Accessed 13 Jan 2022
Morley, J., Floridi, L., Kinsey, L., Elhalal, A.: From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. PSS, vol. 144, pp. 153–183. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_10
Moustakas, C.: Phenomenological Research Methods. Sage Publications (1994)
Namvar, M.: Using business intelligence to support the process of organizational sensemaking. Ph.D. thesis, Deakin University (2016)
Namvar, M., Intezari, A.: Wise data-driven decision-making. In: Dennehy, D., Griva, A., Pouloudi, N., Dwivedi, Y.K., Pappas, I., Mäntymäki, M. (eds.) I3E 2021. LNCS, vol. 12896, pp. 109–119. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85447-8_10
O'Neil, C.: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown (2016)
Peters, D., Calvo, R.: Beyond principles: a process for responsible tech (2019). https://medium.com/ethics-of-digital-experience/beyond-principles-a-process-for-responsible-tech-aefc921f7317
Poole, D., Mackworth, A., Goebel, R.: Computational Intelligence. Oxford University Press, Oxford (1998)
PricewaterhouseCoopers: PwC’s responsible AI toolkit. https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence/what-is-responsible-ai.html. Accessed 15 Jan 2022
Selbst, A.D., Boyd, D., Friedler, S.A., Venkatasubramanian, S., Vertesi, J.: Fairness and abstraction in sociotechnical systems. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 59–68 (2019)
Skeem, J.L., Lowenkamp, C.T.: Risk, race, and recidivism: predictive bias and disparate impact. Criminology 54(4), 680–712 (2016)
Taddeo, M., Floridi, L.: How AI can be a force for good – an ethical framework to harness the potential of AI while keeping humans in control. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. PSS, vol. 144, pp. 91–96. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_7
Tsamados, A., et al.: The ethics of algorithms: key problems and solutions. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series, vol. 144. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_8
Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR (2017)
Xivuri, K., Twinomurinzi, H.: A systematic review of fairness in artificial intelligence algorithms. In: Dennehy, D., Griva, A., Pouloudi, N., Dwivedi, Y.K., Pappas, I., Mäntymäki, M. (eds.) I3E 2021. LNCS, vol. 12896, pp. 271–284. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85447-8_24
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Koefer, F., Lemken, I., Pauls, J. (2023). Realising Fair Outcomes from Algorithm-Enabled Decision Systems: An Exploratory Case Study. In: van Hillegersberg, J., Osterrieder, J., Rabhi, F., Abhishta, A., Marisetty, V., Huang, X. (eds) Enterprise Applications, Markets and Services in the Finance Industry. FinanceCom 2022. Lecture Notes in Business Information Processing, vol 467. Springer, Cham. https://doi.org/10.1007/978-3-031-31671-5_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-31670-8
Online ISBN: 978-3-031-31671-5
eBook Packages: Computer Science (R0)