
Realising Fair Outcomes from Algorithm-Enabled Decision Systems: An Exploratory Case Study

  • Conference paper
Enterprise Applications, Markets and Services in the Finance Industry (FinanceCom 2022)

Abstract

Fairness is a crucial concept in artificial intelligence ethics and policy. It is an integral component of existing ethical principle frameworks, especially for algorithm-enabled decision systems. Yet translating fairness principles into context-specific practices can be undermined by multiple unintended organisational risks. This paper argues that there is a gap between the potential and the actually realised value of AI. This research therefore asks how organisations can mitigate AI risks that relate to unfair decision outcomes. We take a holistic view by analysing the challenges throughout a typical AI product life cycle, focusing on the critical question of how broadly defined fairness principles can be translated into day-to-day practical solutions at the organisational level. We report on an exploratory case study of a social-impact microfinance organisation that uses AI-enabled credit scoring to support the screening of financially marginalised entrepreneurs. The paper highlights the importance of considering the strategic role of the organisation when developing and evaluating fair algorithm-enabled decision systems. The proposed framework and the results of this study can help organisations ask the right questions for the specific context they are situated in when implementing fair AI.
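
The chapter itself contains no code, but to make the abstract's idea of translating fairness principles into day-to-day practice more concrete, the following minimal sketch (in Python, with hypothetical variable names and toy data, not taken from the paper) shows the kind of group-fairness check a credit-scoring team might run: it compares approval rates (demographic parity) and true positive rates (equal opportunity in the sense of Hardt et al. [25]) across groups defined by a protected attribute.

    # Illustrative sketch only: group-fairness check for a hypothetical
    # credit-scoring model's accept/reject decisions. All names and the
    # toy data are assumptions for illustration, not the paper's method.
    import numpy as np

    def group_fairness_report(y_true, y_pred, group):
        """Compare approval rates and true positive rates across groups.

        y_true: 1 = applicant actually repaid, 0 = defaulted
        y_pred: 1 = model approves the loan, 0 = model rejects
        group:  protected-attribute label per applicant (e.g. "A" / "B")
        """
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        report = {}
        for g in np.unique(group):
            mask = group == g
            # Demographic parity: share of approvals within the group.
            approval_rate = y_pred[mask].mean()
            # Equal opportunity: approval rate among applicants who would repay.
            positives = mask & (y_true == 1)
            tpr = y_pred[positives].mean() if positives.any() else float("nan")
            report[g] = {"approval_rate": approval_rate, "tpr": tpr}
        return report

    if __name__ == "__main__":
        # Toy example: eight applicants from two groups with model decisions.
        y_true = [1, 0, 1, 1, 0, 1, 0, 1]
        y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
        group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
        for g, m in group_fairness_report(y_true, y_pred, group).items():
            print(f"group {g}: approval={m['approval_rate']:.2f}, TPR={m['tpr']:.2f}")

Large gaps between the per-group numbers would flag a potential fairness issue; which metric matters, and what gap is acceptable, depends on the organisational context the paper discusses.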


Notes

  1. Other examples, such as the High-Level Expert Group on AI set up by the European Commission, have a similar definition of the principle of fairness [26, 27].

  2. This framework was furthermore based on common approaches to fairness extracted from literature reviews [16, 37, 48] and toolkits [1, 3, 4, 5, 44] that focus on ethical AI.

  3. The oversight body is meant to counteract the opaqueness of AI systems and should provide governance to prevent the common traps mentioned in [41]. It should be positioned independently within the organisation, without interests that conflict with its role.

  4. Such as the seven-dimension framework for machine learning systems provided by [22] and further analysed by [32].

  5. There is no clear consensus on the priorities and subdivisions in the ethics of AI, but several overarching frameworks are mentioned in Sect. 2.4.

References

  1. Ethics & algorithms toolkit. https://ethicstoolkit.ai/. Accessed 15 Jan 2022

  2. Regulation (EU) 2016/679 of the European parliament and of the council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing directive 95/46/EC (general data protection regulation) (text with EEA relevance). http://data.europa.eu/eli/reg/2016/679/2016-05-04

  3. Algorithmic accountability policy toolkit. Technical report, AI Now Institute at New York University (2018)


  4. Consequence scanning: An agile event for responsible innovators (2019). https://doteveryone.org.uk/project/consequence-scanning/. Accessed 15 Jan 2022

  5. Examining the black box: Tools for assessing algorithmic systems. Technical report, Ada Lovelace Institute (2020)


  6. Report of the social and human sciences commission (SHS). Technical report 41 C/73, UNESCO (2021)


  7. Alshammari, M., Simpson, A.: Towards a principled approach for engineering privacy by design. In: Schweighofer, E., Leitold, H., Mitrakas, A., Rannenberg, K. (eds.) APF 2017. LNCS, vol. 10518, pp. 161–177. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-67280-9_9


  8. Barocas, S., Selbst, A.D.: Big data’s disparate impact. Calif. Law Rev., 671–732 (2016)


  9. Canales, R., Greenberg, J.: A matter of (relational) style: loan officer consistency and exchange continuity in microfinance. Manage. Sci. 62(4), 1202–1224 (2016)


  10. Chouldechova, A., Benavides-Prado, D., Fialko, O., Vaithianathan, R.: A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. In: Conference on Fairness, Accountability and Transparency, pp. 134–148. PMLR (2018)


  11. Corbett-Davies, S., Goel, S.: The measure and mismeasure of fairness: a critical review of fair machine learning. arXiv preprint arXiv:1808.00023 (2018)

  12. Edwards, L., Veale, M.: Slave to the algorithm: why a right to an explanation is probably not the remedy you are looking for. Duke L. Tech. Rev. 16, 18 (2017)


  13. Finlay, S.: Credit Scoring, Response Modeling, and Insurance Rating: A Practical Guide to Forecasting Consumer Behavior. Palgrave Macmillan (2012)


  14. Floridi, L.: Establishing the rules for building trustworthy AI. Nat. Mach. Intell. 1(6), 261–262 (2019)


  15. Floridi, L.: Translating principles into practices of digital ethics: five risks of being unethical. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. PSS, vol. 144, pp. 81–90. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_6


  16. Floridi, L., Cowls, J.: A unified framework of five principles for AI in society. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. PSS, vol. 144, pp. 5–17. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_2


  17. Fu, R., Aseri, M., Singh, P., Srinivasan, K.: “Un”fair machine learning algorithms. Manage. Sci. 68, 4173–4195 (2022)


  18. Fu, R., Huang, Y., Singh, P.V.: AI and algorithmic bias: source, detection, mitigation and implications (2020)


  19. Fu, R., Huang, Y., Singh, P.V.: Crowds, lending, machine, and bias. Inf. Syst. Res. 32(1), 72–92 (2021)


  20. Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T., Walther, A.: Predictably unequal? The effects of machine learning on credit markets. J. Financ. 77(1), 5–47 (2022)


  21. Green, B., Viljoen, S.: Algorithmic realism: expanding the boundaries of algorithmic thought. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 19–31 (2020)


  22. Greene, D., Hoffmann, A.L., Stark, L.: Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning. In: Proceedings of the 52nd Hawaii International Conference on System Sciences (2019)


  23. Groenevelt, E.: Qredits: a data-driven, high-touch approach to European microfinance. A ten-year perspective (2019). https://cdn.qredits.nl/shared/files/documents/qredits-a-data-driven-high-touch-approach-to-european-microfinance.pdf

  24. Gunnarsson, B.R., vanden Broucke, S., Baesens, B., Óskarsdóttir, M., Lemahieu, W.: Deep learning for credit scoring: do or don’t? Eur. J. Oper. Res. 295(1), 292–305 (2021). https://www.sciencedirect.com/science/article/pii/S037722172100196X

  25. Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. Adv. Neural Inf. Process. Syst. 29 (2016)


  26. High-Level Expert Group on Artificial Intelligence: Ethics guidelines for trustworthy AI (2019)


  27. High-Level Expert Group on Artificial Intelligence: Assessment list for trustworthy AI (ALTAI) (2020)


  28. Ienca, M.: Democratizing cognitive technology: a proactive approach. Ethics Inf. Technol. 21(4), 267–280 (2019)


  29. Johnson, K., Pasquale, F., Chapman, J.: Artificial intelligence, machine learning, and bias in finance: toward responsible innovation. Fordham L. Rev. 88, 499 (2019)


  30. Katell, M., et al.: Toward situated interventions for algorithmic equity: lessons from the field. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 45–55 (2020)


  31. Lee, M.S.A., Floridi, L., Denev, A.: Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. PSS, vol. 144, pp. 353–371. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_20


  32. Lo Piano, S.: Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward. Hum. Soc. Sci. Commun. 7(1), 1–7 (2020)


  33. Madaio, M.A., Stark, L., Wortman Vaughan, J., Wallach, H.: Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM, New York (2020)


  34. Malgieri, G.: The concept of fairness in the GDPR: a linguistic and contextual interpretation. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* 2020, pp. 154–166. Association for Computing Machinery, New York (2020)


  35. Miller, C., Coldicott, R.: People, power and technology: the tech workers’ view (2019). https://doteveryone.org.uk/report/workersview

  36. MIT Media Lab: AI blindspot: a discovery process for preventing, detecting, and mitigating bias in AI systems (2019). https://aiblindspot.media.mit.edu/. Accessed 13 Jan 2022

  37. Morley, J., Floridi, L., Kinsey, L., Elhalal, A.: From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. PSS, vol. 144, pp. 153–183. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_10


  38. Moustakas, C.: Phenomenological Research Methods. Sage Publications (1994)


  39. Namvar, M.: Using business intelligence to support the process of organizational sensemaking. Ph.D. thesis, Deakin University (2016)


  40. Namvar, M., Intezari, A.: Wise data-driven decision-making. In: Dennehy, D., Griva, A., Pouloudi, N., Dwivedi, Y.K., Pappas, I., Mäntymäki, M. (eds.) I3E 2021. LNCS, vol. 12896, pp. 109–119. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85447-8_10


  41. O’Neil, C.: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown (2016)


  42. Peters, D., Calvo, R.: Beyond principles: a process for responsible tech (2019). https://medium.com/ethics-of-digital-experience/beyond-principles-a-process-for-responsible-tech-aefc921f7317

  43. Poole, D., Mackworth, A., Goebel, R.: Computational Intelligence. Oxford University Press, Oxford (1998)


  44. PricewaterhouseCoopers: PwC’s responsible AI toolkit. https://www.pwc.com/gx/en/issues/data-and-analytics/artificial-intelligence/what-is-responsible-ai.html. Accessed 15 Jan 2022

  45. Selbst, A.D., Boyd, D., Friedler, S.A., Venkatasubramanian, S., Vertesi, J.: Fairness and abstraction in sociotechnical systems. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 59–68 (2019)


  46. Skeem, J.L., Lowenkamp, C.T.: Risk, race, and recidivism: predictive bias and disparate impact. Criminology 54(4), 680–712 (2016)


  47. Taddeo, M., Floridi, L.: How AI can be a force for good – an ethical framework to harness the potential of AI while keeping humans in control. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. PSS, vol. 144, pp. 91–96. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_7


  48. Tsamados, A., et al.: The ethics of algorithms: key problems and solutions. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series, vol. 144. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81907-1_8

  49. Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR (2017)


  50. Xivuri, K., Twinomurinzi, H.: A systematic review of fairness in artificial intelligence algorithms. In: Dennehy, D., Griva, A., Pouloudi, N., Dwivedi, Y.K., Pappas, I., Mäntymäki, M. (eds.) I3E 2021. LNCS, vol. 12896, pp. 271–284. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85447-8_24



Author information

Correspondence to Franziska Koefer or Ivo Lemken.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Koefer, F., Lemken, I., Pauls, J. (2023). Realising Fair Outcomes from Algorithm-Enabled Decision Systems: An Exploratory Case Study. In: van Hillegersberg, J., Osterrieder, J., Rabhi, F., Abhishta, A., Marisetty, V., Huang, X. (eds) Enterprise Applications, Markets and Services in the Finance Industry. FinanceCom 2022. Lecture Notes in Business Information Processing, vol 467. Springer, Cham. https://doi.org/10.1007/978-3-031-31671-5_4


  • DOI: https://doi.org/10.1007/978-3-031-31671-5_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-31670-8

  • Online ISBN: 978-3-031-31671-5

  • eBook Packages: Computer Science, Computer Science (R0)
