
Many hands make many fingers to point: challenges in creating accountable AI

  • Original Article
  • Published in AI & SOCIETY

Abstract

Given the complexity of the teams involved in creating AI-based systems, how can we understand who should be held accountable when they fail? This paper reports findings about accountable AI from 26 interviews conducted with stakeholders in AI drawn from the fields of AI research, law, and policy. Participants described the challenges presented by the distributed nature of how AI systems are designed, developed, deployed, and regulated. This distribution of agency, alongside existing mechanisms of accountability, responsibility, and liability, creates barriers to effective accountable design. As agency is distributed across the socio-technical landscape of an AI system, users without deep knowledge of the operation of these systems become disempowered, unable to challenge or contest these systems when they impact their lives. In this context, accountability becomes a matter of building systems that can be challenged, interrogated, and, most importantly, adjusted in use to accommodate counter-intuitive results and unpredictable impacts. Thus, accountable system design can work to reconfigure socio-technical landscapes to protect the users of AI and to prevent unjust apportionment of risk.


Availability of data and material

Not applicable.

Code availability

Not applicable.


Acknowledgements

This research was funded by Cisco Systems under RFP16-02, Legal Implications for IoT, Machine Learning, and Artificial Intelligence. We thank our research participants, including Blake Anderson, Ruzena Bajcsy, Hal Daume III, Stephen Elkins, Enzo Fenoglio, Iria Giuffrida, Dean Harvey, James Hodson, Wei San Hui, Amir Husain, Jeff Kirk, Frederic Lederer, Ted Lehr, Terrell McSweeny, Matt Scherer, Peter Stone, Nicolas Vermeys, and Christopher Yoo as well as eight anonymous participants.

Funding

This research was funded by a grant from Cisco Systems, Inc. RFP-16-02 Legal Implications for IoT, ML, & AI.

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by KF, SG, SS, NV, BC and LL. The first draft of the manuscript was written by SS and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Stephen C. Slota.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This study was approved by the University of Texas at Austin IRB, #2018-10-0015.

Consent for publication

Not applicable.

Consent for participation

All participants completed the informed consent process prior to participation in the study. The consent process and research activities were reviewed and approved by the University of Texas at Austin IRB, #2018-10-0015.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Reprints and permissions

About this article


Cite this article

Slota, S.C., Fleischmann, K.R., Greenberg, S. et al. Many hands make many fingers to point: challenges in creating accountable AI. AI & Soc 38, 1287–1299 (2023). https://doi.org/10.1007/s00146-021-01302-0

