Abstract
Recent developments in digital technologies such as Artificial Intelligence and the Internet of Things enable increasingly autonomous systems that interact directly with their business or social environment. This raises new, but often implicit, questions about the responsibility for decisions made by these systems and for the effects of those decisions on their environment. Requirements embedded in design considerations, such as security constraints, privacy regulations, and traceability demands, however, pose explicit questions about responsibility in information processing. Responsibility may be expected from four classes of actors: the system itself, the system designer, the system user, and the party whose data is managed by the system, i.e., the object of the system. This paper presents the questions regarding the allocation of responsibility in the development and deployment of autonomous systems, based on requirements imposed by their application context. An essential ingredient is the discussion of how the explainability of decisions affects the allocation of responsibility. We provide the concepts and models needed for a first analysis and design of the allocation of responsibility.
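The four actor classes named in the abstract can be illustrated with a minimal sketch. This is not the authors' model: the `Actor` enumeration, the `DecisionContext` fields, and the allocation rule below are hypothetical illustrations of how context requirements (here, explainability and user involvement) might drive a first allocation of responsibility.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Actor(Enum):
    """The four classes of actors that may bear responsibility."""
    SYSTEM = auto()    # the autonomous system itself
    DESIGNER = auto()  # the party that designed the system
    USER = auto()      # the party operating the system
    OBJECT = auto()    # the party whose data the system manages

@dataclass
class DecisionContext:
    """Illustrative application-context requirements for a decision."""
    explainable: bool       # can the system explain its decision?
    user_in_the_loop: bool  # does a human user confirm the decision?

def allocate_responsibility(ctx: DecisionContext) -> Actor:
    """Toy allocation rule (an assumption, not the paper's method):
    a user who confirms decisions takes responsibility; an explainable
    system may be held responsible itself; otherwise responsibility
    falls back to the designer."""
    if ctx.user_in_the_loop:
        return Actor.USER
    if ctx.explainable:
        return Actor.SYSTEM
    return Actor.DESIGNER

# A fully autonomous, non-explainable system
print(allocate_responsibility(
    DecisionContext(explainable=False, user_in_the_loop=False)).name)
# An explainable, fully autonomous system
print(allocate_responsibility(
    DecisionContext(explainable=True, user_in_the_loop=False)).name)
```

The point of such a sketch is only that allocation is a function of explicit context requirements, so changing a requirement (e.g. making decisions explainable) can shift responsibility between actor classes.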
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Wilbik, A., Grefen, P. (2024). Responsibility and Explainability in Using Intelligent Systems. In: Phillipson, F., Eichler, G., Erfurth, C., Fahrnberger, G. (eds) Innovations for Community Services. I4CS 2024. Communications in Computer and Information Science, vol 2109. Springer, Cham. https://doi.org/10.1007/978-3-031-60433-1_1
Print ISBN: 978-3-031-60432-4
Online ISBN: 978-3-031-60433-1