Abstract
Algorithms have advanced in status from supporting human decision-making to making decisions themselves. The fundamental issue here is the relationship between Big Data and algorithms, that is, how algorithms give data direction and purpose. In this paper, I provide a conceptual framework for analyzing and improving ethical decision-making in human-AI interaction. On the one hand, I examine the challenges and limitations facing the field of Machine Ethics and Explainability in its aim to provide and justify ethical decisions. On the other hand, I propose connecting counterfactual explanations with the emotion of regret as requirements for improving ethical decision-making in novel situations and under uncertainty. To test whether this conceptual framework has empirical value, I analyze the COVID-19 pandemic in terms of “what might have been” and ask the following question: could some of the unintended consequences of this health crisis have been avoided if the available data had been used differently, both before the crisis happened and as it unfolded?
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Martín-Peña, R. (2022). Does the COVID-19 Pandemic have Implications for Machine Ethics? In: Stephanidis, C., Antona, M., Ntoa, S., Salvendy, G. (eds.) HCI International 2022 – Late Breaking Posters. HCII 2022. Communications in Computer and Information Science, vol. 1655. Springer, Cham. https://doi.org/10.1007/978-3-031-19682-9_82
DOI: https://doi.org/10.1007/978-3-031-19682-9_82
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-19681-2
Online ISBN: 978-3-031-19682-9