Embedded Ethics for Responsible Artificial Intelligence Systems (EE-RAIS) in disaster management: a conceptual model and its deployment

  • Original Research
  • Published in: AI and Ethics

Abstract

In this paper, we argue that Responsible Artificial Intelligence Systems (RAIS) require a shift toward embedded ethics to address the value-based challenges facing AI in disaster management, and we propose a model to achieve that shift. Disaster management requires Artificial Intelligence Systems (AIS) that are sensitive to ethical, legal, and multi-dimensional values while remaining responsive and accountable during complex and acute disruptions that simultaneously call for fair, value-laden, and immediate decisions. Without such a shift, AIS will be unable to respond properly to major value-based challenges of axiological and hierarchical types and will remain vulnerable to meta-disasters, such as intelligent digital disasters. This study focuses on responsible AI in the context of disaster management and proposes a model of Embedded Ethics for Responsible Artificial Intelligence Systems (EE-RAIS), which is empowered by four platforms of embedded ethics (educational, cross-functional, developmental, and algorithmic) as well as four imperative metrics (ethical intelligence, legal intelligence, social-emotional competency, and artificial wisdom). The final section of the paper explores how EE-RAIS can be deployed for disaster management and fair crisis informatics.
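
To make the model's structure concrete, here is a minimal, hypothetical sketch in Python of how an EE-RAIS-style system description could be encoded as a data structure. The platform and metric names follow the abstract, but the score values, the threshold, and the deployment gate are illustrative assumptions and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Names follow the EE-RAIS terminology in the abstract; scores, threshold,
# and the gating logic are hypothetical illustrations only.

PLATFORMS: List[str] = [
    "educational", "cross-functional", "developmental", "algorithmic",
]


@dataclass
class ImperativeMetrics:
    """The four imperative metrics, each scored here on a notional [0, 1] scale."""
    ethical_intelligence: float
    legal_intelligence: float
    social_emotional_competency: float
    artificial_wisdom: float

    def all_meet(self, threshold: float) -> bool:
        # Every metric is treated as imperative: all must clear the threshold.
        return all(score >= threshold for score in vars(self).values())


@dataclass
class EERAIS:
    """Toy container describing an EE-RAIS-style system."""
    platforms: List[str] = field(default_factory=lambda: list(PLATFORMS))
    metrics: ImperativeMetrics = field(
        default_factory=lambda: ImperativeMetrics(0.0, 0.0, 0.0, 0.0)
    )

    def ready_for_deployment(self, threshold: float = 0.7) -> bool:
        # Hypothetical deployment gate for disaster-management use:
        # a single weak metric blocks deployment.
        return self.metrics.all_meet(threshold)


if __name__ == "__main__":
    system = EERAIS(metrics=ImperativeMetrics(0.9, 0.8, 0.75, 0.7))
    print("Deployable for disaster response:", system.ready_for_deployment())
```

The all-or-nothing gate mirrors the abstract's framing of the four metrics as imperative: in this toy sketch, any single low score is enough to block deployment.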

Data availability

No external data were used in this study.

Acknowledgements

We are immensely grateful to the anonymous referees of this journal for their comments and insightful suggestions.

Funding

The authors would like to acknowledge funding support from the Texas A&M X-Grant Presidential Excellence Fund (Grant AI2102). Any opinions, findings, conclusions, or recommendations expressed in this research are those of the authors and do not necessarily reflect the views of the funding agency.

Author information

Contributions

The hypothesis and approach to the work were conceived collaboratively by the authors. AM and SA generated the idea for the research. AM conducted three focus groups on the research topic; SA conducted the research and formed the arguments for the Embedded Ethics for Responsible AIS approach. SA, AA, and YP developed Sects. 2, 3, and 4.1. SA, SG, FH, and KR developed Sect. 4.2. Sections 1 and 2 and the conclusion were authored by SA. The authors agree to be accountable for the relevant aspects of the work and to ensure that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Saleh Afroogh.

Ethics declarations

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Research involving human participants and/or animals

The authors declare that this research did not involve any human participants or animals.

Informed consent

I, Saleh Afroogh, as the corresponding author, confirm that all coauthors named in the manuscript agreed to its submission to this journal.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Afroogh, S., Mostafavi, A., Akbari, A. et al. Embedded Ethics for Responsible Artificial Intelligence Systems (EE-RAIS) in disaster management: a conceptual model and its deployment. AI Ethics 4, 1117–1141 (2024). https://doi.org/10.1007/s43681-023-00309-1
