Abstract
As artificial intelligence technologies continue to proliferate across all areas of contemporary life, researchers are seeking ways to make them safe for users. In the teaching-learning context, this problem is trickier because it must be clear which principles or ethical frameworks guide the processes that artificial intelligence supports. After all, people's education is at stake. This paper presents an approach to value alignment in educational contexts using artificial moral pedagogical agents (AMPA) that adopt the classic BDI model. Moreover, we propose a top-down approach and explain why the bottom-up and hybrid approaches may not be advisable on educational grounds.
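The combination the abstract describes, a BDI (belief-desire-intention) agent whose deliberation is constrained by explicit, top-down ethical rules, can be illustrated with a minimal sketch. All class, rule, and goal names below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of a BDI-style pedagogical agent with a top-down
# ethical filter. Names here are hypothetical, for illustration only.

class BDIPedagogicalAgent:
    def __init__(self, ethical_rules):
        self.beliefs = set()        # facts the agent holds about the learner
        self.desires = []           # candidate pedagogical goals
        self.intentions = []        # goals adopted for execution
        self.ethical_rules = ethical_rules  # top-down ethical constraints

    def perceive(self, fact):
        """Update beliefs from an observation of the learner."""
        self.beliefs.add(fact)

    def deliberate(self):
        """Adopt only the desires that satisfy every top-down rule."""
        self.intentions = [
            d for d in self.desires
            if all(rule(d, self.beliefs) for rule in self.ethical_rules)
        ]
        return self.intentions


# Hypothetical top-down rule: never disclose a student's grades to peers.
def no_grade_disclosure(desire, beliefs):
    return desire != "share_grades_with_class"


agent = BDIPedagogicalAgent([no_grade_disclosure])
agent.perceive("student_struggling_with_fractions")
agent.desires = ["offer_extra_exercises", "share_grades_with_class"]
print(agent.deliberate())  # ['offer_extra_exercises']
```

Encoding the ethical theory as fixed rules checked before intention adoption is what makes this top-down: the constraints are imposed by design rather than learned from interaction data, which is the property the paper argues is preferable in educational settings.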
Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Córdova, P.R., Vicari, R.M., Brusius, C., Coelho, H. (2021). A Proposal for Artificial Moral Pedagogical Agents. In: Rocha, Á., Adeli, H., Dzemyda, G., Moreira, F., Ramalho Correia, A.M. (eds) Trends and Applications in Information Systems and Technologies. WorldCIST 2021. Advances in Intelligent Systems and Computing, vol 1365. Springer, Cham. https://doi.org/10.1007/978-3-030-72657-7_38
Print ISBN: 978-3-030-72656-0
Online ISBN: 978-3-030-72657-7