
A Proposal for Artificial Moral Pedagogical Agents

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1365)

Abstract

While artificial intelligence technologies continue to proliferate in all areas of contemporary life, researchers are looking for ways to make them safe for users. In the teaching-learning context, this is a trickier problem because it must be clear which principles or ethical frameworks guide the processes supported by artificial intelligence; after all, people's education is at stake. This inquiry presents an approach to value alignment in educational contexts using artificial moral pedagogical agents (AMPA) that adopt the classic BDI model. Furthermore, we propose a top-down approach and explain why bottom-up or hybrid approaches may not be advisable on educational grounds.
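To make the architectural idea in the abstract more concrete, the sketch below shows one way a BDI-style pedagogical agent could apply top-down ethical rules before committing to intentions. This is a minimal, hypothetical Python illustration under our own assumptions, not the paper's actual AMPA design; all names here (Desire, Agent, ethically_permitted, the consent flag) are invented for illustration.

```python
# Minimal sketch: a BDI-style loop in which candidate intentions are
# filtered by explicit, top-down ethical rules before being committed.
# Hypothetical example only; it does not reproduce the paper's architecture.

from dataclasses import dataclass, field


@dataclass
class Desire:
    goal: str
    priority: int


@dataclass
class Agent:
    beliefs: dict = field(default_factory=dict)
    desires: list = field(default_factory=list)
    intentions: list = field(default_factory=list)

    def perceive(self, observation: dict) -> None:
        # Update beliefs from the learning environment (e.g. student state).
        self.beliefs.update(observation)

    def deliberate(self) -> list:
        # Rank desires and propose candidate intentions.
        return sorted(self.desires, key=lambda d: d.priority, reverse=True)

    def ethically_permitted(self, desire: Desire) -> bool:
        # Top-down check: the rules encode an ethical framework chosen in
        # advance by educators, rather than values learned from data.
        if desire.goal == "share_student_data" and not self.beliefs.get("consent"):
            return False
        if desire.goal == "pressure_student":
            return False
        return True

    def commit(self) -> None:
        # Only ethically permitted candidates become intentions.
        self.intentions = [d for d in self.deliberate() if self.ethically_permitted(d)]


if __name__ == "__main__":
    tutor = Agent(desires=[
        Desire("give_hint", priority=2),
        Desire("share_student_data", priority=3),
    ])
    tutor.perceive({"consent": False, "student_stuck": True})
    tutor.commit()
    print([d.goal for d in tutor.intentions])  # -> ['give_hint']
```

In a top-down design of this kind, the constraints are stated explicitly beforehand and can therefore be inspected and audited, whereas a bottom-up approach would attempt to learn such constraints from interaction data.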



Acknowledgements

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.

Author information


Corresponding authors

Correspondence to Paulo Roberto Córdova, Rosa Maria Vicari, Carlos Brusius or Helder Coelho.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Córdova, P.R., Vicari, R.M., Brusius, C., Coelho, H. (2021). A Proposal for Artificial Moral Pedagogical Agents. In: Rocha, Á., Adeli, H., Dzemyda, G., Moreira, F., Ramalho Correia, A.M. (eds) Trends and Applications in Information Systems and Technologies. WorldCIST 2021. Advances in Intelligent Systems and Computing, vol 1365. Springer, Cham. https://doi.org/10.1007/978-3-030-72657-7_38
