
Education, Ethical Dilemmas and AI: From Ethical Design to Artificial Morality

  • Conference paper
  • First Online:
Adaptive Instructional Systems. Design and Evaluation (HCII 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12792)

Abstract

Ethical dilemmas are complex scenarios involving a decision between conflicting choices related to ethical principles. Considering a case of an ethical dilemma in education presented in [17], it can be seen how such situations may require taking into account the student’s needs, preferences, and potentially conflicting goals, as well as their personal and social contexts. Because of this, planning for and foreseeing ethically challenging situations in advance, which is how ethical design is normally applied to technological artifacts, is not enough. As AI systems become more autonomous, the number of possible situations, choices, and effects of their actions grows exponentially. In this paper, we bring together the analysis of ethical dilemmas in education and the need to incorporate moral reasoning into AI systems’ decision procedures. We argue that ethical design, although necessary, is not sufficient for this task, and that artificial morality, or equivalent tools, is needed to integrate some sort of “ethical sensor” into autonomous systems taking a deeper role in educational settings, enabling them to, if not resolve, at least identify the new ethically-relevant scenarios they face.

This work has been supported by the project colMOOC “Integrating Conversational Agents and Learning Analytics in MOOCs”, co-funded by the European Commission (ref. 588438-EPP-1-2017-1-EL-EPPKA2-KA), and by a UOC postdoctoral stay.


Notes

  1. Although the terms “ethics” and “morality” have slightly different definitions (the former referring to a more reflective discipline, the latter to the prescription of behavior), we use them interchangeably in this work to refer to behaviors that are both in accordance with certain ethical principles and considered to bear “good”, or “right”, outcomes.

  2. In the story “Liar!” [5, ch. 6], for instance, a robot continuously lies to the characters in order to avoid hurting their feelings: an unintended interpretation of the term “harm” that was not anticipated in that robot’s design.

  3. It is worth mentioning that these two approaches to ethical systems, ethical design and artificial morality, are not mutually exclusive. In fact, Moor points out that the categories he defines in [19] are not exclusive either: an explicit ethical agent can easily be an ethical impact agent and an implicit ethical agent as well. Accordingly, furnishing an agent with artificial morality mechanisms does not require abandoning ethical design approaches beforehand.

  4. This would open up the Sorites question of “how low is low enough” for the system to make this decision, but that question falls outside the scope of this paper.

  5. It is worth recalling a case from 2020 in the UK in which, because students were unable to sit their A-level exams due to the Covid-19 pandemic, an automated system was implemented to predict students’ grades [16]. The predictions made by the students’ teachers and those made by the automated system turned out to be quite different (the automated predictions being substantially lower), which resulted in several protests that led the UK government to disregard the automated predictions and adopt the teachers’ predicted grades instead. This ties in directly with the fact that human teachers had access to the Personal layer of their students, which the automated system, fed only on data from what we call the Educational layer, lacked.

  6. Learning analytics could help understand a student’s performance and dedication, and provide grounds for a more informed decision.

References

  1. Anderson, M., Anderson, S.: GenEth: a general ethical dilemma analyzer. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)

  2. Anderson, M., Anderson, S.L., Armen, C.: MedEthEx: a prototype medical ethics advisor. In: AAAI, pp. 1759–1765 (2006)

  3. Anderson, S.L.: Asimov’s “three laws of robotics” and machine metaethics. AI Soc. 22(4), 477–493 (2008)

  4. Angwin, J., Larson, J., Mattu, S., Kirchner, L.: Machine bias. ProPublica, 23 May 2016

  5. Asimov, I.: I, Robot. HarperCollins Publishers (2013)

  6. Blass, J.: Interactive learning and analogical chaining for moral and commonsense reasoning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30 (2016)

  7. Blass, J., Forbus, K.: Moral decision-making by analogy: generalizations versus exemplars. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 29 (2015)

  8. Caliskan, A., Bryson, J.J., Narayanan, A.: Semantics derived automatically from language corpora contain human-like biases. Science 356(6334), 183–186 (2017)

  9. Casas-Roma, J., Conesa, J.: Towards the design of ethically-aware pedagogical conversational agents. In: Barolli, L., Takizawa, M., Yoshihisa, T., Amato, F., Ikeda, M. (eds.) 3PGCIC 2020. LNNS, vol. 158, pp. 188–198. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-61105-7_19

  10. Cervantes, J.A., López, S., Rodríguez, L.F., Cervantes, S., Cervantes, F., Ramos, F.: Artificial moral agents: a survey of the current status. Sci. Eng. Ethics 26(2), 501–532 (2020)

  11. Clarke, R.: Asimov’s laws of robotics: implications for information technology. Mach. Ethics 254–284 (2011)

  12. Dede, C., Richards, J., Saxberg, B.: Learning Engineering for Online Education: Theoretical Contexts and Design-Based Examples. Routledge (2018)

  13. Favaretto, M., De Clercq, E., Elger, B.S.: Big data and discrimination: perils, promises and solutions. A systematic review. J. Big Data 6(1), 1–27 (2019)

  14. Gunkel, D.J.: The Machine Question: Critical Perspectives on AI, Robots, and Ethics. MIT Press, Cambridge (2012)

  15. Honarvar, A.R., Ghasem-Aghaee, N.: An artificial neural network approach for creating an ethical artificial agent. In: 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), pp. 290–295. IEEE (2009)

  16. Kolkman, D.: “F**k the algorithm”?: What the world can learn from the UK’s A-level grading fiasco, August 2020. https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco/. Accessed 10 Feb 2021

  17. Levinson, M., Fay, J.: Dilemmas of Educational Ethics: Cases and Commentaries. Harvard Education Press (2019)

  18. Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S., Floridi, L.: The ethics of algorithms: mapping the debate. Big Data Soc. 3(2) (2016)

  19. Moor, J.: Four kinds of ethical robots. Philosophy Now 72, 12–14 (2009)

  20. Muntean, I., Howard, D.: Artificial moral agents: creative, autonomous, social. An approach based on evolutionary computation. In: Seibt, J., Hakli, R., Norskov, M. (eds.) Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, pp. 217–230. IOS Press (2014)

  21. Vanderelst, D., Winfield, A.: An architecture for ethical robots inspired by the simulation theory of cognition. Cogn. Syst. Res. 48, 56–66 (2018)

  22. Wallach, W., Franklin, S., Allen, C.: A conceptual and computational model of moral decision making in human and artificial agents. Top. Cogn. Sci. 2(3), 454–485 (2010)

  23. Wallach, W., Allen, C.: Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, Oxford (2008)

  24. Wallach, W., Allen, C., Smit, I.: Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI Soc. 22(4), 565–582 (2008)

  25. Yapo, A., Weiss, J.: Ethical implications of bias in machine learning. In: Proceedings of the 51st Hawaii International Conference on System Sciences (2018)


Author information


Corresponding author

Correspondence to Joan Casas-Roma.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Casas-Roma, J., Conesa, J., Caballé, S. (2021). Education, Ethical Dilemmas and AI: From Ethical Design to Artificial Morality. In: Sottilare, R.A., Schwarz, J. (eds) Adaptive Instructional Systems. Design and Evaluation. HCII 2021. Lecture Notes in Computer Science(), vol 12792. Springer, Cham. https://doi.org/10.1007/978-3-030-77857-6_11


  • DOI: https://doi.org/10.1007/978-3-030-77857-6_11


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77856-9

  • Online ISBN: 978-3-030-77857-6

  • eBook Packages: Computer Science (R0)
