
Abstract

We consider the question of what properties a Machine Ethics system should have. The question is complicated by the existence of ethical dilemmas with no agreed-upon solution. We provide an example to motivate why we do not believe that falling back on the elicitation of values from stakeholders is sufficient to guarantee the correctness of such systems. We go on to define two broad categories of ethical property that have arisen in our own work, and present a challenge to the community to approach this question in a more systematic way.
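To make the idea of an "ethical property" concrete, here is an illustrative sketch, not drawn from the paper itself: in frameworks such as MCAPL/AJPF (see the notes below), requirements on an agent's behaviour can be specified as linear temporal logic formulae over the agent's beliefs and actions. For a hypothetical assistive robot, using propositions invented purely for illustration, one such property might read:

    □( B(robot, human_in_danger) → ◇ A(robot, raise_alarm) )

that is, it is always the case that if the robot believes a human is in danger, it eventually takes the action of raising an alarm. A program model checker such as AJPF (built on Java PathFinder) can then check a property of this shape against all possible executions of the agent program.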


Notes

  1. While humans are full moral agents, it is contentious whether any computational system counts as a full moral agent, and most experts are of the opinion that no existing computational system has this property to any meaningful extent.

  2. https://autonomy-and-verification.github.io/tools/mcapl.

  3. https://github.com/javapathfinder.


Acknowledgements

The work in this paper was supported by EPSRC through the “Trustworthy Robotic Assistants” (EP/K006193/1), “Verifiable Autonomy” (EP/L024845/1), “Reconfigurable Autonomy” (EP/J011770/1), “Trustworthy Autonomous Systems Verifiability Node” (EP/V026801/1) and “Computational Agent Responsibility” (EP/W01081X/1) projects, by the Royal Academy of Engineering, through its “Chair in Emerging Technologies” scheme, and by the ERDF/NWDA-funded Virtual Engineering Centre.

Author information

Corresponding author

Correspondence to Louise A. Dennis.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Dennis, L.A., Fisher, M. (2025). Specifying Agent Ethics. In: Cranefield, S., Nardin, L.G., Lloyd, N. (eds.) Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XVII. COINE 2024. Lecture Notes in Computer Science, vol. 15398. Springer, Cham. https://doi.org/10.1007/978-3-031-82039-7_1

  • DOI: https://doi.org/10.1007/978-3-031-82039-7_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-82038-0

  • Online ISBN: 978-3-031-82039-7

  • eBook Packages: Computer Science, Computer Science (R0)
