Ethical Alignment in Citizen-Centric AI

  • Conference paper
  • First Online:
PRICAI 2024: Trends in Artificial Intelligence (PRICAI 2024)

Abstract

This paper discusses the importance of ethical alignment in AI systems, particularly those designed with citizen end users in mind. It explores the intersection of responsible AI, socio-technical systems, and citizen-centric design, proposing that addressing the ethical aspect of decisions in citizen-centric AI systems enhances trust and acceptance of AI technologies. We focus on four key areas: (1) the formal specification of ethical principles, (2) processes to extract and elicit individual users’ ethical preferences, (3) aggregating these ethical preferences for a collective, and (4) mechanisms to ensure that the behaviour of AI systems aligns with the collective ethical preferences. We put forward a research roadmap by identifying challenges in these areas and highlighting solution concepts with the potential to address them.
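The third area above, aggregating individual ethical preferences into a collective choice, can be illustrated with a classical social-choice rule. The following is a minimal, hypothetical sketch (function name and data invented for illustration; the paper itself does not prescribe this rule) that uses a Borda count to combine users' rankings over ethical theories:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate ranked preferences over ethical theories via Borda count.

    Each ranking lists theories from most to least preferred; a theory
    in position i of an n-item ranking earns n - 1 - i points.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, theory in enumerate(ranking):
            scores[theory] += n - 1 - position
    # Theories sorted by descending collective score.
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical preferences of three users over three ethical theories.
users = [
    ["utilitarian", "deontological", "virtue"],
    ["deontological", "utilitarian", "virtue"],
    ["utilitarian", "virtue", "deontological"],
]
print(borda_aggregate(users))
# "utilitarian" scores 2+1+2 = 5, "deontological" 1+2+0 = 3, "virtue" 0+0+1 = 1
```

A positional rule like Borda is only one option; as the paper's roadmap suggests, the appropriate aggregation mechanism (majority, Borda, judgment aggregation, and so on) is itself an open design question for citizen-centric AI.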


Notes

  1. Note that by “preference”, we refer to users’ preferences with respect to different ethical theories; e.g., if a user prefers their autonomous vehicle to follow the utilitarian view, then it is desirable for the vehicle to do so.

  2. In this work, we do not evaluate different paradigms of ethics; rather, we aim to allow users to see their choice of ethics implemented in the AI technologies they use.

  3. https://www.moralmachine.net/

  4. https://ig.ft.com/climate-game/


Acknowledgements

This work is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) through a Turing AI Fellowship (EP/V022067/1) on Citizen-Centric AI Systems (https://ccais.ac.uk/) and by Responsible AI UK (EP/Y009800/1) (https://rai.ac.uk/). We also thank the PRICAI reviewers for their constructive feedback. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any author-accepted manuscript version arising.

Author information

Correspondence to Jayati Deshmukh.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Deshmukh, J., Yazdanpanah, V., Stein, S., Norman, T.J. (2025). Ethical Alignment in Citizen-Centric AI. In: Hadfi, R., Anthony, P., Sharma, A., Ito, T., Bai, Q. (eds.) PRICAI 2024: Trends in Artificial Intelligence. PRICAI 2024. Lecture Notes in Computer Science, vol. 15285. Springer, Singapore. https://doi.org/10.1007/978-981-96-0128-8_4

  • DOI: https://doi.org/10.1007/978-981-96-0128-8_4

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-96-0127-1

  • Online ISBN: 978-981-96-0128-8

  • eBook Packages: Computer Science, Computer Science (R0)
