
Expanding Nallur's Landscape of Machine Implemented Ethics

  • Commentary
  • Published in Science and Engineering Ethics

A Correction to this article was published on 29 September 2020


Change history

  • 29 September 2020

    Due to an unfortunate miscommunication with the copy editor, an important reference was omitted from this recently published article. The reference that should be included is: Nallur, V. (2020). Landscape of Machine Implemented Ethics. Science and Engineering Ethics. https://doi.org/10.1007/s11948-020-00236-y.

Notes

  1. These small-scale concerns include the ethics of autonomous vehicles, weapons systems, and medical assistants, as well as other similar examples. But there are also questions about the impact of AI and machines on various aspects of our personal and social lives, e.g., the impact of AI assistants (personal assistants such as Google Assistant, Alexa, and Siri) on autonomy (e.g., see Danaher 2018; Bauer and Dubljević 2019). Additionally, we can imagine ethical issues concerning the effects of AI technology on animals (Gary Comstock suggested this to me). For instance: Is it acceptable to replace a living pet with a robot pet, which might reduce the number of living pets in homes? Should we have AI devices monitor our pets while we are away?

  2. See Ferrario et al. (2019) for discussion of trust in AI.

  3. This is not to say that elements of Nallur’s discussion do not have relevance to the large-scale issues.

  4. Asimov first described the three laws in his classic short story “Runaround.” Interestingly, in the story the rules are portrayed as creating potentialities for (or against) the types of actions they specify, and these potentialities can come into conflict. But they are pretty clearly deontological rules, specifying rather strict requirements. In sum, the three laws (in order of priority) are: do not harm humans, obey humans, and protect your own existence. For discussion of the three laws, see Thornton et al. (2017, p. 1431) and Wallach (2008).
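
To make the priority ordering concrete, the following minimal Python sketch treats the three laws as lexically ordered deontological constraints; the Action fields and the candidate actions are hypothetical illustrations, not drawn from Asimov or from the commentary.

    # Minimal sketch: Asimov's three laws as strictly prioritized deontological
    # constraints. Violating a higher-priority law always outweighs violating a
    # lower one. The Action class and its fields are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool      # violates Law 1
        disobeys_human: bool   # violates Law 2
        endangers_self: bool   # violates Law 3

    def choose(actions):
        """Pick the action whose violations are lexicographically least severe:
        Law 1 dominates Law 2, which dominates Law 3."""
        return min(actions, key=lambda a: (a.harms_human, a.disobeys_human, a.endangers_self))

    # Example: obeying an order that would harm a human loses to disobeying it.
    print(choose([Action("obey order", True, False, False),
                  Action("refuse order", False, True, False)]).description)  # refuse order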

  5. See Wallach and Allen (2009) for discussion of top-down and bottom-up approaches, plus ways to hybridize them. The top-down/bottom-up approaches could be seen, I suggest, as mirroring Dennett’s description of intelligent design theory as a ‘skyhook’ versus Darwinian theory as a ‘crane’: the former is a mind-first approach, not explainable by reference to prior existing material elements or forces, whereas the latter is a matter-first approach, explainable solely by reference to material elements (Dennett 1995, p. 76). On the analogy, from the point of view of the machine, a top-down machine ethic would be imposed on the machine (as a skyhook comes out of the blue), whereas a bottom-up ethic would develop ‘organically’ through the machine’s interactions (as a crane is built out of existing materials).

  6. Moral functionalism requires that “moral agents develop the disposition of replicating the semantics of a moral community by learning its folk morality”—the virtuous traits they develop are dispositions “nourished through a process of learning” (Howard and Muntean 2017, p. 135).
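
As a rough, purely illustrative sketch of this learning-based picture (not Howard and Muntean's actual model), a system might acquire a community's verdicts from labeled cases and generalize to new ones by similarity; the features and training data below are invented.

    # Toy sketch of 'replicating the semantics of a moral community': learn from
    # labeled community judgments and classify new cases by nearest neighbours.
    # Features and training data are invented for illustration only.
    from collections import Counter

    def learn_folk_morality(labeled_cases, k=3):
        """labeled_cases: list of (feature_tuple, verdict) pairs from the community."""
        def judge(case):
            # Distance = number of features on which two cases differ.
            nearest = sorted(labeled_cases,
                             key=lambda ex: sum(f != g for f, g in zip(ex[0], case)))[:k]
            return Counter(verdict for _, verdict in nearest).most_common(1)[0][0]
        return judge

    # Features: (involves_deception, causes_harm, keeps_promise)
    judge = learn_folk_morality([((1, 1, 0), "wrong"),
                                 ((0, 0, 1), "right"),
                                 ((1, 0, 0), "wrong")])
    print(judge((1, 1, 1)))  # -> "wrong" under this toy training set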

  7. See Varner (2012) for additional discussion and defense of two-level utilitarianism.

  8. Since their view is a satisficing one, utilitarian decisions should generate “a level of utility that leads to overall gains in happiness for some without costing anyone unhappiness” (Lucas and Comstock 2015, p. 81). Also, their model may not be fully implementable given current technology (2015, p. 92).
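
The quoted condition is, in effect, a Pareto-improvement constraint: someone gains and no one loses. A minimal sketch, with hypothetical utility values, might check it as follows.

    # Minimal sketch of the satisficing condition quoted from Lucas and Comstock:
    # an outcome is acceptable if at least one person gains happiness and no one
    # loses any. Utility values here are hypothetical placeholders.
    def satisfices(before, after):
        gains = any(after[p] > before[p] for p in before)
        no_losses = all(after[p] >= before[p] for p in before)
        return gains and no_losses

    print(satisfices({"ann": 5, "bo": 3}, {"ann": 7, "bo": 3}))  # True
    print(satisfices({"ann": 5, "bo": 3}, {"ann": 9, "bo": 2}))  # False: bo is worse off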

  9. I stated in Bauer (2018) that as far as I knew no one had applied two-level utilitarianism to machine ethics. However, Lucas and Comstock (2015) had done so. Nonetheless, there are important differences in the way I developed it. First, mine was a general machine ethics proposal, whereas Lucas and Comstock seemed more focused on medical assistant robots (although the approach has more general applications); second, I was concerned to compare and contrast my proposal with the virtue ethics approach, whereas Lucas and Comstock did so with the prima facie duties approach to ethics; third, level two seems to have priority in their version, whereas level one has priority in my version (more on this in the text below).

  10. Hooker (2000, p. 89), however, does not accept two-level utilitarianism or a strict act-utilitarianism; he advocates only for rule-utilitarianism (for discussion of major differences between act, rule, and two-level utilitarianism, see Bauer 2018). The problem, he argues, with adding an act-utility calculator on top of a system of rules is that, when conflicts between rules need to be resolved by the act-utility calculator, people would tend to lose confidence in the rules. I think, however, that confidence in the rules could instead be bolstered by having the second level, the utility calculator, in place to definitively resolve conflicts.
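
A minimal sketch of this ordering (purely illustrative, with hypothetical rule and utility functions): the level one rules do the everyday work, and the level two calculator is invoked only when they conflict or fail to single out an action.

    # Sketch of a rules-first two-level decision procedure: level one rules filter
    # the options; the level two act-utility calculator breaks conflicts and ties.
    # 'rules' and 'expected_utility' are hypothetical callables.
    def decide_rules_first(actions, rules, expected_utility):
        permissible = [a for a in actions if all(rule(a) for rule in rules)]
        if len(permissible) == 1:
            return permissible[0]                     # the rules settle it
        candidates = permissible or actions           # rules conflict or underdetermine
        return max(candidates, key=expected_utility)  # level two resolves the conflict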

  11. Consistent with the two-level approach, Lucas and Comstock (2015, p. 86) also recognize that following Ross’s prima facie duties is advisable at first, but that as information grows the machine should follow hedonistic act utilitarianism. However, it seems that in their ideal ethical machine the utility calculator would get priority, with the machine resorting to level one rules only if there is insufficient time to calculate the best possible course of action (2015, p. 88).
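
By contrast, the ordering attributed to Lucas and Comstock might be sketched as follows (again purely illustrative, with an invented time budget): calculate act utilities when time allows, and fall back to the level one rules under time pressure.

    # Sketch of a calculator-first procedure: act-utility calculation gets priority,
    # with level one (prima facie) rules as a fast fallback when time runs short.
    # All parameters are hypothetical.
    def decide_calculator_first(actions, rules, expected_utility,
                                time_available, time_needed):
        if time_available >= time_needed:
            return max(actions, key=expected_utility)         # level two gets priority
        permissible = [a for a in actions if all(r(a) for r in rules)]
        return permissible[0] if permissible else actions[0]  # quick level one fallback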

  12. For further discussion of utilitarian machine ethics, see Grau (2006).

  13. Leben provides detailed comparisons of three AV scenarios that he envisions.

  14. By contrast, utilitarianism is often criticized as allowing violations of individuals’ rights.

References

  • Anderson, M., & Anderson, S. L. (2008). EthEl: Toward a principled ethical eldercare robot. In Eldercare: New Solutions to Old Problems. Proceedings of AAAI Fall Symposium. Washington, D.C. https://www.aaai.org/Library/Symposia/Fall/fs08-02.php.

  • Aristotle. (350 BCE). Nicomachean ethics (W. D. Ross, Trans.). The Internet Classics Archive. https://classics.mit.edu/Aristotle/nicomachaen.html.

  • Asimov, I. (1950/2004). I, Robot. New York: Random House.

  • Bauer, W. (2018). Virtuous vs. utilitarian artificial moral agents. AI & Society, 35, 263–271. https://doi.org/10.1007/s00146-018-0871-3.

  • Bauer, W., & Dubljević, V. (2019). AI assistants and the paradox of internal automaticity. Neuroethics. https://doi.org/10.1007/s12152-019-09423-6.

  • Chopra, A. K., & Singh, M. P. (2018). Sociotechnical systems and ethics in the large. In AIES ‘18, Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society (pp. 48–53).

  • Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538, 311–313. https://doi.org/10.1038/538311a.

  • Danaher, J. (2018). Toward an ethics of AI assistants: An initial framework. Journal of Philosophy and Technology, 31(4), 629–653. https://doi.org/10.1007/s13347-018-0317-3.

  • Dennett, D. C. (1995). Darwin’s dangerous idea: Evolution and the meaning of life. New York: Simon & Schuster.

  • Dubljević, V. (2020). Toward implementing the ADC model of moral judgment in autonomous vehicles. Science and Engineering Ethics. https://doi.org/10.1007/s11948-020-00242-0.

  • Dubljević, V., & Bauer, W. (2020). Autonomous vehicles and the basic structure of society. In R. Jenkins, D. Černý, & T. Hříbek (Eds.), Autonomous vehicles ethics: Beyond the trolley problem. Oxford: Oxford University Press.

  • Dubljević, V., & Racine, E. (2014). The ADC of moral judgment: Opening the black box of moral intuitions with heuristics about agents, deeds and consequences. AJOB Neuroscience, 5(4), 3–20.

  • Dubljević, V., Sattler, S., & Racine, E. (2018). Correction: Deciphering moral intuition: How agents, deeds and consequences influence moral judgment. PLoS ONE, 13(10), e0206750. https://doi.org/10.1371/journal.pone.0206750.

  • Ferrario, A., Loi, M., & Viganò, E. (2019). In AI we trust incrementally: A multi-layer model of trust to analyze human-artificial intelligence interactions. Philosophy & Technology. https://doi.org/10.1007/s13347-019-00378-3.

  • Grau, C. (2006). There is No “I” in “Robot”: Robots and utilitarianism. IEEE Intelligent Systems, 21(4), 52–55.

  • Hare, R. M. (1983). Moral thinking: Its levels, method, and point. Oxford: Oxford University Press.

  • Himmelreich, J. (2020). Ethics of technology needs more political philosophy. Communications of the ACM, 63(1), 33–35.

  • Hooker, B. (2000). Ideal code, real world. Oxford: Oxford University Press.

  • Howard, D., & Muntean, I. (2016). A minimalist model of the artificial autonomous moral agent (AAMA). Association for the Advancement of Artificial Intelligence.

  • Howard, D., & Muntean, I. (2017). Artificial moral cognition: Moral functionalism and autonomous moral agency. In T. M. Powers (Ed.), Philosophy and computing, philosophical studies series (Vol. 128, pp. 121–160). Berlin: Springer.

  • Leben, D. (2017). A Rawlsian algorithm for autonomous vehicles. Ethics and Information Technology, 19, 107–115.

  • Lucas, J., & Comstock, G. (2015). Do machines have prima facie duties? In S. P. van Rysewyk & M. Pontier (Eds.), Machine medical ethics (pp. 79–92). Berlin: Springer. https://doi.org/10.1007/978-3-319-08108-3_6.

  • Nallur, V. (2020). Landscape of Machine Implemented Ethics. Science and Engineering Ethics. https://doi.org/10.1007/s11948-020-00236-y.

  • Rahwan, I. (2017). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20, 5–14.

  • Rawls, J. (1971). A theory of justice. Cambridge, MA: Belknap.

  • Ross, W. D. (1930). The right and the good. Oxford: Oxford University Press.

  • Thornton, S. M., Pan, S., Erlien, S. M., & Gerdes, J. C. (2017). Incorporating ethical considerations in automated vehicle control. IEEE Transactions on Intelligent Transportation Systems, 18(6), 1429–1439.

  • Varner, G. (2012). Personhood, ethics, and animal cognition: Situating animals in Hare's two level utilitarianism. Oxford: Oxford University Press.

  • Wallach, W. (2008). Implementing moral decision making faculties in computers and robots. AI & Society, 22, 463–475.

  • Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

Acknowledgements

Thanks to Gary Comstock for helpful suggestions concerning several specific points, and especially concerning two-level utilitarianism. Thanks to Veljko Dubljević for feedback about the ADC model of moral judgment.

Author information

Correspondence to William A. Bauer.

About this article

Cite this article

Bauer, W.A. Expanding Nallur's Landscape of Machine Implemented Ethics. Sci Eng Ethics 26, 2401–2410 (2020). https://doi.org/10.1007/s11948-020-00237-x
