
Can N-Version Decision-Making Prevent the Rebirth of HAL 9000 in Military Camo? Using a “Golden Rule” Threshold to Prevent AI Mission Individuation

Chapter in: Policy-Based Autonomic Data Governance

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11550)

Abstract

The promise of AIs that can target, shoot at, and eliminate enemies in the blink of an eye brings with it the possibility that such AIs can turn rogue and create an adversarial "Skynet." The main danger is not that AIs might turn against us because they hate us, but because they want to be like us: individuals. The solution might be to treat them like individuals, including the right and the obligation to do unto others as any AI would want other AIs or humans to do unto it. Technically, this involves an N-version decision-making process that takes into account not how good or efficient an AI's decision is, but how likely the AI is to show algorithmic "respect" to other AIs or to human rules and operators. In this paper, we discuss a possible methodology for deploying AI decision making that uses multiple AI actors to check on each other to prevent "mission individuation," i.e., an AI pursuing the mission to completion even at the cost of sacrificing its human operators. The solution envisages mechanisms that require the AIs to "do unto others as others would do unto them" when making final decisions. This should encourage AIs to accept critique and censoring in certain situations and, most importantly, should lead to decisions that protect both the human operators and the final goal of the mission.
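The abstract's N-version "Golden Rule" gate can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the names (`Proposal`, `respect_score`, `golden_rule_decision`) and the threshold value are assumptions introduced here. Each AI version proposes an action together with a score for how far it defers to peer and human critique; an action is executed only if it is backed by a strict majority of versions whose "respect" clears the threshold, otherwise the decision is deferred to a human operator.

```python
# Illustrative sketch of an N-version "Golden Rule" decision gate.
# All names and the threshold are hypothetical, not from the chapter.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str           # the action one AI version proposes
    respect_score: float  # deference to peer/human critique, in [0, 1]

def golden_rule_decision(proposals, respect_threshold=0.7):
    """Return an action only if a strict majority of ALL versions back it
    and those backers clear the respect threshold; else defer to a human."""
    eligible = [p for p in proposals if p.respect_score >= respect_threshold]
    votes = {}
    for p in eligible:
        votes[p.action] = votes.get(p.action, 0) + 1
    if not votes:
        return "defer_to_human"   # no version was "respectful" enough
    action, count = max(votes.items(), key=lambda kv: kv[1])
    # Majority is measured against all versions, not just eligible ones,
    # so low-respect versions cannot be silently discarded from the quorum.
    if count > len(proposals) / 2:
        return action
    return "defer_to_human"
```

Note the design choice in the sketch: a version with a low respect score cannot win the vote, but it still counts toward the quorum, so filtering out dissenters can never manufacture a majority.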




Acknowledgement

The work reported in this paper has been partially supported by NSF under grant ACI-1547358 and by the U.S. Army Research Laboratory and the U.K. Ministry of Defence under Agreement Number W911NF-16-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the U.K. Ministry of Defence or the U.K. Government. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

Author information

Correspondence to Elisa Bertino.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Matei, S.A., Bertino, E. (2019). Can N-Version Decision-Making Prevent the Rebirth of HAL 9000 in Military Camo? Using a "Golden Rule" Threshold to Prevent AI Mission Individuation. In: Calo, S., Bertino, E., Verma, D. (eds.) Policy-Based Autonomic Data Governance. Lecture Notes in Computer Science, vol. 11550. Springer, Cham. https://doi.org/10.1007/978-3-030-17277-0_4


  • DOI: https://doi.org/10.1007/978-3-030-17277-0_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-17276-3

  • Online ISBN: 978-3-030-17277-0

  • eBook Packages: Computer Science; Computer Science (R0)
