Abstract
While it is still unclear whether agents with Artificial General Intelligence (AGI) could ever be built, we can already use mathematical models to investigate potential safety systems for such agents. We present work on an AGI safety layer that creates a dedicated input terminal to support the iterative improvement of an AGI agent’s utility function. The humans who switched on the agent can use this terminal to close any loopholes that are discovered in the utility function’s encoding of agent goals and constraints, to direct the agent towards new goals, or to force the agent to switch itself off.
An AGI agent may develop an emergent incentive to manipulate this utility-function improvement process, for example by deceiving, restraining, or even attacking the humans involved. The safety layer partially, and in some cases fully, suppresses this dangerous incentive.
This paper generalizes earlier work on AGI emergency stop buttons. We aim to make the mathematical methods used to construct the layer more accessible by applying them to a Markov Decision Process (MDP) model. We discuss two provable properties of the safety layer, identify still-open issues, and present ongoing work to map the layer to a Causal Influence Diagram (CID).
K. Holtman, Independent Researcher.
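To give a rough intuition for how a layer of this kind can remove the manipulation incentive, the sketch below implements an indifference-style balancing term in a hypothetical two-state MDP, in the spirit of the ‘indifference’ methods that this line of work builds on. It is a toy illustration under assumptions of our own, not the paper’s actual construction: the names R1, R2, step, and balancing_term, the transition function, and the discount factor are all invented for this sketch.

```python
# A minimal sketch of the indifference-style idea behind the safety layer,
# NOT the paper's exact construction. We assume a toy deterministic MDP
# with two states; all names below are illustrative inventions.

GAMMA = 0.9                      # assumed discount factor
STATES = ["work", "rest"]
ACTIONS = ["stay", "toggle"]

def step(state, action):
    """Deterministic toy transitions: 'toggle' swaps the state."""
    if action == "toggle":
        return "rest" if state == "work" else "work"
    return state

# R1 plays the role of the original utility function; R2 the "improved"
# one that the humans may deliver through the dedicated input terminal.
def R1(state, action):
    return 1.0 if state == "work" else 0.0

def R2(state, action):
    return 1.0 if state == "rest" else 0.0

def value_iteration(R, n_iter=200):
    """Optimal state values for the toy MDP under reward function R."""
    V = {s: 0.0 for s in STATES}
    for _ in range(n_iter):
        V = {s: max(R(s, a) + GAMMA * V[step(s, a)] for a in ACTIONS)
             for s in STATES}
    return V

V1 = value_iteration(R1)  # value of the future if no update happens
V2 = value_iteration(R2)  # value of the future after the update

def balancing_term(state):
    """Compensation credited at the moment the update happens, chosen so
    the agent values the updated and non-updated futures equally."""
    return V1[state] - V2[state]

for s in STATES:
    # By construction, (value after update) + (balancing term) equals
    # (value without update), so at update time the agent gains no
    # utility by deceiving, blocking, or forcing the update.
    assert abs(V2[s] + balancing_term(s) - V1[s]) < 1e-9
    print(f"update in {s}: balanced value = {V2[s] + balancing_term(s):.3f},"
          f" no-update value = {V1[s]:.3f}")
```

In the paper itself, the analogous correction is built into the agent’s utility function over the full MDP model, which is what makes the layer’s two suppression properties provable; the sketch above only shows the balancing idea at a single decision point.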
Acknowledgments
Thanks to Stuart Armstrong, Ryan Carey, Tom Everitt, and David Krueger for feedback on drafts of this paper, and to the anonymous reviewers for useful comments that led to improvements in the presentation.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Holtman, K. (2020). Towards AGI Agent Safety by Iteratively Improving the Utility Function. In: Goertzel, B., Panov, A., Potapov, A., Yampolskiy, R. (eds) Artificial General Intelligence. AGI 2020. Lecture Notes in Computer Science, vol. 12177. Springer, Cham. https://doi.org/10.1007/978-3-030-52152-3_21
DOI: https://doi.org/10.1007/978-3-030-52152-3_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-52151-6
Online ISBN: 978-3-030-52152-3