Abstract
A common reaction to first encountering the problem statement of Friendly AI ("Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome") is to propose a simple design which allegedly suffices; or to reject the problem by replying that "constraining" our creations is undesirable or unnecessary. This paper briefly presents some of the reasoning which suggests that Friendly AI is solvable, but not simply or trivially so, and that a wise strategy would be to invoke detailed learning of and inheritance from human values as a basis for further normalization and reflection.
This is a much-shortened form of a longer paper which may be found at http://singinst.org/upload/complex-value-systems.pdf
© 2011 Springer-Verlag Berlin Heidelberg
Yudkowsky, E. (2011). Complex Value Systems in Friendly AI. In: Schmidhuber, J., Thórisson, K.R., Looks, M. (eds) Artificial General Intelligence. AGI 2011. Lecture Notes in Computer Science(), vol 6830. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-22887-2_48
Print ISBN: 978-3-642-22886-5
Online ISBN: 978-3-642-22887-2