Growing Recursive Self-Improvers

Conference paper, Artificial General Intelligence (AGI 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9782)

Abstract

Research into the capability of recursive self-improvement typically only considers pairs of ⟨agent, self-modification candidate⟩, and asks whether the agent can determine/prove if the self-modification is beneficial and safe. But this leaves out the much more important question of how to come up with a potential self-modification in the first place, as well as how to build an AI system capable of evaluating one. Here we introduce a novel class of AI systems, called experience-based AI (expai), which trivializes the search for beneficial and safe self-modifications. Instead of distracting us with proof-theoretical issues, expai systems force us to consider their education in order to control a system's growth towards a robust and trustworthy, benevolent and well-behaved agent. We discuss what a practical instance of expai looks like and build towards a "test theory" that allows us to gauge an agent's level of understanding of educational material.
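
To make the framing concrete, the following minimal Python sketch (our illustration, not the authors' method; the names Agent, Modification, is_beneficial_and_safe, and self_improve are hypothetical) separates the two questions: deciding whether a given ⟨agent, candidate⟩ pair is acceptable, versus producing and evaluating the candidates in the first place.

```python
# Illustrative sketch only; all names here are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Agent:
    program: str  # stand-in for the agent's own source code/configuration

@dataclass
class Modification:
    description: str
    apply: Callable[[Agent], Agent]  # rewrites the agent into a new one

def is_beneficial_and_safe(agent: Agent, mod: Modification) -> bool:
    """The question the literature typically asks of a given
    <agent, self-modification candidate> pair; proof-theoretic
    approaches try to decide it formally."""
    raise NotImplementedError

def self_improve(agent: Agent, candidates: Iterable[Modification]) -> Agent:
    """The prior questions the paper highlights: where do the
    candidates come from, and how do we build a system that can
    evaluate them at all?"""
    for mod in candidates:
        if is_beneficial_and_safe(agent, mod):
            agent = mod.apply(agent)
    return agent
```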

Notes

  1. This work is motivated in part by the fact that human designers and teachers do not possess the full wisdom needed to implement and grow a flawlessly benevolent intelligence. We are therefore skeptical about the safety of formal proof-based approaches, where a system tries to establish the correctness, over the indefinite future, of self-modifications with respect to some initially imposed utility function: such systems might perfectly optimize themselves towards said utility function, but what if this utility function itself is flawed?

  2. The system in the cited work, called aera, provides a proof of concept. We are urging research into expai precisely because aera turned out to be a particularly promising path [10], and we consider it likely to be superseded by even better and more powerful instances of expai.

  3. The only way to avoid the autonomous generation of subgoals is to specify every action to be taken, but that amounts to total preprogramming, which, if it were possible, would mean that we need not impart any intelligence at all.

  4. By definition, a granule is a very small object that still has some structure (larger than a grain).

  5. In short, this statement just asserts the sufficient expressive power of granules.

  6. By design such a drive cannot be deleted by the system itself; a minimal sketch of one way to achieve this follows below. More sophisticated means of bypassing drives (e.g., through hardware self-surgery) cannot be prevented through careful implementation; indeed, the proposed Test Theory is exactly meant to gauge both the understanding of the imposed drives and constraints, and the development of values regarding them.
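
As a hedged illustration of footnote 6, one way to make drives non-deletable "by design" (our assumption about a possible implementation, not the paper's) is to keep them in a read-only region that the system's self-modification operations cannot touch; the class and field names below are hypothetical.

```python
# Hedged sketch (our assumption, not the authors' implementation):
# drives live in a read-only mapping, so no self-modification path
# through this class can delete or overwrite them.
from types import MappingProxyType

class ExpaiSystem:
    def __init__(self, drives: dict, program: dict):
        self._drives = MappingProxyType(dict(drives))  # immutable view
        self.program = dict(program)  # the self-modifiable part

    def self_modify(self, key: str, value) -> None:
        # Self-modification may only rewrite `program`; writing through
        # the drives proxy raises TypeError.
        self.program[key] = value

system = ExpaiSystem(drives={"curiosity": 1.0}, program={"policy": "explore"})
system.self_modify("policy", "exploit")  # allowed
# system._drives["curiosity"] = 0.0      # would raise TypeError
```

As the footnote notes, such an implementation guards only against deletion from within; hardware-level bypasses are outside its reach.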

References

  1. Bloom, B., Engelhart, M.D., Furst, E.J., Hill, W.H., Krathwohl, D.R. (eds.): Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. David McKay, New York (1956)

  2. Fallenstein, B., Soares, N.: Problems of self-reference in self-improving space-time embedded intelligence. In: Goertzel, B., Orseau, L., Snaider, J. (eds.) AGI 2014. LNCS, vol. 8598, pp. 21–32. Springer, Heidelberg (2014)

  3. Goertzel, B., Bugaj, S.V.: AGI preschool. In: Proceedings of the Second Conference on Artificial General Intelligence (AGI-2009). Atlantis Press, Paris (2009)

  4. Krathwohl, D.R.: A revision of Bloom's taxonomy: an overview. Theory Pract. 41(4), 212–218 (2002)

  5. Marzano, R.J., Kendall, J.S.: The need for a revision of Bloom's taxonomy. In: Marzano, R., Kendall, J.S. (eds.) The New Taxonomy of Educational Objectives, pp. 1–20. Corwin Press, Thousand Oaks (2006)

  6. Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: high confidence predictions for unrecognizable images (2014). http://arXiv.org/abs/1412.1897

  7. Nikolić, D.: AI-Kindergarten: A method for developing biological-like artificial intelligence (2016, forthcoming). http://www.danko-nikolic.com/wp-content/uploads/2015/05/AI-Kindergarten-patent-pending.pdf. Accessed 1 April 2016

  8. Nivel, E., Thórisson, K.R.: Self-programming: operationalizing autonomy. In: Proceedings of the 2nd Conference on Artificial General Intelligence (AGI-2009) (2009)

  9. Nivel, E., et al.: Bounded seed-AGI. In: Goertzel, B., Orseau, L., Snaider, J. (eds.) AGI 2014. LNCS, vol. 8598, pp. 85–96. Springer, Heidelberg (2014)

  10. Nivel, E., Thórisson, K.R., Steunebrink, B.R., Dindo, H., Pezzulo, G., Rodríguez, M., Hernández, C., Ognibene, D., Schmidhuber, J., Sanz, R., Helgason, H.P., Chella, A., Jonsson, G.K.: Autonomous acquisition of natural language. In: Proceedings of the IADIS International Conference on Intelligent Systems & Agents, pp. 58–66 (2014)

  11. Nivel, E., Thórisson, K.R., Steunebrink, B., Schmidhuber, J.: Anytime bounded rationality. In: Bieger, J., Goertzel, B., Potapov, A. (eds.) AGI 2015. LNCS, vol. 9205, pp. 121–130. Springer, Heidelberg (2015)

  12. Schmidhuber, J.: Gödel machines: fully self-referential optimal universal self-improvers. In: Goertzel, B., Pennachin, C. (eds.) Artificial General Intelligence. Cognitive Technologies, pp. 199–226. Springer, Heidelberg (2007)

  13. Schmidhuber, J.: Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connection Sci. 18(2), 173–187 (2006)

  14. Schmidhuber, J.: Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Trans. Auton. Ment. Dev. 2(3), 230–247 (2010)

  15. Steunebrink, B.R., Schmidhuber, J.: A family of Gödel machine implementations. In: Schmidhuber, J., Thórisson, K.R., Looks, M. (eds.) AGI 2011. LNCS, vol. 6830, pp. 275–280. Springer, Heidelberg (2011)

  16. Steunebrink, B.R., Koutník, J., Thórisson, K.R., Nivel, E., Schmidhuber, J.: Resource-bounded machines are motivated to be effective, efficient, and curious. In: Kühnberger, K.-U., Rudolph, S., Wang, P. (eds.) AGI 2013. LNCS, vol. 7999, pp. 119–129. Springer, Heidelberg (2013)

  17. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I.J., Fergus, R.: Intriguing properties of neural networks (2013). http://arXiv.org/abs/1312.6199

  18. Thórisson, K.R., Bieger, J., Thorarensen, T., Sigurdardottir, J.S., Steunebrink, B.R.: Why artificial intelligence needs a task theory - and what it might look like. In: Proceedings of AGI-2016 (2016)

  19. Thórisson, K.R., Kremelberg, D., Steunebrink, B.R., Nivel, E.: About understanding. In: Proceedings of AGI-2016 (2016)

  20. Thórisson, K.R., Nivel, E.: Achieving artificial general intelligence through peewee granularity. In: Proceedings of AGI-2009, pp. 222–223 (2009)

  21. Yudkowsky, E., Herreshoff, M.: Tiling agents for self-modifying AI, and the Löbian obstacle (2013). https://intelligence.org/files/TilingAgentsDraft.pdf

Acknowledgments

The authors would like to thank Eric Nivel and Klaus Greff for seminal discussions and helpful critique. This work has been supported by a grant from the Future of Life Institute.

Author information

Correspondence to Bas R. Steunebrink.

Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Steunebrink, B.R., Thórisson, K.R., Schmidhuber, J. (2016). Growing Recursive Self-Improvers. In: Steunebrink, B., Wang, P., Goertzel, B. (eds.) Artificial General Intelligence. AGI 2016. Lecture Notes in Computer Science (LNAI), vol. 9782. Springer, Cham. https://doi.org/10.1007/978-3-319-41649-6_13

  • DOI: https://doi.org/10.1007/978-3-319-41649-6_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-41648-9

  • Online ISBN: 978-3-319-41649-6

  • eBook Packages: Computer Science, Computer Science (R0)
