Abstract
Out of fear that artificial general intelligence (AGI) might pose a future risk to human existence, some have suggested slowing or stopping AGI research to allow time for theoretical work to guarantee its safety. Since an AGI system will necessarily be a complex closed-loop learning controller that lives and works in semi-stochastic environments, its behaviors are not fully determined by its design and initial state, so no mathematico-logical guarantees can be provided for its safety. Until actual running AGI systems exist that can be thoroughly analyzed and studied (and there is as yet no consensus on how to create them), any proposal on their safety can only be based on weak conjecture. As any practical AGI will unavoidably start in a relatively harmless baby-like state, subject to the nurture and education that we provide, we argue that our best hope of achieving safe AGI is to provide it with a proper education.
This work is supported by Reykjavik University’s School of Computer Science and a Centers of Excellence grant of the Science & Technology Policy Council of Iceland.
© 2015 Springer International Publishing Switzerland
Cite this paper
Bieger, J., Thórisson, K.R., Wang, P. (2015). Safe Baby AGI. In: Bieger, J., Goertzel, B., Potapov, A. (eds.) Artificial General Intelligence. AGI 2015. Lecture Notes in Computer Science, vol. 9205. Springer, Cham. https://doi.org/10.1007/978-3-319-21365-1_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-21364-4
Online ISBN: 978-3-319-21365-1