Abstract
Oxford physicist David Deutsch recently claimed that AI researchers had made no progress towards creating truly intelligent agents and were unlikely to do so until they began making machines that could produce creative explanations for themselves. Deutsch argued that AI must be possible because of the Universality of Computation, but that progress towards it would require nothing less than a new philosophical direction: a rejection of inductivism in favour of fallibilism. This paper sets out to review and respond to these claims. After first establishing a broad framework and terminology with which to discuss these questions, it examines the inductivist and fallibilist philosophies. It argues that Deutsch is right about fallibilism, not only because of the need for creative explanations but also because it makes it easier for agents to create and maintain models—a crucial ability for any sophisticated agent. However, his claim that AI research has made no progress is debatable, if not mistaken. The paper concludes with suggestions for ways in which agents might come up with truly creative explanations and looks briefly at the meaning of knowledge and truth in a fallibilist world.
Notes
1. While agents commonly have well-defined physical boundaries, there is no obvious need for such a restriction; cf. distributed agents and the extended mind hypothesis.
2. Nothing in what follows actually depends on there being a physical reality; there is simply no way we can prove that an external world exists, that it has existed for more than a few seconds prior to the present moment, or that it is not a figment of the individual's imagination.
3. Agents may not know what these states are, and may initially have them fulfilled either accidentally or purposefully through the actions of other agents, such as parents.
4. What the states are, how they are mapped, and how they are recognised and interpreted are also important questions.
5. Of course, the model's states and their interpretation may well change between implementations.
6. Not every material is necessarily suitable.
7. Some people include abduction and other uncertain inferences under the general heading of induction. In this paper I use induction in the narrow sense outlined above, rather than in this broader sense.
8. There is an exact parallel here with the idea from the Philosophy of Science that all experimentation is carried out within the context of a particular theory.
9. The inability to find a suitable analogy has led some physicists to suggest that the intuitions provided by realist analogies (models) are unnecessary, and that the highly abstract mathematical formulations (also models) are sufficient in and of themselves. Deutsch disagrees and devotes a large part of his book to explaining his realist model of a quantum mechanical universe.
10. Modern society promotes individual development by explicitly creating such encounters, in the form of schools, museums, exhibitions, concerts, etc.
References
Adriaans, P. (2013). Information. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2013 Edition). Retrieved from http://plato.stanford.edu/archives/fall2013/entries/information/
Bartha, P. (2013). Analogy and analogical reasoning. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2013 Edition). Retrieved from http://plato.stanford.edu/archives/fall2013/entries/reasoning-analogy/
Clark, A. (2013a). Are we predictive engines? Perils, prospects, and the puzzle of the porous perceiver. Behavioral and Brain Sciences, 36(3), 233–253. doi:10.1017/S0140525X12002440.
Clark, A. (2013b). Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36, 181–253. doi:10.1017/S0140525X12000477.
Davenport, D. (1997). Towards a computational account of the notion of truth. TAINN’97. Retrieved from http://www.cs.bilkent.edu.tr/~david/papers/truth.doc
Davenport, D. (2009). Revisited: A computational account of the notion of truth. In J. Vallverdu (Ed.), ecap09, Proceedings of the 7th European Conference on Philosophy and Computing. Barcelona: Universitat Autonoma de Barcelona.
Davenport, D. (2012a). Computationalism: Still the only game in town. Minds and Machines, 22(3), 183–190. doi:10.1007/s11023-012-9271-5.
Davenport, D. (2012b). The two (computational) faces of AI. In V. C. Müller (Ed.), Theory and philosophy of artificial intelligence (SAPERE). Berlin: Springer.
Deutsch, D. (2011). The beginning of infinity. New York: Penguin.
Deutsch, D. (2012, October 3). Creative blocks: The very laws of physics imply that artificial intelligence must be possible. What's holding us up? Retrieved June 21, 2013, from Aeon Magazine: http://www.aeonmagazine.com/being-human/david-deutsch-artificial-intelligence/
Floridi, L. (2011). The philosophy of information. New York: Oxford University Press.
Friston, K. (2008). Hierarchical models in the brain. PLoS Computational Biology, 4(11). doi:10.1371/journal.pcbi.1000211.
Friston, K., Adams, R. A., Perrinet, L., & Breakspear, M. (2012). Perceptions as hypotheses: Saccades as experiments. Frontiers in Psychology, 3, 151. doi:10.3389/fpsyg.2012.00151.
Goertzel, B. (2012). The real reasons we don’t yet have AGI. Retrieved from http://www.kurzweilai.net/the-real-reasons-we-dont-have-agi-yet
Hutter, M. (2005). Universal artificial intelligence: Sequential decisions based on algorithmic probability. Berlin: Springer. doi:10.1007/b138233.
Kaufman, A. S. (2000). Tests of intelligence. In R. J. Sternberg (Ed.), Handbook of intelligence. New York: Cambridge University Press.
Legg, S. (2008). Machine super intelligence (PhD. thesis). Lugano: Faculty of Informatics of the University of Lugano. Retrieved from http://www.vetta.org/documents/Machine_Super_Intelligence.pdf
Nagel, J., Mar, R., & San Juan, V. (2013). Authentic Gettier cases: A reply to Starmans and Friedman. Cognition, 129(3), 666–669.
Popper, K. R. (1959). The logic of scientific discovery. New York: Basic Books.
Rathmanner, S., & Hutter, M. (2011). A philosophical treatise of universal induction. Entropy, 13, 1076–1136.
Vickers, J. (2013). The problem of induction. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2013 Edition). Retrieved from http://plato.stanford.edu/archives/fall2013/entries/induction-problem/
Wiedermann, J. (2013). The creativity mechanisms in embodied agents: An explanatory model. IEEE Symposium Series on Computational Intelligence (SSCI 2013). Singapore: IEEE.
Woodward, J. (2011). Scientific explanation. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2011 Edition). Retrieved from http://plato.stanford.edu/archives/win2011/entries/scientific-explanation/
© 2016 Springer International Publishing Switzerland
Davenport, D. (2016). Explaining Everything. In: Müller, V. C. (Ed.), Fundamental Issues of Artificial Intelligence. Synthese Library, vol 376. Springer, Cham. https://doi.org/10.1007/978-3-319-26485-1_20
Print ISBN: 978-3-319-26483-7
Online ISBN: 978-3-319-26485-1