Abstract
I provide what I believe is a definitive argument against strong classical representational AI, the branch of AI which holds that we can generate intelligence by giving computers representations that express the content of cognitive states. The argument comes in two parts. (1) There is a clear distinction between cognitive states (such as believing that the Earth is round) and the content of cognitive states (such as that the Earth is round), yet strong representational AI tries to generate cognitive states by giving computers representations that express the content of cognitive states; these are representations, moreover, which we understand but which the computer does not. (2) The content of a cognitive state is the meaning of the sentence or other symbolism that expresses it, but if meanings were inner entities we would be unable to understand them. Consequently contents cannot be inner entities, and so we cannot generate cognitive states by giving computers inner representations that express the content of cognition. Moreover, since such systems are not even meant to understand the meanings of their representations, they cannot understand the content of their cognitive states. But not to understand the content of a cognitive state is not to have that cognitive state, so that, again, strong representational AI systems cannot have cognitive states and so cannot be intelligent.
© 1998 Springer-Verlag Berlin Heidelberg
Cite this paper
Dartnall, T. (1998). Why (a kind of) AI can’t be done. In: Antoniou, G., Slaney, J. (eds) Advanced Topics in Artificial Intelligence. AI 1998. Lecture Notes in Computer Science, vol 1502. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0095036
Print ISBN: 978-3-540-65138-3
Online ISBN: 978-3-540-49561-1