
Why (a kind of) AI can’t be done

  • Philosophy of Artificial Intelligence
  • Conference paper
Advanced Topics in Artificial Intelligence (AI 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1502)


Abstract

I provide what I believe is a definitive argument against strong classical representational AI—that branch of AI which believes that we can generate intelligence by giving computers representations that express the content of cognitive states. The argument comes in two parts. (1) There is a clear distinction between cognitive states (such as believing that the Earth is round) and the content of cognitive states (such as the belief that the Earth is round), yet strong representational AI tries to generate cognitive states by giving computers representations that express the content of cognitive states—representations, moreover, which we understand but which the computer does not. (2) The content of a cognitive state is the meaning of the sentence or other symbolism that expresses it. But if meanings were inner entities, we would be unable to understand them. Consequently, contents cannot be inner entities, so that we cannot generate cognitive states by giving computers inner representations that express the content of cognition. Moreover, since such systems are not even meant to understand the meanings of their representations, they cannot understand the content of their cognitive states. But not to understand the content of a cognitive state is not to have that cognitive state, so that, again, strong representational AI systems cannot have cognitive states and so cannot be intelligent.





Editor information

Grigoris Antoniou, John Slaney


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dartnall, T. (1998). Why (a kind of) AI can’t be done. In: Antoniou, G., Slaney, J. (eds) Advanced Topics in Artificial Intelligence. AI 1998. Lecture Notes in Computer Science, vol 1502. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0095036


  • DOI: https://doi.org/10.1007/BFb0095036


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65138-3

  • Online ISBN: 978-3-540-49561-1

  • eBook Packages: Springer Book Archive
