Abstract
Integrating the processing of sensory information and natural language is not a homogeneous enterprise, and there are different proposals from both connectionism and symbolic AI on how to proceed. This paper considers one problematic part of the enterprise, what we call the internalist trap for the systems of symbolic AI and connectionism. Two kinds of computational mechanism are discussed, the Syntactic engine of symbolic AI and the more novel Spatial engine of connectionism, and the different solutions to the internalist trap that each machine requires are explored. What emerges from our discussion is the relative paucity of the representational resources available to the Syntactic engine in comparison to those available to the Spatial engine. This inequality is important, because it is precisely these resources that, we argue, are crucial in hooking atomic representations to the world. That is, whilst it is possible for both kinds of computational engine to be hooked up to the world, only the Spatial engine possesses the requisite resources in and of itself.
Jackson, S.A., Sharkey, N.E. Grounding computational engines. Artif Intell Rev 10, 65–82 (1996). https://doi.org/10.1007/BF00159216