Abstract
Recently, many philosophers have been inclined to ascribe mentality to animals (including some insects), mainly on the grounds that they possess certain complex computational abilities. In this paper I contend that this view is misleading, since it wrongly assumes that those computational abilities demand a psychological explanation. On the contrary, they can be characterised purely at a computational level of explanation, which picks out a domain of computation and information processing that is common to many computing systems but autonomous from the domain of psychology. Thus, I propose that it is possible to conceive of insects and other animals as mere computing agents, without any commitment to ascribing mentality to them. I conclude by sketching a proposal about how to draw the line between mere computing and genuine mentality.
Notes
In addition to mental states that play a causal role in behaviour, the mind is normally understood as involving consciousness. But for the purposes of this paper, and following many philosophers persuaded by the computational theory of mind, I shall assume that consciousness is not essential to psychological explanation, and that important progress can be made on the nature of the mind without addressing the phenomenal character of mental states.
I believe it innocuous to describe symbolic structures at the computational level for two reasons. First, computational theory is inherently symbolic. Second, I adopt an informational approach to symbols according to which computational structures can in some way refer to environmental properties by virtue of carrying and using information about them, without necessarily having fully-fledged mental content. I explain these points in Sects. 4 and 5.1.
It has been proposed that associative conditioning can be accommodated within a computational framework. For example, Gallistel and Gibbon (2001) suggest that associative learning can be better explained in terms of computational operations such as extracting temporal regularities between information-bearing structures. If this is the case, associationist explanations would be part of the computational level. But insofar as they can be formulated without appeal to psychological notions, this approach is compatible with the argument put forward in this paper.
Of course, the Mars rovers receive instructions from Earth. But since those instructions take some time to reach the rover, the rovers are designed to carry out many tasks in a rather autonomous way, such as self-monitoring, navigating and making some decisions without human intervention.
A second motivation can be internalism, the (more general) view that representational notions cannot play any genuine role in a scientific psychology (Kim 1982; Stich 1983). But for the purposes of this paper I assume externalism, according to which it is plausible to formulate psychological explanations that advert to representational contents.
To satisfy the demands of explaining how concrete computing agents behave in real environments, the computational level might have to be supplemented by computational frameworks distinct from Turing models, such as interactive computation or hypercomputation (see Dodig-Crnkovic 2011 for a review). Nothing in this paper depends crucially on which computational framework we adopt.
When discussing a Fodorian approach (called the “semantic account of computation”), Piccinini (2012) arrives at a similar conclusion: the dependency of this account on a psychological notion of content makes it unfit for providing computational explanations beyond the philosophy of mind, e.g. in computer science.
Burge understands the term “representation” as strictly psychological, meaning that it can be equated with what I call “mental symbols” in this paper.
To determine which animals might actually satisfy these computational constraints exceeds the scope of this paper. However, I believe that these conditions are rather demanding and cast doubt on whether computing agents such as insects could satisfy them. For example, honeybees might fail to satisfy the generality constraint due to their massively modular computational architecture (Aguilera 2011; for a dissenting view, see Carruthers 2009).
References
Adams, F. (2003). The informational turn in philosophy. Minds and Machines, 13, 471–501.
Allen, C. (1997). Animal cognition and animal minds. In Mindscapes: Philosophy, science and the mind (pp. 227–244). Pittsburgh: University of Pittsburgh Press.
Allen, C., & Bekoff, M. (1997). Species of mind: The philosophy and biology of cognitive ethology. Cambridge, MA: MIT Press.
Aguilera, B. (2011). Do honeybees have concepts? Disputatio, 4(30), 1–19.
Bermúdez, J. L. (2005). Philosophy of psychology: A contemporary introduction. New York: Routledge.
Burge, T. (2010). Origins of objectivity. New York: Oxford University Press.
Carruthers, P. (2004). On being simple minded. American Philosophical Quarterly, 41(3), 205–220.
Carruthers, P. (2006). The architecture of the mind. New York: Oxford University Press.
Carruthers, P. (2009). Invertebrate concepts confront the generality constraint (and win). In R. Lurz (Ed.), The philosophy of animal minds (pp. 89–107). Cambridge: Cambridge University Press.
Chalmers, D. J. (1996). Does a rock implement every finite-state automaton? Synthese, 108(3), 309–333.
Chalmers, D. (2011). A computational foundation for the study of cognition. Journal of Cognitive Science, 12, 323–357.
Copeland, J. (1993). Artificial intelligence: A philosophical introduction. Oxford: Blackwell.
Crane, T. (1995). The mechanical mind: A philosophical introduction to minds, machines and mental representation. Harmondsworth: Penguin Books.
Crane, T. (2001). Elements of mind. New York: Oxford University Press.
Cummins, R. (1989). Meaning and mental representation. Cambridge, MA: MIT Press.
Dennett, D. (1969). Content and consciousness. London: Routledge.
Dennett, D. (1979). Brainstorms. London: Penguin Books.
Dennett, D. (1984). Elbow room: The varieties of free will worth wanting. Cambridge, MA: MIT Press.
Dodig-Crnkovic, G. (2011). Significance of models of computation, from Turing model to natural computation. Minds and Machines, 21(2), 301–322.
Dretske, F. (1981). Knowledge and the flow of information. Cambridge, MA: MIT Press.
Dretske, F. (1999). Machines, plants and animals: The origins of agency. Erkenntnis, 51, 19–31.
Egan, F. (1995). Computation and content. The Philosophical Review, 104(2), 181–203.
Fitzpatrick, S. (2008). Doing away with Morgan’s canon. Mind and Language, 23(2), 224–246.
Fodor, J. (1975). The language of thought. Cambridge, MA: Harvard University Press.
Fodor, J. (1980). Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences, 3, 63–109.
Fodor, J. (1991). Replies. In B. Loewer & G. Rey (Eds.), Meaning in mind: Fodor and his critics (pp. 255–319). Cambridge: Blackwell.
Fodor, J. (2000). The mind doesn’t work that way. Cambridge, MA: MIT Press.
Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1–2), 3–71.
Frankish, K., & Evans, J. S. B. T. (2009). The duality of mind: An historical perspective. In J. S. B. T. Evans & K. Frankish (Eds.), In two minds: Dual processes and beyond (pp. 1–32). Oxford: Oxford University Press.
Gallistel, C. R. (1990). The organization of learning. Cambridge, MA: Bradford Books/MIT Press.
Gallistel, C. R., & Gibbon, J. (2001). Computational versus associative models of simple conditioning. Current Directions in Psychological Science, 10(4), 146–150.
Glymour, C. (1999). Realism and the nature of theories. In M. H. Salmon, J. Earman, C. Glymour, J. G. Lennox, P. Machamer, J. McGuire, J. Norton, et al. (Eds.), Philosophy of science (pp. 104–131). Indianapolis: Hackett Publishing Company.
Gould, J. L., & Gould, C. G. (1994). The animal mind. New York, NY: Scientific American Library.
Hansen, M. B. (2003). The enteric nervous system I: Organisation and classification. Pharmacology and Toxicology, 92, 105–113.
Haugeland, J. (1981). Semantic engines: An introduction to mind design. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, and artificial intelligence (pp. 1–34). Cambridge, MA: MIT Press.
Haugeland, J. (1987). Artificial intelligence: The very idea. Cambridge, MA: MIT Press.
Haugeland, J. (2003). Syntax, semantics, physics. In J. M. Preston & M. A. Bishop (Eds.), Views into the Chinese room: New essays on Searle and artificial intelligence (pp. 379–392). New York: Oxford University Press.
Hawkins, R. D., & Kandel, E. R. (1984). Is there a cell-biological alphabet for simple forms of learning? Psychological Review, 91(3), 375–391.
Healy, S. D. (1998). Spatial representation in animals. Oxford: Oxford University Press.
Hornsby, J. (2000). Personal and sub-personal: A defence of Dennett’s early distinction. Philosophical Explorations, 1, 6–24.
Jamieson, D., & Bekoff, M. (1996). On aims and methods of cognitive ethology. In Readings in animal cognition (pp. 65–78). Cambridge, MA: MIT Press.
Kim, J. (1982). Psychophysical supervenience. Philosophical Studies, 41, 51–70.
Lurz, R. (Ed.). (2009). The philosophy of animal minds. Cambridge: Cambridge University Press.
Marr, D. (1982). Vision. New York: W. H. Freeman.
McClamrock, R. (1991). Marr’s three levels: A re-evaluation. Minds and Machines, 1, 185–196.
McDowell, J. (1994). The content of perceptual experience. The Philosophical Quarterly, 44(175), 190–205.
Millikan, R. (1984). Language, thought, and other biological categories. Cambridge, MA: MIT Press.
Newell, A., & Simon, H. A. (1981). Computer science as empirical inquiry: Symbols and search. In J. Haugeland (Ed.), Mind design: Philosophy, psychology, and artificial intelligence (pp. 35–66). Cambridge, MA: MIT Press.
Papineau, D. (1987). Reality and representation. Oxford: Blackwell.
Piccinini, G. (2012). Computation in physical systems. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved from http://plato.stanford.edu/archives/fall2012/entries/computation-physicalsystems/.
Pylyshyn, Z. W. (1984). Computation and cognition. Cambridge, MA: MIT Press.
Salmon, W. C. (1989). Four decades of scientific explanation. In P. Kitcher & W. C. Salmon (Eds.), Scientific explanation (pp. 3–219). Minneapolis: University of Minnesota Press.
Sterelny, K. (1990). The representational theory of mind: An introduction. Oxford: Basil Blackwell.
Stich, S. (1983). From folk psychology to cognitive science. Cambridge, MA: MIT Press.
Thomas, E. A., Sjövall, H., & Bornstein, J. C. (2004). Computational model of the migrating motor complex of the small intestine. American Journal of Physiology. Gastrointestinal and Liver Physiology, 286(4), G564–G572.
Touretzky, D. S., & Saksida, L. M. (1997). Operant conditioning in Skinnerbots. Adaptive Behavior, 5(3), 219–247.
Turing, A. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1), 230–265.
Von Eckardt, B. (1998). Folk psychology. In S. Guttenplan (Ed.), A companion to the philosophy of mind (pp. 300–307). Oxford: Blackwell.
Weng, J. (2004). Developmental robotics: Theory and experiments. International Journal of Humanoid Robotics, 1(2), 199–236.
Wood, J. D. (2011). Enteric nervous system (the brain-in-the-gut). Princeton, NJ: Morgan & Claypool Life Sciences.
Wooldridge, D. (1971). The machinery of the brain. New York: McGraw-Hill.
Acknowledgments
The author would like to thank Stephen Laurence, Dominic Gregory, Asa Cusack and anonymous referees for helpful comments on earlier drafts of this paper.