Drinking from the firehose of experience

https://doi.org/10.1016/j.artmed.2008.07.010

Summary

Objective

Computational concepts from robotics and computer vision hold great promise to account for major aspects of the phenomenon of consciousness, including philosophically problematical aspects such as the vividness of qualia, the first-person character of conscious experience, and the property of intentionality.

Methods

We present a dynamical systems model describing human or robotic agents and their interaction with the environment. In order to cope with the enormous information content of the sensory stream, this model includes trackers for selected coherent spatio–temporal portions of the sensory input stream, and a self-constructed plausible coherent narrative describing the recent history of the agent’s sensorimotor interaction with the world.

Results

We describe how an agent can autonomously learn its own intentionality by constructing computational models of hypothetical entities in the external world. These models explain regularities in the sensorimotor interaction, and serve as referents for the agent’s symbolic knowledge representation. The high information content of the sensory stream allows the agent to continually evaluate these hypothesized models, refuting those that make poor predictions. The high information content of the sensory input stream also accounts for the vividness and uniqueness of subjective experience. We then evaluate our account against 11 features of consciousness “that any philosophical–scientific theory should hope to explain”, according to the philosopher and prominent AI critic John Searle.

Conclusion

The essential features of consciousness can, in principle, be implemented on a robot with sufficient computational power and a sufficiently rich sensorimotor system, embodied and embedded in its environment.

Introduction

Consciousness is one of the most intriguing and mysterious aspects of the phenomenon of mind. Artificial Intelligence (AI) is a scientific field built around the creation of computational models of mind (using not only logic-based methods for knowledge representation and inference, but also such methods as probabilistic inference, dynamical systems, neural networks, and genetic algorithms). Computational approaches to understanding the phenomena of mind have been controversial, to say the least, but nowhere more than when applied to the problem of consciousness.

This paper discusses the problem of consciousness from the pragmatic perspective of a researcher in AI and robotics, where the focus is on how an intelligent agent can cope effectively with the overwhelming information content of the “pixel level” sensory input stream. Since humans are agents who cope successfully with overwhelming sensory information, we look to evidence from humans for clues about the more general problem and its possible solutions. Inspired by properties of human cognition, the paper proposes a computational model of consciousness and discusses its implications.

There have been a number of recent books on the problem of consciousness, many of them from a neurobiological perspective. The more clinically oriented books [3], [4], [5] often appeal to pathological cases, where consciousness is incomplete or distorted in various ways, to illuminate the structure of the phenomenon of human consciousness through its natural breaking points. Another approach, taken by Crick and Koch [6], [7], examines in detail the brain pathways that contribute to visual attention and visual consciousness in humans and in macaque monkeys. Minsky [8], Baars [9], and Dennett [10] propose architectures whereby consciousness emerges from the interactions among large numbers of simple modules.

Philosophical writings on consciousness are useful where they help define and clarify the different questions to be answered. A particularly important distinction is between the “Easy” and “Hard” problems of consciousness [11]. The “Easy Problem” is relatively congenial to AI/robotics researchers and sympathizers [8], [9], [10], since it asks, What does consciousness do for the agent, and how does it work? Neuroscientists [6], [7], [12] ask a more restricted version of this question, What are the neural correlates of consciousness? The “Hard Problem” is far less tractable than either of these, since it asks, Why does subjective experience feel like it does? In fact, how can it feel like anything at all? This is closely tied to the question of the nature of “qualia” or “raw feels” [10], [13]. The core issue behind the famous “Chinese Room” story [14] is the problem of Intentionality, which is, How can knowledge in the mind of an agent refer to objects in the external world?

In his recent book on the philosophy of mind [15], John Searle articulates a position he calls biological naturalism, which describes the mind, and consciousness in particular, as “entirely caused by lower level neurobiological processes in the brain.” Although Searle rejects the idea that the mind’s relation to the brain is similar to a program’s relation to a computer, he explicitly endorses the notion that the body is a biological machine, and therefore that machines (at least biological ones) can have minds, and can even be conscious. Although consciousness is, on this view, nothing beyond physical processes, Searle holds that it is not reducible to those processes, because consciousness “has a first-person ontology” while the description of physical processes occurring in the brain “has a third-person ontology.” He lays out 11 central features of consciousness “that any philosophical–scientific theory should hope to explain.”

In the following sections, I discuss the Easy Problem, the Intentionality Problem, and the Hard Problem of consciousness from the perspective of the information-processing problems that a robot must solve. I conclude after discussing how this approach responds to Searle’s 11 features of consciousness.

The key ideas here are the following:

  1. The sensory data stream presents information to the agent at an extremely high rate (gigabits/s).

  2. This information is managed and compressed by selecting, tracking, and describing spatio–temporal portions of the sensory input stream.

  3. A plausible coherent narrative is constructed to describe the recent history (500 ms or so) of the agent’s sensorimotor interaction with the world. (A small illustrative sketch of ideas 1–3 appears after this list.)

  4. An agent can autonomously learn its own intentionality by constructing computational models of hypothetical entities in the external world. These models explain regularities in the sensorimotor interaction, and serve as referents for the agent’s symbolic knowledge representation.

  5. The high information content of the sensory stream allows the agent to continually evaluate these hypothesized models, refuting those that make poor predictions.

  6. The high information content of the sensory input stream accounts for the vividness and uniqueness of subjective experience.
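
To make ideas 1–3 concrete, here is a minimal, self-contained Python sketch. Its specifics (the camera resolution and frame rate, the Tracker and NarrativeWindow names and interfaces) are illustrative assumptions of this sketch rather than an implementation described in the paper; only the roughly 500 ms narrative horizon comes from the list above, and a working tracker would do real image processing instead of returning a placeholder.

    from collections import deque
    from dataclasses import dataclass

    # Back-of-envelope data rate for a "pixel level" visual stream (illustrative
    # numbers, not from the paper): 1920 x 1080 pixels, 24 bits/pixel, 30 frames/s.
    RAW_BITS_PER_SECOND = 1920 * 1080 * 24 * 30
    print(f"raw visual stream: {RAW_BITS_PER_SECOND / 1e9:.2f} Gbit/s")  # ~1.49 Gbit/s

    @dataclass
    class Percept:
        """Compact symbolic description produced by one tracker for one frame."""
        tracker_id: str
        t: float          # timestamp in seconds
        region: tuple     # (x, y, w, h): image region currently tracked
        label: str        # hypothesized entity the region is attributed to

    class Tracker:
        """Follows one coherent spatio-temporal portion of the sensory stream.

        A real tracker would re-locate its target in each frame by correlation or
        model-based matching on pixels; this stub only shows the interface:
        a huge frame goes in, a few bytes of description come out.
        """
        def __init__(self, tracker_id, label, region):
            self.tracker_id, self.label, self.region = tracker_id, label, region

        def update(self, frame, t):
            # Placeholder: report the previous region instead of re-locating it.
            return Percept(self.tracker_id, t, self.region, self.label)

    class NarrativeWindow:
        """Plausible coherent narrative of the recent past (roughly 500 ms)."""
        def __init__(self, horizon_s=0.5):
            self.horizon_s = horizon_s
            self.events = deque()

        def add(self, percept):
            self.events.append(percept)
            # Forget percepts that have fallen out of the ~500 ms window.
            while self.events and percept.t - self.events[0].t > self.horizon_s:
                self.events.popleft()

    # A tiny loop standing in for the perception cycle.
    trackers = [Tracker("trk0", "cup", (40, 60, 32, 32))]
    narrative = NarrativeWindow()
    for i in range(30):
        frame, t = None, i / 30.0   # `frame` stands in for a real image
        for trk in trackers:
            narrative.add(trk.update(frame, t))
    print(f"narrative currently holds {len(narrative.events)} recent percepts")

The point of the sketch is the interface, not the stubbed internals: each tracker turns a gigabit-scale frame into a few bytes of symbolic description, and the narrative window retains only the recent, already-compressed history.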

Section snippets

The Easy Problem

The “Easy Problem” is: What does consciousness do for us, and how does it work? Only a philosopher could call this problem “Easy”, since solving it will likely require decades at least, and dozens or hundreds of doctoral dissertations. What the name means is that scientists applying the methods of various disciplines have been able to formulate useful technical statements of the problem, and they have tools that apply to those problem statements. Progress may be difficult, but we know what it…

The Intentionality Problem

The “Intentionality Problem” is: How can symbols in an internal cognitive knowledge representation refer to objects and events in the external world? Or equivalently, Where does meaning come from? The core of Searle’s “Chinese room” argument [14] is that the mind necessarily has intentionality (the ability to refer to objects in the world), while computation (the manipulation of formal symbols according to syntactic rules) necessarily lacks intentionality. Therefore (claims Searle), the mind…
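
The paper’s proposed response (key ideas 4 and 5 above) is that a symbol refers to a hypothesized entity in the external world whose computational model keeps predicting the sensory stream well; models that predict poorly are refuted and discarded. The Python sketch below is a toy, hypothetical illustration of that hypothesize-predict-refute loop: the EntityModel class, the observation stream, and the tolerance threshold are all invented here for illustration, not taken from the paper.

    import random

    class EntityModel:
        """A hypothesized entity in the external world, used as the referent of a
        symbol; it earns its keep by predicting future observations."""
        def __init__(self, name, predict):
            self.name = name
            self.predict = predict    # function: current observation -> predicted next one
            self.mean_error = None

    def evaluate(models, observations, tolerance=0.5):
        """Score each model by prediction error; refute those that predict poorly."""
        for m in models:
            errors = [abs(m.predict(observations[i]) - observations[i + 1])
                      for i in range(len(observations) - 1)]
            m.mean_error = sum(errors) / len(errors)
        return [m for m in models if m.mean_error <= tolerance]

    # A 1-D observation stream of something drifting at roughly 2 units per step
    # (invented data standing in for one tracked region of the sensory stream).
    random.seed(0)
    observations = [2.0 * t + random.uniform(-0.1, 0.1) for t in range(20)]

    models = [
        EntityModel("static-object", lambda x: x),        # predicts no change
        EntityModel("moving-object", lambda x: x + 2.0),  # predicts a constant drift
    ]
    survivors = evaluate(models, observations)
    for m in models:
        print(f"{m.name}: mean prediction error {m.mean_error:.2f}")
    print("surviving referents:", [m.name for m in survivors])

On this picture, the surviving model, rather than the raw pixels, supplies the symbol’s referent: the hypothesized external entity whose predictions continue to hold up against the high-bandwidth sensory stream.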

The Hard Problem

We are not zombies (or at least I am not). Why not? It is undeniable that many experiences “feel like” something. Pain hurts, sugar tastes sweet, the sight of a loved one raises feelings that are strong and real, even though they cannot be fully articulated. In the words of Francisco Varela, “…consciousness feels so personal, so intimate, so central to who we are, …” ([42], p. 226).

The “Hard Problem” of consciousness is: “Why does consciousness feel like anything at all?” Suppose we accept that…

Evaluating a theory of consciousness

It is not yet possible to build a robot with sufficiently rich sensorimotor interaction with the physical environment, and a sufficiently rich capability for tracking and reasoning about its sensor and motor streams, to be comparable with human consciousness. The remaining barriers, however, appear to be technical rather than philosophical.

We begin evaluating this theory of consciousness by discussing how well such a computational model might account for 11 central features of consciousness…

Conclusions

We approach the problem of consciousness from the pragmatic design perspective of AI and robotics. One of the major requirements on an embodied agent is the ability to cope with the overwhelming information content of its own sensory input (the “firehose of experience”). One cognitive architecture that meets this requirement includes trackers that ground dynamic symbolic descriptions in spatio–temporal regions of the sensory stream, and a plausible coherent narrative that explains the objects…

Acknowledgements

This work has taken place in the Intelligent Robotics Lab at the Artificial Intelligence Laboratory, The University of Texas at Austin. Research of the Intelligent Robotics lab is supported in part by grants from the Texas Advanced Research Program (3658-0170-2007), from the National Science Foundation (IIS-0413257, IIS-0713150, and IIS-0750011), and from the National Institutes of Health (EY016089). Portions of this paper have been presented previously in [1], [2].

References (51)

  • O. Sacks. The man who mistook his wife for a hat and other clinical tales (1985)
  • A. Damasio. The feeling of what happens (1999)
  • V.S. Ramachandran. A brief tour of human consciousness (2004)
  • F. Crick et al. A framework for consciousness. Nat Neurosci (2003)
  • C. Koch. The quest for consciousness: a neurobiological approach (2003)
  • M. Minsky. The society of mind (1985)
  • B.J. Baars. A cognitive theory of consciousness (1988)
  • D. Dennett. Consciousness explained (1991)
  • D.J. Chalmers. The conscious mind: in search of a fundamental theory (1996)
  • G. Tononi. An information integration theory of consciousness. BMC Neurosci (2004)
  • N. Humphrey. Seeing red: a study in consciousness (2006)
  • J. Searle. Minds, brains, and programs. Behav Brain Sci (1980)
  • J.R. Searle. Mind: a brief introduction (2004)
  • S. Harnad. Minds, machines and Searle. J Exp Theor Artif Intell (1989)
  • W.V.O. Quine. Two dogmas of empiricism
