Neural Networks

Volume 22, Issue 3, April 2009, Pages 305-315

2009 Special Issue
Exploiting co-adaptation for the design of symbiotic neuroprosthetic assistants

https://doi.org/10.1016/j.neunet.2009.03.015

Abstract

The success of brain–machine interfaces (BMI) is enabled by the remarkable ability of the brain to incorporate the artificial neuroprosthetic ‘tool’ into its own cognitive space and use it as an extension of the user’s body. Unlike other tools, neuroprosthetics create a shared space that seamlessly spans the user’s internal goal representation of the world and the external physical environment, enabling a much deeper human–tool symbiosis. A key factor in the transformation of ‘simple tools’ into ‘intelligent tools’ is the concept of co-adaptation, where the tool becomes functionally involved in the extraction and definition of the user’s goals. Recent advancements in the neuroscience and engineering of neuroprosthetics are providing a blueprint for how new co-adaptive designs based on reinforcement learning change the nature of a user’s ability to accomplish tasks that were not possible with conventional methodologies. By designing adaptive controls and artificial intelligence into the neural interface, tools can become active assistants in goal-directed behavior and further enhance human performance, in particular for the disabled population. This paper presents recent advances in computational and neural systems supporting the development of symbiotic neuroprosthetic assistants.

Introduction

The evolution of mankind is intrinsically coupled with the invention and use of new tools that expand the richness of our interactions with other individuals and with the environment. The concept of a “body schema”, as classically used in psychology, neurology, and the cognitive sciences, involves the development of specific internal mental structures that represent some aspect of the external world (Maravita & Iriki, 2004). To develop rich and meaningful interactions with the world, our brains are dynamically engaged in cyclical motor and sensory scenarios that report the outcomes of behavior (Grossberg, 1982). The “body schema” is the principal enabler of tool use in many everyday life activities (Johnson-Frey, 2003). Tool use is unique in our development as a species because it has allowed us to extend the natural reaching space and induce further plastic changes in neural system representation (Holmes et al., 2004, Johnson-Frey, 2004, Maravita and Iriki, 2004). As a consequence, stimuli that had been out of reach of the body’s extremities became accessible and were assimilated, increasing the integration of new environmental diversity into our internal representation. Driving a car is the modern archetypal example, where through training the human adapts to speeds, anticipation, and sizes that are far beyond natural body experiences. However, tools have primarily served as passive instruments that enhance the brain–body system but do not share in the definition of goal-directed behavior as users express their intent. Indeed, the relationship between user and tool is inherently lopsided: users are intelligent and can exploit dynamic brain organization and specialization, while tools are passive devices that merely enact commands.
We submit that the nature of the interface with tools is one of the main limitations impeding the evolution of a more seamless binding between users and tools (and, effectively, richer environments); it is perhaps also one of the reasons for the chasm between artificial and natural intelligence, because we do not properly share our expectations with artificial systems. Ideally, interfaces with machines should be as active and bidirectional as interactions with other human beings or animals, where the connection between user and tool lets each experience the unique abilities of its counterpart. Instead, an egocentric man–machine interface design methodology is the norm today: interactions are indirectly controlled, communicate in a unidirectional manner, lack the ability to operate on the cognitive level of the user, and are not adaptable.

Cognitive and computational neuroscience has utilized the universal computing power of Turing/von Neumann machines to implement models of cognition. Perhaps fuelled by Newell’s work (Newell, 1990), cognitive architectures have implemented first principles that their designers believe are relevant for intelligent behavior. Two of the better-known models in the engineering community, ACT-R (Anderson, 1993) and Soar (Laird et al., 1987), evolved from pure tools for modeling and simulating cognitive processes into general architectures that also support direct interaction with the external world through sensory inputs (Bugajska et al., 2002, Jones et al., 1999). When interaction with an unknown, stochastic world takes center stage, different principles are required. Using ideas from Markov Decision Processes (MDPs), Weng proposed Incremental Hierarchical Discriminant Regression (IHDR), a family of models of varying complexity with a self-aware, self-effecting architecture at its core (Weng & Hwang, 2006). Another approach, closer to biological reality, has used neural dynamics and neural network principles to model brain subsystems, as exemplified by the K set hierarchy (Freeman, 1975, Kozma and Freeman, 2009), action networks for the frontal cortical loop (Taylor & Taylor, 1999), working memory (Taylor & Taylor, 2000), the thalamocortical loop (Hecht-Nielsen, 2007), visuomotor transformations (Jeannerod et al., 1995), language (Arbib, 2005), the dynamics of perception (Carpenter & Grossberg, 2003), and up to consciousness (Edelman, 1990).

An area of engineering that has benefited from all this work is robotics, because of the importance of autonomous behavior. In the last 10 years, several subfields in robotics have emerged, from behavior-based robotics (Brooks, 1999) and evolutionary robotics (Nolfi & Floreano, 2000) to intentional robotics (Kozma & Fukuda, 2006), developmental robotics (Schmidhuber, 2006), and brain-based systems (Krichmar & Edelman, 2005), to name a few. These systems are increasingly being designed using neurobiological knowledge. However, we submit that an important factor missing in robotics is the ability to interact with the human brain. The appeal is that these advanced robots already have the computational power, the sensors, and sophisticated architectures for processing and reasoning; what is missing is a paradigm for co-adaptation with humans. This will be immensely important for neural rehabilitation and will open a new window for symbiotic human–machine research. We believe that it is possible to establish a direct communication channel between the user’s brain and the machine with the goal of sharing the perception–action cycle of the user. This paper presents a new framework and experimental results that illustrate symbiosis between biological and artificial systems. In Section 2, we briefly present the state of the art in brain–machine interfaces. Section 3 discusses the architectural prerequisites for co-adaptation, and Section 4 develops how to deliver such requirements. Section 5 presents our experimental work on co-adaptive brain–machine interfaces, and Section 6 concludes the paper.

Section snippets

Review of brain–machine interface research

Brain–machine interfaces are creating new pathways to interact with the brain, and they can be roughly divided into four categories: the sensory BMIs, which substitute sensory inputs (visual (Chelvanayagam et al., 2008, Zrenner, 2002) or auditory (Miller et al., 1995, Nie et al., 2006, Rouger et al., 2007)) and are the most common (120,000 people have been implanted worldwide with cochlear implants); the motor BMIs that substitute parts of the body to convey intent of motion to prosthetic

Minimal prerequisites for intelligent neuroprosthesis

The design of a new framework to transform BMIs begins with the view that intelligent tools emerge from the process where the user and tool cooperatively seek to maximize goals while interacting with a complex, dynamical environment. Emergence as discussed here and in the cognitive sciences depends on a series of events or elemental procedures that promote specific brain or behavioral syntax, feedback, and repetition over time (Calvin, 1990); hence, the sequential evaluative process is always

Deployment in a co-adaptive BMI architecture

Architecture: With the prerequisites of intelligent BMIs defined, the computational and engineering design challenge becomes one of architectural choices and of integrating both the user’s and the neuroprosthetic tool’s contributions into a cooperative structure. The symbiosis will clearly be easier to define and implement if both the user and neuroprosthetic share similar learning architectures. From a review of the literature, reinforcement learning (RL) became the natural choice since
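The reinforcement-learning framing above can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the authors' implementation: the discrete states play the role of decoded neural states, the two actions play the role of prosthetic commands, and a tabular Q-learning agent improves its value estimates from a scalar reward delivered at a goal state.

```python
import numpy as np

# Minimal tabular Q-learning sketch on a toy chain task (illustrative only).
N_STATES = 5            # states 0..4; state 4 is the rewarded goal
ACTIONS = (-1, +1)      # move left / move right along the chain
ALPHA, GAMMA, EPSILON = 0.2, 0.9, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))

def step(state, a_idx):
    """Environment transition: clip to the chain; reward 1 at the goal."""
    nxt = int(np.clip(state + ACTIONS[a_idx], 0, N_STATES - 1))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    s = int(rng.integers(N_STATES - 1))   # random non-goal start state
    for _ in range(100):                  # cap episode length
        # Epsilon-greedy action selection over current value estimates.
        if rng.random() < EPSILON:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning temporal-difference update.
        target = r + (0.0 if done else GAMMA * np.max(Q[s_next]))
        Q[s, a] += ALPHA * (target - Q[s, a])
        s = s_next
        if done:
            break

policy = [ACTIONS[int(np.argmax(Q[s]))] for s in range(N_STATES - 1)]
print(policy)  # greedy action per non-goal state (+1 = toward the goal)
```

The same update rule applies unchanged whether the "environment" is a toy chain or a prosthetic acting on behalf of a user, which is what makes RL a plausible shared architecture for both learners.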

Co-Adaptive BMI (CABMI) experiment

We have developed a CABMI experimental paradigm to demonstrate interactive learning, where synergy among adaptive, intelligent entities facilitates learning. This paradigm provided a platform to study machine and biological learning, as well as the mutual learning that emerges from their interaction. We present a BMI that requires coordination between artificial and biological intelligence to solve a motor task for reaching and grasping. The experiment consisted of a two-target choice task
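The flavor of such mutual learning can be sketched in a toy simulation. All of the following is hypothetical and is not the actual CABMI setup: a simulated "user" learns which of two modulation patterns to produce for each cued target, an "agent" (the prosthetic controller) learns which of two reach actions to take for each observed pattern, both share the task reward, and they adapt in alternating phases, a simple way to keep two coupled learners stable.

```python
import numpy as np

# Toy two-target co-adaptation sketch (hypothetical; not the authors' CABMI
# paradigm). A simulated "user" maps cued targets to modulation patterns,
# and an "agent" maps observed patterns to reach actions. Both receive
# reward 1 only when the reach matches the cued target.
ALPHA, EPSILON = 0.2, 0.1
rng = np.random.default_rng(1)

Q_user = np.zeros((2, 2))    # Q_user[target, pattern]
Q_agent = np.zeros((2, 2))   # Q_agent[pattern, action]

def egreedy(q_row):
    """Epsilon-greedy choice over a row of value estimates."""
    if rng.random() < EPSILON:
        return int(rng.integers(2))
    return int(np.argmax(q_row))

success = []
for trial in range(4000):
    target = int(rng.integers(2))
    if trial < 2000:
        # Phase A: user follows a fixed, noisy encoding; agent adapts.
        pattern = target if rng.random() < 0.9 else 1 - target
        action = egreedy(Q_agent[pattern])
        reward = 1.0 if action == target else 0.0
        Q_agent[pattern, action] += ALPHA * (reward - Q_agent[pattern, action])
    else:
        # Phase B: agent is frozen (greedy decode); user adapts its encoding.
        pattern = egreedy(Q_user[target])
        action = int(np.argmax(Q_agent[pattern]))
        reward = 1.0 if action == target else 0.0
        Q_user[target, pattern] += ALPHA * (reward - Q_user[target, pattern])
    success.append(reward)

print(f"success rate over the last 500 trials: {np.mean(success[-500:]):.2f}")
```

Even this stripped-down version shows the key property of co-adaptation: task performance at the end reflects a convention negotiated between two learners, not the skill of either one alone.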

Conclusion

We have introduced here a transformative framework for goal-directed behavior that enables the co-adaptation between two learning systems; an artificial agent and a user’s brain. This framework is based on well-established concepts that include the perception–action cycle and value-based decision making. However, unlike traditional computational modeling or neurobiological study of these systems, we have presented a method that enables a direct, real-time dialogue between the biological and

References (95)

  • J.E. Laird et al.

    Soar: An architecture for general intelligence

    Artificial Intelligence

    (1987)
  • A.M. Lozano et al.

    Deep brain stimulation surgery for Parkinson’s disease: Mechanisms and consequences

Parkinsonism and Related Disorders

    (2004)
  • A. Maravita et al.

    Tools for the body (schema)

    Trends in Cognitive Sciences

    (2004)
  • D.J. McFarland et al.

    Brain-computer interface signal processing at the Wadsworth Center: Mu and sensorimotor beta rhythms

    Event-Related Dynamics of Brain Oscillations

    (2006)
  • C. Mehring et al.

    Comparing information about arm movement direction in single channels of local and epicortical field potentials from monkey and human motor cortex

    Journal of Physiology (Paris)

    (2004)
  • C.A. Miller et al.

Functional responses from guinea pigs with cochlear implants. I. Electrophysiological and psychophysical measures

    Hearing Research

    (1995)
  • J.R. Anderson

    Rules of the mind

    (1993)
  • M.A. Arbib

    From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics

    Behavioral and Brain Sciences

    (2005)
  • T.W. Berger et al.

    Brain-implantable biomimetic electronics as the next era in neural prosthetics

    Proceedings of the IEEE

    (2001)
  • N. Birbaumer et al.

    The thought translation device (TTD) for completely paralyzed patients

    IEEE Transactions on Rehabilitation Engineering

    (2000)
  • G.H. Bower

    Theories of learning

    (1981)
  • N. Brannon et al.

    Coordinated machine learning and decision support for situation awareness

    Neural Networks

    (2009)
  • R. Brooks

    Cambrian intelligence: The early history of the new AI

    (1999)
  • E.N. Brown et al.

    Multiple neural spike train data analysis: State-of-the-art and future challenges

    Nature Neuroscience

    (2004)
  • Bugajska, M. D., & Schultz, A. C. et al. (2002). A hybrid cognitive-reactive multiagent controller. In IEEE/RSJ...
  • G. Buzsáki

    Rhythms of the brain

    (2006)
  • W.H. Calvin

    The emergence of intelligence

    Scientific American

    (1990)
  • J.M. Carmena et al.

    Learning to control a brain–machine interface for reaching and grasping by primates

    PLoS Biology

    (2003)
  • G.A. Carpenter et al.
  • J.K. Chapin et al.

    Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex

    Nature Neuroscience

    (1999)
  • D.K. Chelvanayagam et al.

    Multichannel surface recordings on the visual cortex: Implications for a neuroprosthesis

    Journal of Neural Engineering

    (2008)
  • Deisenroth, M. P., & Peters, J. et al. (2008). Approximate dynamic programming with Gaussian processes. In American...
  • J. del R. Millan

    Adaptive brain interfaces

    Communications of the ACM

    (2003)
  • J. DiGiovanna et al.

    Co-adaptive brain machine interface via reinforcement learning [Special issue on Hybrid Bionics]

    IEEE Transactions on Biomedical Engineering

    (2009)
  • DiGiovanna, J., & Mahmoudi, B. et al. (2007). Brain–machine interface control via reinforcement learning. In 3rd...
  • J.P. Donoghue

    Connecting cortex to machines: Recent advances in brain interfaces

    Nature Neuroscience

    (2002)
  • J.P. Donoghue et al.

    The motor cortex of the rat: Cytoarchitecture and microstimulation mapping

Journal of Comparative Neurology

    (1982)
  • K. Doya et al.

    Multiple model-based reinforcement learning

    Neural Computation

    (2002)
  • G. Edelman

    The remembered present: A biological theory of consciousness

    (1990)
  • G.M. Edelman et al.

    The mindful brain: Cortical organization and the group-selective theory of higher brain function

    (1978)
  • W.J. Freeman

    Mass action in the nervous system: Examination of the neurophysiological basis of adaptive behavior through EEG

    (1975)
  • Y. Gao et al.

    A quantitative comparison of linear and non-linear models of motor cortical activity for the encoding and decoding of arm motions

  • A. Georgopoulos et al.

    On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex

    Journal of Neuroscience

    (1982)
  • A.P. Georgopoulos et al.

    Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population

    The Journal of Neuroscience: The Official Journal of the Society for Neuroscience

    (1988)
  • Grossberg, S. (1982). Studies of mind and brain: neural principles of learning, perception, development, cognition, and...
  • N.G. Hatsopoulos et al.

    Information about movement direction obtained from synchronous activity of motor cortical neurons

    Proceedings of the National Academy of Sciences of the United States of America

    (1998)
  • S. Haykin

    Neural networks: A comprehensive foundation

    (1994)