2009 Special Issue
Exploiting co-adaptation for the design of symbiotic neuroprosthetic assistants
Introduction
The evolution of mankind is intrinsically coupled with the invention and use of new tools that expand the richness of our interaction with other individuals and with the environment. However, tools have primarily served as passive instruments that enhance the brain–body system but do not help shape goal-directed behavior as users express their intent. The concept of a “body schema”, as classically used in psychology, neurology, and the cognitive sciences, involves the development of specific internal mental structures that represent some aspect of the external world (Maravita & Iriki, 2004). To develop rich and meaningful interactions with the world, our brains engage in cyclical motor and sensory scenarios that report the outcomes of behavior (Grossberg, 1982). The “body schema” is the principal enabler of tool use in many everyday activities (Johnson-Frey, 2003). Tool use is unique in our development as a species because it has allowed us to extend the natural reaching space and induce further plastic changes in neural representations (Holmes et al., 2004; Johnson-Frey, 2004; Maravita & Iriki, 2004). As a consequence, stimuli that had been out of reach of the body’s extremities became accessible and were assimilated, increasing the integration of new environmental diversity into our internal representation. Driving a car is the modern archetypal example, where through training the human adapts to speeds, anticipation, and dimensions far beyond natural bodily experience. Yet the relationship between user and tool is inherently lopsided: users are intelligent and can exploit dynamic brain organization and specialization, while tools are passive devices that merely enact commands.
We submit that the nature of the interface with tools is one of the main limitations impeding the evolution of a more seamless binding between users and tools (and, effectively, richer environments); it is perhaps also one of the reasons for the chasm between artificial and natural intelligence, because we do not properly share our expectations with artificial systems. Ideally, interfaces with machines should be as active and bidirectional as interactions with other human beings or animals, where the connection between user and tool lets each experience the unique abilities of the other. Instead, the egocentric man–machine interface design methodology is the norm today: interactions are controlled indirectly, communicate unidirectionally, cannot operate at the cognitive level of the user, and are not adaptable.
Cognitive and computational neuroscience has utilized the universal computing power of Turing/Von Neumann machines to implement models of cognition. Perhaps fuelled by Newell’s work (Newell, 1990), cognitive architectures have implemented first principles that their designers believe are relevant for intelligent behavior. Two of the better known models in the engineering community are ACT-R (Anderson, 1993) and SOAR (Laird et al., 1987); they evolved from pure tools for modeling and simulating cognitive processes into general architectures that also support direct interaction with the external world through sensory inputs (Bugajska et al., 2002; Jones et al., 1999). When the interaction with an unknown stochastic world takes center stage, different principles are required. Using ideas from Markov Decision Processes (MDPs), Weng proposed Incremental Hierarchical Discriminant Regression (IHDR), a family of models of varying complexity built around a self-aware, self-effecting architecture (Weng & Hwang, 2006). Another approach, closer to biological reality, has used neural dynamics and neural network principles to model brain subsystems, as exemplified by the K set hierarchy (Freeman, 1975; Kozma & Freeman, 2009), action networks for the frontal cortical loop (Taylor & Taylor, 1999), working memory (Taylor & Taylor, 2000), the thalamic cortical loop (Hecht-Nielsen, 2007), visuomotor transformations (Jeannerod et al., 1995), language (Arbib, 2005), and the dynamics of perception (Carpenter & Grossberg, 2003), up to consciousness (Edelman, 1990).
An area of engineering that has benefited from all this work is robotics research, because of the importance of autonomous behavior. In the last 10 years, several subfields of robotics have emerged: behavior-based robotics (Brooks, 1999), evolutionary robotics (Nolfi & Floreano, 2000), intentional robotics (Kozma & Fukuda, 2006), developmental robotics (Schmidhuber, 2006), and brain-based systems (Krichmar & Edelman, 2005), to name a few. These systems are increasingly designed using knowledge from neurobiology. However, we submit that an important capability still missing in robotics is the ability to interact with the human brain. The appeal is that these advanced robots already have the computational power, the sensors, and sophisticated architectures for processing and reasoning; what is missing is a paradigm for co-adaptation with humans. This will be immensely important for neural rehabilitation and will open a new window for symbiotic human–machine research. We believe that it is possible to establish a direct communication channel between the user’s brain and the machine with the goal of sharing the perception–action cycle of the user. This paper presents a new framework and experimental results that illustrate symbiosis between biological and artificial systems. In Section 2, we briefly present the state of the art in brain–machine interfaces. Section 3 discusses the architectural prerequisites for co-adaptation, and Section 4 develops how to deliver such requirements. Section 5 presents our experimental work on co-adaptive brain–machine interfaces, and Section 6 concludes the paper.
Section snippets
Review of brain–machine interface research
Brain–machine interfaces are creating new pathways to interact with the brain. They can be roughly divided into four categories: the sensory BMIs, which substitute sensory inputs such as vision (Chelvanayagam et al., 2008; Zrenner, 2002) or audition (Miller et al., 1995; Nie et al., 2006; Rouger et al., 2007) and are the most common (about 120,000 people worldwide have been implanted with cochlear implants); the motor BMIs that substitute parts of the body to convey intent of motion to prosthetic
Minimal prerequisites for intelligent neuroprosthesis
The design of a new framework to transform BMIs begins with the view that intelligent tools emerge from the process where the user and tool cooperatively seek to maximize goals while interacting with a complex, dynamical environment. Emergence as discussed here and in the cognitive sciences depends on a series of events or elemental procedures that promote specific brain or behavioral syntax, feedback, and repetition over time (Calvin, 1990); hence, the sequential evaluative process is always
Deployment in a co-adaptive BMI architecture
Architecture: With the prerequisites of intelligent BMIs defined, the computational and engineering design challenge becomes one of architectural choices and of integrating both the user’s and the neuroprosthetic tool’s contributions into a cooperative structure. It is obvious that the symbiosis will be easier to define and implement if the user and the neuroprosthetic share similar learning architectures. From a review of the literature, reinforcement learning (RL) became the natural choice since
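To make the shared reinforcement-learning architecture concrete, the following is a minimal sketch of a value-based decoder in the spirit described above: a tabular Q-learning agent that maps a discretized neural state to a prosthetic action and learns only from task reward. The state and action coding, reward values, and learning rates here are illustrative assumptions, not the paper’s actual implementation.

```python
import numpy as np

class RLDecoder:
    """Toy tabular Q-learning decoder (illustrative, not the paper's system)."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
        self.q = np.zeros((n_states, n_actions))   # state-action value table
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = np.random.default_rng(seed)

    def act(self, state):
        # Epsilon-greedy action selection over prosthetic actions.
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[state]))

    def update(self, s, a, r, s_next=None):
        # Temporal-difference (Q-learning) update; s_next=None marks a
        # terminal single-step trial.
        td_target = r if s_next is None else r + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.alpha * (td_target - self.q[s, a])

# Toy task: each discretized neural state has one rewarded action (state % 2).
decoder = RLDecoder(n_states=2, n_actions=2)
for _ in range(500):
    s = int(decoder.rng.integers(2))
    a = decoder.act(s)
    r = 1.0 if a == s % 2 else -1.0
    decoder.update(s, a, r)   # one-step trial, so no successor state
```

The essential point for co-adaptation is that the agent never sees a desired trajectory, only a scalar evaluation of the outcome, which is the same kind of feedback the user’s own value-based learning receives.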
Co-Adaptive BMI (CABMI) experiment
We have developed a CABMI experimental paradigm to demonstrate interactive learning in which synergy among adaptive, intelligent entities facilitates learning. This paradigm provided a platform to study machine and biological learning, as well as the mutual learning that emerges from their interaction. We present a BMI that requires coordination between artificial and biological intelligence to solve a motor task of reaching and grasping. The experiment consisted of a two-target choice task
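The two-target co-adaptive loop sketched above can be illustrated with a small simulation, under loudly stated assumptions: a simulated “user” whose one-dimensional neural feature becomes more separable with rewarded practice (a crude stand-in for biological plasticity), and a decoder that refines its decision boundary from rewarded trials. Both rules are hypothetical simplifications for illustration, not the paper’s experimental model.

```python
import numpy as np

rng = np.random.default_rng(1)

separation = 0.1          # separability of the user's neural feature (grows with reward)
m = [0.2, 0.8]            # decoder's running feature estimates for the two choices
accuracy = []

for trial in range(400):
    target = int(rng.integers(2))                     # left (0) or right (1) target
    # User side: encode the target in a noisy scalar neural feature.
    feature = 0.3 + target * separation + rng.normal(0.0, 0.15)
    # Machine side: threshold decoder chooses the reach/grasp action.
    threshold = 0.5 * (m[0] + m[1])
    action = int(feature > threshold)
    reward = float(action == target)
    accuracy.append(reward)
    # Co-adaptation: reward consolidates the user's encoding ...
    separation = min(1.0, separation + 0.005 * reward)
    # ... while rewarded trials refine the decoder's class estimates.
    if reward:
        m[action] += 0.1 * (feature - m[action])
```

Because both learners adapt from the same scalar reward, task accuracy in the late trials exceeds that in the early trials even though neither side was ever given the other’s internal representation, which is the qualitative behavior the CABMI paradigm is designed to study.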
Conclusion
We have introduced here a transformative framework for goal-directed behavior that enables co-adaptation between two learning systems: an artificial agent and a user’s brain. This framework is based on well-established concepts that include the perception–action cycle and value-based decision making. However, unlike traditional computational modeling or neurobiological study of these systems, we have presented a method that enables a direct, real-time dialogue between the biological and
References (95)
- Selecting the signals for a brain–machine interface. Current Opinion in Neurobiology (2004)
- The misbehavior of value and the discipline of the will. Neural Networks (2006)
- Upper processing stages of the perception–action cycle. Trends in Cognitive Sciences (2004)
- Extending or projecting peripersonal space with tools? Multisensory interactions highlight only the distal and proximal ends of tools. Neuroscience Letters (2004)
- Grasping objects: The cortical mechanisms of visuomotor transformation. Trends in Neurosciences (1995)
- What’s so special about human tool use? Neuron (2003)
- The neural bases of complex tool use in humans. Trends in Cognitive Sciences (2004)
- Efficient reinforcement learning: Computational theories, neuroscience and robotics. Current Opinion in Neurobiology (2007)
- Evolving neural networks for strategic decision-making problems. Neural Networks (2009)
- The KIV model of intentional dynamics and decision making. Neural Networks (2009)
- Soar: An architecture for general intelligence. Artificial Intelligence
- Deep brain stimulation surgery for Parkinson’s disease: Mechanisms and consequences. Parkinsonism and Related Disorders
- Tools for the body (schema). Trends in Cognitive Sciences
- Brain-computer interface signal processing at the Wadsworth Center: Mu and sensorimotor beta rhythms. Event-Related Dynamics of Brain Oscillations
- Comparing information about arm movement direction in single channels of local and epicortical field potentials from monkey and human motor cortex. Journal of Physiology (Paris)
- Functional responses from guinea pigs with cochlear implants. I. Electrophysiological and psychophysical measures. Hearing Research
- Rules of the mind
- From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics. Behavioral and Brain Sciences
- Brain-implantable biomimetic electronics as the next era in neural prosthetics. Proceedings of the IEEE
- The thought translation device (TTD) for completely paralyzed patients. IEEE Transactions on Rehabilitation Engineering
- Theories of learning
- Coordinated machine learning and decision support for situation awareness. Neural Networks
- Cambrian intelligence: The early history of the new AI
- Multiple neural spike train data analysis: State-of-the-art and future challenges. Nature Neuroscience
- Rhythms of the brain
- The emergence of intelligence. Scientific American
- Learning to control a brain–machine interface for reaching and grasping by primates. PLoS Biology
- Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex. Nature Neuroscience
- Multichannel surface recordings on the visual cortex: Implications for a neuroprosthesis. Journal of Neural Engineering
- Adaptive brain interfaces. Communications of the ACM
- Co-adaptive brain machine interface via reinforcement learning [Special issue on Hybrid Bionics]. IEEE Transactions on Biomedical Engineering
- Connecting cortex to machines: Recent advances in brain interfaces. Nature Neuroscience
- The motor cortex of the rat: Cytoarchitecture and microstimulation mapping. Journal of Comparative Neurology
- Multiple model-based reinforcement learning. Neural Computation
- The remembered present: A biological theory of consciousness
- The mindful brain: Cortical organization and the group-selective theory of higher brain function
- Mass action in the nervous system: Examination of the neurophysiological basis of adaptive behavior through EEG
- A quantitative comparison of linear and non-linear models of motor cortical activity for the encoding and decoding of arm motions
- On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. Journal of Neuroscience
- Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population. The Journal of Neuroscience
- Information about movement direction obtained from synchronous activity of motor cortical neurons. Proceedings of the National Academy of Sciences of the United States of America
- Neural networks: A comprehensive foundation