
Neural Networks

Volume 23, Issue 10, December 2010, Pages 1155-1163

Modeling cognitive and emotional processes: A novel neural network architecture

https://doi.org/10.1016/j.neunet.2010.07.004

Abstract

In our continuing attempts to model natural intelligence and emotions in machine learning, many research works have emerged with different methods that are often driven by engineering concerns and share the common goal of modeling human perception in machines. This paper aims to go further in that direction by investigating the integration of emotion at the structural level of cognitive systems using the novel emotional DuoNeural Network (DuoNN). This network has hidden-layer DuoNeurons, each of which embeds two neurons: a dorsal neuron and a ventral neuron for cognitive and emotional data processing, respectively. When input visual stimuli are presented to the DuoNN, the dorsal cognitive neurons process local features while the ventral emotional neurons process the entire pattern. We present the computational model and the learning algorithm of the DuoNN, the method for streaming the input information in parallel cognitive and emotional streams, and a comparison between the DuoNN and a recently developed emotional neural network. Experimental results show that the DuoNN architecture, its configuration, and the additional emotional information processing yield higher recognition rates and faster learning and decision making.

Introduction

Cognition, or understanding of the world, arises from mechanisms of concepts, also referred to as internal representations or models (Perlovsky, 2009a). The term ‘cognition’ refers to all processes by which sensory input is transformed, reduced, elaborated, stored, recovered, and used; cognition is involved in everything a human being might possibly do (Josephs, 2000). ‘Emotion’ tends to be used in clinical terminology for what a person is feeling at a given moment. Joy, sadness, anger, fear, disgust, and surprise are often considered the six most basic emotions, and other well-known human emotions (e.g., pride, shame, regret, elation, etc.) are often treated as elaborations or specializations of these six in complex social situations (Levine, 2007). Other definitions of cognitive and emotional processes have also been suggested in works such as (Pessoa, 2008, Prinz, 2004, Ziemke and Lowe, 2009).

Over the past few decades, research works related to emotion-inspired computing have emphasized the role of emotions in both natural and artificial intelligent behavior (Canamero, 2005, Coutinho and Cangelosi, 2007, Coutinho et al., 2005, Fragopanagos and Taylor, 2006, Gnadt and Grossberg, 2008, Grossberg and Merrill, 1992, Khashman, 2008, Martinez-Miranda and Aldea, 2005, Perlovsky, 2006a, Perlovsky, 2006b, Taylor et al., 2005). Several attempts have been made to provide computational models of emotion such as the works in Abu Maria and Abu Zitar (2007), Bates (1994), Canamero (2003), Coutinho and Cangelosi (2007), Doya (2002), El-Nasr, Yen, and Ioerger (2000), Gratch (2000), Grossberg and Gutowski (1987), Grossberg and Seidman (2006), Gobbini and Haxby (2007), Khashman (2008), Kort, Reilly, and Picard (2001), Poel, den Akker, Nijholt, and van Kesteren (2002), Ushida, Hirayama, and Nakajima (1998) and Ziemke and Lowe (2009). These works have inspired many researchers to continue working on the difficult, and for some people radical, task of developing machines that ‘feel’.

For the work we present here, and for the idea of the novel ‘DuoNeuron’, we have been particularly inspired by the work of Fragopanagos and Taylor (2006) on modeling attention and emotion, in which they distinctly associate the brain’s ventral network with emotion and the dorsal network with cognition. They stated: “Brain imaging/deficit results in depressives indicate the division of processing into a ventral network (for emotion) and a dorsal one (for cognition), where the imbalance between the two leads to reduction of the cognitive activity and an excess of limbic activity”. In that work the authors proposed a control model of attention with the addition of valence as an emotional component. Space does not permit a detailed description of this model; however, the outcome of their work can be summarized by their overall conclusion: “it is not possible to consider emotion processing without inclusion of attention, and conversely that emotion functions importantly to help guide attention to emotionally valuable stimuli” (Fragopanagos & Taylor, 2006).

The distinction between emotional and cognitive processing has been emphasized in many works which confirm the differences between the two processes and their parallelism and, in some works, question the interaction of the two processes (Blangero et al., 2008, Chokshi et al., 2004, Drevets and Raichle, 1998, Koshizen and Tsujino, 2001, Litt et al., 2008, Mériau et al., 2006, Mitchell et al., 2003, Morey et al., 2008, Poggio, 2007, Taylor and Liberzon, 2007, Xu and Wang, 2007). One particular recent work, which has also inspired our work in this paper, is by Perlovsky (2009b), who proposed a dual model for integrating language and cognition: “Every model in the human mind is not separately cognitive or linguistic, and still cognitive and linguistic contents are separate to a significant extent” (Perlovsky, 2009b). The dualism of two processes which are separate and yet inseparable, together with the association of the ventral neural network with emotion and the dorsal one with cognition, forms the basis for our work in this paper.

Our hypothesis is that the emotional and cognitive processes which are activated by visual stimuli, such as an image, can be modeled in machines, and the way to model these two kinds of processes, is to clearly define separate ‘routes’ for emotional and cognitive information within the model, while processing the information collectively. Once the two routes or information streams are defined, there is a need to define a way to represent the information flowing through these two streams.

While a complete understanding of the brain and how it works is yet to be achieved, some works have given us insight into how to represent the input information and how the emotional (ventral) and cognitive (dorsal) networks are organized in the brain, in particular when responding to a visual input stimulus. According to Lamme, Super, and Spekreijse (1998), the human primary visual system consists of numerous diverse areas of the cerebral cortex called the visual cortex, where visual processing occurs in the brain. The visual cortex is divided into the ventral stream (associated with object recognition and form representation) and the dorsal stream (which deals with object locations and motion). The dorsal stream is a pathway for visual information flowing through the visual cortex that provides a detailed or local representation of information, such as the “where” stream or the “how” stream. The ventral stream gets its main input from the parvocellular layer of the lateral geniculate nucleus of the thalamus, where each visual area contains a full representation of visual space, thus providing a global or overall representation of information, such as the “what” stream. Therefore, we associate the ventral/emotional information with the global representation of input visual stimuli, and the dorsal/cognitive information with the local representation.

It should be noted here that, in order to make the distinction between the ventral and the dorsal data streams, the what/where distinction has been used more commonly than the emotional/cognitive distinction. However, the use of one distinction does not contradict the other, as the choice of distinction is usually driven by the engineering concerns and objectives of the model designer. In general, although the two parallel emotional and cognitive processes can be distinguished upon initialization by input stimuli (in this work we focus on visual stimuli), their interaction is inevitable, and in fact necessary, when making decisions.
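The local/global association of the two streams can be illustrated with a minimal sketch of how a single visual stimulus might be split into parallel dorsal and ventral inputs. The patch size, the flattening, and the function name here are our illustrative assumptions, not the paper's exact streaming scheme:

```python
import numpy as np

def split_streams(image, patch=4):
    """Split one visual stimulus into two parallel input streams.

    Dorsal/cognitive stream: local patches of the image (detailed,
    "where"/"how" information). Ventral/emotional stream: the entire
    pattern as one vector (global, "what" information). The patch size
    and flattening are illustrative choices only.
    """
    h, w = image.shape
    # Local representation: non-overlapping patch vectors for the dorsal stream.
    dorsal = [image[r:r + patch, c:c + patch].ravel()
              for r in range(0, h, patch)
              for c in range(0, w, patch)]
    # Global representation: the whole pattern for the ventral stream.
    ventral = image.ravel()
    return np.array(dorsal), ventral

patches, whole = split_streams(np.random.rand(8, 8), patch=4)
print(patches.shape)  # (4, 16): four 4x4 local patches
print(whole.shape)    # (64,): one global pattern vector
```

Both streams are derived from the same stimulus, reflecting the point above that the two processes are distinguished at initialization yet remain two views of one input.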

All the previous attempts to model cognitive and emotional information processing, have certainly contributed to the on-going research into “emotional machines”, and our presented work in this paper goes a step further in that direction with the proposal of the DuoNeural Network (DuoNN).

The concept of the DuoNN and its unique architecture using DuoNeurons in the hidden processing layer, has been motivated by our belief that in order to closely mimic human perception in machine learning, both emotional and cognitive information should be processed, and that a functional mechanism and structure which allows efficient parallel processing of emotional and cognitive information streams must be provided. When designing the DuoNN, we made sure to distinguish between the emotional and the cognitive elements of the network starting with the input visual information, its flow or streaming, the processing neurons and the mechanism of information processing. In general, we associate the emotional elements with the ventral stream, and the cognitive elements with the dorsal stream.

The neuroscientific motivation behind the DuoNeuron is that emotional and cognitive information processing is inseparable and parallel, and is activated upon receiving a visual stimulus. Scaling down these processes, which also occur within subsections of the brain such as the hippocampus, the amygdala, and the hypothalamus, into a single neuron may seem radical; however, the DuoNeuron does not operate as a single neuron, as it is the building block of the hidden layer of the DuoNN, which models the emotion–cognition processing layer.
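A single DuoNeuron of the kind described above can be sketched as a unit embedding a dorsal neuron fed by a local patch and a ventral neuron fed by the global pattern. The paper's actual activation functions, weights, and combination rule are defined in its Section 2; the sigmoid units and the simple averaging of the two activations below are our assumptions for illustration only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DuoNeuron:
    """Illustrative sketch of one hidden-layer DuoNeuron.

    Embeds a dorsal (cognitive) neuron receiving a local patch and a
    ventral (emotional) neuron receiving the global pattern. The sigmoid
    activation and the averaging combination rule are assumptions, not
    the paper's specification.
    """
    def __init__(self, n_local, n_global, seed=0):
        rng = np.random.default_rng(seed)
        self.w_dorsal = rng.normal(scale=0.1, size=n_local)
        self.w_ventral = rng.normal(scale=0.1, size=n_global)

    def forward(self, local_patch, global_pattern):
        a_dorsal = sigmoid(self.w_dorsal @ local_patch)      # cognitive stream
        a_ventral = sigmoid(self.w_ventral @ global_pattern)  # emotional stream
        return 0.5 * (a_dorsal + a_ventral)  # assumed combination rule

neuron = DuoNeuron(n_local=16, n_global=64)
out = neuron.forward(np.ones(16), np.ones(64))
print(0.0 < out < 1.0)  # bounded output from the two sigmoid units
```

The point of the sketch is structural: the two embedded neurons process their streams in parallel, and only their combined activation leaves the unit, mirroring the claim that the two processes are separate yet inseparable.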

The paper is organized as follows: Section 2 introduces and describes the DuoNN novel structure and learning algorithm. Section 3 presents an application of the DuoNN to a facial recognition problem, describes the implementation results, and provides a comparison to a recently developed emotional neural network. Finally, Section 4 concludes the work that is presented within this paper and suggests further work.


DuoNN architecture and learning algorithm

In this section, the novel DuoNeuron, the structure, and the learning algorithm of the DuoNN are introduced and described in detail.

DuoNN application to face recognition

The complexity of a human face arises from the continuous changes in facial features that take place over time. Despite these changes, we humans are still able to recognize faces and identify the person. Unfortunately, this natural ability does not exist in machines, hence the continuous attempts by many researchers to provide methods and models that could recognize human faces; early and recent examples include the works in Belhumeur, Hespanha, and Kriegman (1997), He, Yan, Hu, Niyogi, and

Conclusion

This paper introduces a novel concept in machine learning and artificial intelligence generally, and in neural networks particularly, with the broad aim of further motivating more theoretical developments on the integration of emotions in cognitive modeling. The concept here is to artificially model emotional–cognitive information processing within one neuron as part of a larger hidden layer in neural networks. In our attempt to transform the concept into a functional model, we investigate the


References (55)

  • K. Mériau et al.

    A neural network reflecting individual differences in cognitive processing of emotions during perceptual decision making

    NeuroImage

    (2006)
  • R.L.C. Mitchell et al.

    The neural response to emotional prosody, as revealed by functional magnetic resonance imaging

    Neuropsychologia

    (2003)
  • R.A. Morey et al.

    Neural systems for executive and emotional processing are modulated by symptoms of posttraumatic stress disorder in Iraq war veterans

    Psychiatry Research: Neuroimaging

    (2008)
  • L.I. Perlovsky

    Toward physics of the mind: concepts, emotions, consciousness, and symbols

    Physics of Life Reviews

    (2006)
  • L. Perlovsky

    Language and emotions: emotional Sapir–Whorf hypothesis

    Neural Networks

    (2009)
  • L. Perlovsky

    Language and cognition

    Neural Networks

    (2009)
  • J.G. Taylor et al.

    Emotion and brain: understanding emotions and modelling their recognition

    Neural Networks

    (2005)
  • S.F. Taylor et al.

    Neural correlates of emotion regulation in psychopathology

    Trends in Cognitive Sciences

    (2007)
  • K. Abu Maria et al.

    Emotional agents: a modelling and an application

    Information and Software Technology

    (2007)
  • AT&T Laboratories Cambridge (2009). The ORL database of faces. Available:...
  • J. Bates

    The role of emotion in believable agents

    Communications of the ACM

    (1994)
  • P.N. Belhumeur et al.

    Eigenfaces vs. fisherfaces: recognition using class specific linear projection

    IEEE Transactions on Pattern Analysis and Machine Intelligence

    (1997)
  • A. Blangero et al.

    Dorsal and ventral stream interactions: evidence from optic ataxia. Symposium I: illusions in action: what have illusory contexts told us about the neural control of action?

    Brain and Cognition

    (2008)
  • D. Canamero

    Designing emotions for activity selection in autonomous agents

  • Chokshi, K., Panchev, C., Wermter, S., & Taylor, J. G. (2004). Knowing what and where: a computational model for visual...
  • Coutinho, E., & Cangelosi, A. (2007). Emotion and embodiment in cognitive agents: from instincts to music. In...
  • Coutinho, E., Miranda, E. R., & Cangelosi, A. (2005). Towards a model for embodied emotions. In C. Bento, A. Cardoso, &...

    Adnan Khashman received the B.Eng. degree in Electronic and Communication Engineering from Birmingham University, England, UK, in 1991, and the M.S. and Ph.D. degrees in Electronic Engineering from Nottingham University, England, UK, in 1992 and 1997. During 1998–2001 he was an Assistant Professor and Chairman of the Computer Engineering Department, Near East University, Lefkosa, Turkey. During 2001–2009 he was an Associate Professor and Chairman of the Electrical and Electronic Engineering Department at the same university. From 2007 to 2008 he was also the Vice-Dean of the Engineering Faculty. Since 2009 he has been a full Professor and the Head of the Intelligent Systems Research Group (ISRG), which he founded in 2001 at the same university. His current research interests include emotion modeling in neural networks and their engineering applications.
