Distinct neural substrates for the perception of real and virtual visual worlds
Introduction
Virtual reality is increasingly used for training in a wide range of contexts. For example, virtual human agents simulated using cartoons have been used to help students learn to perform physical, procedural tasks (Rickel and Johnson, 1999). Animated agents in virtual environments have also been used to train skills that require a high level of flexible, interpersonal interaction, such as psychotherapy (Beutler and Harwood, 2004). However, whether the human brain differentially perceives and interacts with agents in the real and virtual worlds remains poorly understood. Recent functional magnetic resonance imaging (fMRI) studies have shown that, when observers believe that actions come from real human agents, specific brain regions, such as the medial prefrontal cortex (MPFC), show stronger activation than when they believe the actions come from animated agents simulated by computers (Gallagher et al., 2002; Ramnani and Miall, 2004), suggesting that specific neural substrates may be involved in discriminating between human and non-human agents.
The current study assessed whether different brain regions are engaged when we simply perceive human agents in the real world compared with when we perceive agents in virtual worlds. To investigate this, we used fMRI to measure brain activations while participants observed movie and cartoon clips, which presented brief sequences of actions involving humans in real-life situations (movie clips) or actions involving either human or non-human agents in virtual worlds (cartoon clips). Movies present real images (photographs of a physical environment), whereas cartoons present virtual images (a simulation based on the physical principles of such an environment). Brain activity while watching the clips was compared with that while viewing static images taken from the movie and cartoon clips and presented in random order, to control for any differences in low-level visual feature processing. Relative to this static-image baseline, the movie and cartoon clips presented continuous, coherent visual events that induced explanatory predictions of behaviour. We aimed to identify whether there are neural substrates that differentiate the perception of human agents in the real visual world (in movie clips) from the perception of human or non-human characters in a virtual world (in cartoons).
Section snippets
Subjects
Twelve adults (6 male; 21–41 years of age, mean 25.5) with no neurological or psychiatric history participated in this study. All participants were right-handed, had normal or corrected-to-normal vision, and were not colour blind. Informed consent was obtained from all participants prior to scanning. This study was approved by the Academic Committee of the Department of Psychology, Peking University.
Stimuli and procedure
The stimuli were presented through an LCD projector onto a rear-projection screen located at a
Results
In Condition A, we recorded fMRI signals from subjects who freely viewed silent movie clips depicting real-life situations, such as human activities at a subway station or in a classroom (Fig. 1a). The contrast of movies versus random static images revealed activation in the bilateral middle temporal cortex (MT) and the posterior superior temporal sulcus (STS) (centred at −51, −68, 5, Z = 4.65, P < 0.03, corrected; and 51, −68, 3, Z = 4.62, P < 0.001, corrected, see Fig. 3a), and the occipital cortex
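The kind of voxelwise contrast reported above (one condition minus its baseline, summarised by Z scores at a corrected threshold) can be sketched with a mass-univariate general linear model. The sketch below is purely schematic: the regressors, the simulated data, and the uncorrected threshold are hypothetical stand-ins for illustration, not the study's actual design or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 1000

# Hypothetical design matrix: [clip regressor, static-image regressor, constant].
# In a real analysis these columns would be condition onsets convolved with a
# haemodynamic response function.
X = np.column_stack([
    rng.binomial(1, 0.5, n_scans),
    rng.binomial(1, 0.5, n_scans),
    np.ones(n_scans),
])
Y = rng.normal(size=(n_scans, n_voxels))  # simulated BOLD time series

# Fit the GLM at every voxel at once: beta has shape (3, n_voxels).
beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)

# Contrast vector for "clips minus static images".
c = np.array([1.0, -1.0, 0.0])

# Voxelwise t statistic for the contrast.
resid = Y - X @ beta
dof = n_scans - np.linalg.matrix_rank(X)
sigma2 = (resid ** 2).sum(axis=0) / dof          # residual variance per voxel
var_c = c @ np.linalg.inv(X.T @ X) @ c           # contrast variance factor
t = (c @ beta) / np.sqrt(sigma2 * var_c)

# Voxels exceeding an arbitrary uncorrected threshold (real analyses would
# apply a multiple-comparisons correction before reporting clusters).
active = np.flatnonzero(np.abs(t) > 3)
```

In practice this fitting and thresholding is handled by dedicated packages (e.g. SPM or FSL), which also convert t statistics to the Z scores and corrected P values quoted in the text.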
Discussion
Our functional neuroimaging findings provide important clues about the way we perceive characters within coherent successive events in real and virtual worlds. A number of common areas were activated by all the conditions with movie and cartoon clips, relative to their static-image baselines. The medial occipital cortex and MT are likely engaged by the processing of low-level visual features of the moving images, such as changes in shape and colour (Livingstone and Hubel, 1998), and motion
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Projects 30225026 and 30328016), the Ministry of Science and Technology of China (Project 2002CCA01000), the Ministry of Education of China (02170), the Medical Research Council (UK), and the Royal Society (UK).
References

- et al. (2000). A PET investigation of the attribution of intentions with a nonverbal task. NeuroImage.
- et al. (1998). Neuroimaging studies of the cerebellum: language, learning and memory. Trends Cogn. Sci.
- et al. (1995). Other minds in the brain: a functional imaging study of "theory of mind" in story comprehension. Cognition.
- et al. (2000). Reading the mind in cartoons and stories: an fMRI study of 'theory of mind' in verbal and nonverbal tasks. Neuropsychologia.
- et al. (2002). Imaging the intentional stance in a competitive game. NeuroImage.
- et al. (2004). Watching social interactions produces dorsomedial prefrontal and medial parietal BOLD fMRI signal increases compared to a resting baseline. NeuroImage.
- et al. (1998). The effect of face inversion on the human fusiform face area. Cognition.
- et al. (2003). Brain activations during visual search: contributions of search efficiency versus feature binding. NeuroImage.
- et al. (2002). Attention to action: specific modulation of corticocortical interactions in humans. NeuroImage.
- et al. (2004). Virtual reality in psychotherapy training. J. Clin. Psychol.
- Movement and mind: a functional imaging study of perception and interpretation of complex intentional movement patterns. NeuroImage.
- What causes the face inversion effect? J. Exp. Psychol. Hum. Percep. Perform.
- Interacting minds—a biological basis. Science.
- The effects of learning and intention on the neural network involved in the perception of meaningless actions. Brain.
- Searching for a baseline: functional imaging and the resting human brain. Nat. Rev. Neurosci.
- The neural mechanisms of top-down attentional control. Nat. Neurosci.
1. Present address: Department of Psychology, University of Minnesota, 75 East River Road, Minneapolis, MN 55455.