Abstract
The brain often integrates multisensory sources of information in a way that is close to optimal according to Bayesian principles. Since sensory modalities are grounded in different, body-relative frames of reference, multisensory integration requires accurate transformations of information. We have shown experimentally, for example, that a rotating tactile stimulus on the palm of the right hand can influence the judgment of ambiguously rotating visual displays. Most significantly, this influence depended on palm orientation: when the palm faced upwards, a clockwise rotation on the palm yielded a clockwise visual judgment bias; when it faced downwards, the same clockwise rotation yielded a counterclockwise bias. Thus, tactile rotation cues biased visual rotation judgments in a head-centered reference frame. Recently, we have developed a modular, multimodal arm model that can mimic aspects of such experiments. The model co-represents the state of an arm in several modalities, including a proprioceptive, joint-angle modality as well as head-centered orientation and location modalities. Each modality represents each limb or joint separately. Sensory information is exchanged between modalities via local forward and inverse kinematic mappings. In addition, re-afferent sensory feedback is anticipated and integrated via Kalman filtering. Information across modalities is integrated probabilistically via Bayesian plausibility estimates, continuously maintaining a consistent global estimate of the arm state. This architecture is thus able to model the described effect of posture-dependent motion cue integration: tactile and proprioceptive sensory information may yield top-down biases on visual processing. Equally, such information may influence top-down visual attention, generating expectations of particular arm-dependent motion patterns. Current research implements such effects on visual processing and attention.
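The two core mechanisms described above, remapping a tactile rotation cue into a head-centered frame and fusing it with an ambiguous visual cue by Bayesian precision weighting, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian cue model, the variance values, and all function names are assumptions made for the example.

```python
def remap_to_head_frame(tactile_rotation, palm_up):
    """Remap a tactile rotation sensed on the palm (+1 = clockwise on the
    skin) into a head-centered frame. With the palm facing down, a
    clockwise rotation on the skin corresponds to a counterclockwise
    rotation as seen from the head, so the sign flips."""
    return tactile_rotation if palm_up else -tactile_rotation


def fuse_gaussian_cues(mu_a, var_a, mu_b, var_b):
    """Bayes-optimal fusion of two independent Gaussian cues:
    each cue is weighted by its precision (inverse variance)."""
    precision_a = 1.0 / var_a
    precision_b = 1.0 / var_b
    mu = (precision_a * mu_a + precision_b * mu_b) / (precision_a + precision_b)
    var = 1.0 / (precision_a + precision_b)
    return mu, var


# Ambiguous visual rotation: mean 0 (no net direction), high variance.
# Tactile cue: clockwise on the palm (+1), moderately reliable.
mu_up, _ = fuse_gaussian_cues(0.0, 4.0, remap_to_head_frame(+1.0, palm_up=True), 1.0)
mu_down, _ = fuse_gaussian_cues(0.0, 4.0, remap_to_head_frame(+1.0, palm_up=False), 1.0)
# Palm up biases the fused judgment clockwise (mu_up > 0);
# palm down biases it counterclockwise (mu_down < 0), mirroring the
# posture-dependent effect described in the abstract.
```

The same precision-weighted update is the measurement step of the Kalman filtering mentioned above; repeating it over time with a prediction step yields the continuous state estimation the model performs.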
Conflict of interest
This supplement was not sponsored by outside commercial interests. It was funded entirely by ECONA, Via dei Marsi, 78, 00185 Roma, Italy.
Cite this article
Butz, M.V., Belardinelli, A. & Ehrenfeld, S. Modeling body state-dependent multisensory integration. Cogn Process 13 (Suppl 1), 113–116 (2012). https://doi.org/10.1007/s10339-012-0471-y