Abstract
In this paper, Bayesian fusion of auditory and visual spatial cues is implemented on a humanoid robot to increase localization accuracy when an audiovisual stimulus is presented. Auditory and visual localization performance was tested under two conditions: fixation and saccade. The experiments showed that saccades greatly reduced the accuracy of auditory localization in the humanoid robot, and that the Bayesian model became unreliable when the underlying auditory and visual estimates were themselves unreliable, particularly during saccades. Localization at saccade onset and during changes in the direction of motion was excluded from the tests, and only azimuth position was considered.
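The standard Gaussian cue-combination model behind this kind of audiovisual fusion weights each cue by its reliability (inverse variance), so the fused estimate leans toward the more precise modality. The sketch below illustrates that textbook model, not the paper's exact implementation; the example azimuths and variances are hypothetical.

```python
def fuse_cues(x_a, var_a, x_v, var_v):
    """Reliability-weighted (inverse-variance) fusion of two Gaussian
    position estimates, e.g. an auditory and a visual azimuth.

    Returns the fused estimate and its variance. The fused variance is
    always smaller than either input variance, which is why combining
    cues improves localization when both estimates are reliable.
    """
    w_a = 1.0 / var_a          # reliability of the auditory cue
    w_v = 1.0 / var_v          # reliability of the visual cue
    x_fused = (w_a * x_a + w_v * x_v) / (w_a + w_v)
    var_fused = 1.0 / (w_a + w_v)
    return x_fused, var_fused

# Hypothetical example: a noisy auditory estimate (10 deg, variance 16)
# fused with a sharp visual estimate (4 deg, variance 1). The result
# sits close to the visual estimate because vision is more reliable.
x, v = fuse_cues(10.0, 16.0, 4.0, 1.0)
print(x, v)  # ~4.35 deg, variance ~0.94
```

This also captures the failure mode reported in the abstract: during a saccade both input variances grow, so the fused variance grows with them and the combined estimate is no more trustworthy than its degraded inputs.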
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Wong, W.K., Neoh, T.M., Loo, C.K., Ong, C.P. (2009). Bayesian Fusion of Auditory and Visual Spatial Cues during Fixation and Saccade in Humanoid Robot. In: Köppen, M., Kasabov, N., Coghill, G. (eds) Advances in Neuro-Information Processing. ICONIP 2008. Lecture Notes in Computer Science, vol 5506. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02490-0_134
Print ISBN: 978-3-642-02489-4
Online ISBN: 978-3-642-02490-0