
Bayesian Fusion of Auditory and Visual Spatial Cues during Fixation and Saccade in Humanoid Robot

  • Conference paper
Advances in Neuro-Information Processing (ICONIP 2008)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5506)

Abstract

In this paper, Bayesian fusion of auditory and visual spatial cues is implemented in a humanoid robot to increase localization accuracy when an audiovisual stimulus is presented. Auditory and visual localization performance was tested under two conditions: fixation and saccade. The experiments show that saccades greatly reduced the accuracy of auditory localization in the humanoid robot, and that the Bayesian model became unreliable whenever the underlying auditory and visual estimates were unreliable, particularly during saccade. Localization at saccade onset and at changes in the direction of motion was excluded from the analysis, and only azimuth position was considered.
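The abstract does not give the authors' exact model, but the standard Bayesian account of audiovisual cue fusion combines the two unimodal estimates weighted by their reliabilities (inverse variances). The sketch below is a minimal illustration of that idea under the assumption of independent Gaussian auditory and visual azimuth likelihoods; the function name and the example numbers are illustrative, not taken from the paper.

```python
import math

def fuse_gaussian_cues(mu_a, var_a, mu_v, var_v):
    """Fuse auditory and visual azimuth estimates (independent Gaussians).

    Each cue is weighted by its inverse variance (reliability); the fused
    estimate always has lower variance than either cue alone.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    mu_fused = w_a * mu_a + w_v * mu_v
    var_fused = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu_fused, var_fused

# Hypothetical example: a noisy auditory cue (sigma = 10 deg) and a sharper
# visual cue (sigma = 2 deg) reporting nearby azimuths.
mu, var = fuse_gaussian_cues(mu_a=15.0, var_a=100.0, mu_v=10.0, var_v=4.0)
print(mu, math.sqrt(var))  # fused estimate lies close to the visual cue
```

This also illustrates the failure mode the abstract reports: when auditory variance blows up during a saccade, the fused estimate simply collapses onto the (possibly also degraded) visual cue, so the fusion is only as reliable as its inputs.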





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wong, W.K., Neoh, T.M., Loo, C.K., Ong, C.P. (2009). Bayesian Fusion of Auditory and Visual Spatial Cues during Fixation and Saccade in Humanoid Robot. In: Köppen, M., Kasabov, N., Coghill, G. (eds) Advances in Neuro-Information Processing. ICONIP 2008. Lecture Notes in Computer Science, vol 5506. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02490-0_134

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-02490-0_134

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-02489-4

  • Online ISBN: 978-3-642-02490-0

  • eBook Packages: Computer Science (R0)
