Abstract
Mobile robots with auditory perception usually adopt the "stop-perceive-act" principle to avoid the sounds they themselves make while moving, caused by motor noise or bumpy roads. Although this principle reduces the complexity of the problems involved in auditory processing for mobile robots, it restricts their auditory capabilities. In this paper, sound and visual tracking are investigated to attain robust object tracking by compensating for the drawbacks of each modality: visual tracking may be difficult in the case of occlusion, while sound tracking may be ambiguous in localization due to the nature of auditory processing. For this purpose, we present an active audition system for a humanoid robot. The audition system of an intelligent humanoid requires localization of sound sources and identification of the meanings of sounds in the auditory scene. The active audition reported in this paper focuses on improved sound source tracking by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG, the humanoid, actively moves its head to improve localization by aligning its microphones orthogonally to the sound source and by capturing the possible sound sources by vision. The system adaptively cancels motor noise using motor control signals. The experimental results demonstrate the effectiveness and robustness of sound and visual tracking.
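The head movement described above exploits the fact that a two-microphone array resolves direction most finely when it faces the source. As a rough illustration of that geometric point only (not the authors' actual algorithm), the sketch below estimates source azimuth from the interaural time difference obtained by cross-correlating the two microphone channels; the sample rate and microphone spacing are assumed values chosen for the example.

# Minimal sketch, assuming a two-microphone setup; all numeric parameters
# (sample rate, microphone spacing) are illustrative, not taken from the paper.
import numpy as np

SAMPLE_RATE = 16000      # Hz (assumed)
MIC_DISTANCE = 0.18      # metres between the two microphones (assumed)
SPEED_OF_SOUND = 343.0   # m/s

def estimate_azimuth(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate source azimuth in radians (0 = straight ahead) from the
    inter-channel delay that maximises the cross-correlation."""
    max_lag = int(np.ceil(MIC_DISTANCE / SPEED_OF_SOUND * SAMPLE_RATE))
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(left[max(0, -k):len(left) - max(0, k)],
                   right[max(0, k):len(right) - max(0, -k)]) for k in lags]
    itd = lags[int(np.argmax(corr))] / SAMPLE_RATE   # delay in seconds
    # Clip to the physically possible range before inverting the sine.
    s = np.clip(SPEED_OF_SOUND * itd / MIC_DISTANCE, -1.0, 1.0)
    return float(np.arcsin(s))

# Since azimuth = arcsin(c * ITD / d), the delay-to-angle mapping is steepest
# near 0 rad, i.e. when the source is directly in front of the microphone
# pair, so a fixed timing error yields the smallest angular error there;
# this is one way to read the benefit of turning the head toward the source.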
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Okuno, H.G., Nakadai, K., Lourens, T., Kitano, H. (2001). Sound and Visual Tracking for Humanoid Robot. In: Monostori, L., Váncza, J., Ali, M. (eds) Engineering of Intelligent Systems. IEA/AIE 2001. Lecture Notes in Computer Science, vol 2070. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45517-5_71