Real-time Sound Source Localization and Separation based on Active Audio-Visual Integration

  • Conference paper
Computational Methods in Neural Modeling (IWANN 2003)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 2686))

Abstract

Robot audition in the real world must cope with environmental noise, reverberation, and the motor noise caused by the robot's own movements. This paper presents the active direction-pass filter (ADPF), which separates sounds originating from a specified direction using a pair of microphones. The ADPF is implemented by hierarchical integration of visual and auditory processing, with hypothetical reasoning on the interaural phase difference (IPD) and interaural intensity difference (IID) of each subband. In creating hypotheses, the reference IPD and IID values are calculated on demand from the auditory epipolar geometry. Since the performance of the ADPF depends on the direction, the ADPF controls the direction by motor movement. Human tracking and sound source separation based on the ADPF are implemented on an upper-torso humanoid and run in real time on four PCs connected over Gigabit Ethernet. The signal-to-noise ratio (SNR) of each sound separated by the ADPF from a mixture of two speech signals of equal loudness improves from 0 dB to about 10 dB.
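The core idea of direction-pass filtering, as described in the abstract, is to pass only the subbands whose observed IPD matches the IPD predicted for a hypothesized direction. The sketch below illustrates that idea only; it is not the authors' implementation. The microphone spacing, phase tolerance, frame size, and the plain free-field delay model (used here in place of the paper's auditory epipolar geometry) are all assumptions for illustration.

```python
import numpy as np

def direction_pass_filter(left, right, fs, theta_deg, mic_dist=0.2,
                          c=343.0, tol=0.3, frame=512):
    """Pass the subbands whose observed interaural phase difference (IPD)
    matches the IPD expected for a source at theta_deg (0 = front).
    A simple free-field delay model stands in for the paper's
    auditory epipolar geometry; mic_dist and tol are assumed values."""
    # Expected interaural delay (s) for the hypothesized direction.
    tau = mic_dist * np.sin(np.radians(theta_deg)) / c
    win = np.hanning(frame)
    hop = frame // 2
    freqs = np.fft.rfftfreq(frame, 1.0 / fs)
    out = np.zeros_like(left, dtype=float)
    for start in range(0, len(left) - frame + 1, hop):
        L = np.fft.rfft(win * left[start:start + frame])
        R = np.fft.rfft(win * right[start:start + frame])
        ipd = np.angle(L * np.conj(R))      # observed IPD per subband
        ipd_ref = 2 * np.pi * freqs * tau   # hypothesized IPD per subband
        # Wrap the difference into (-pi, pi] before thresholding.
        diff = np.angle(np.exp(1j * (ipd - ipd_ref)))
        mask = np.abs(diff) < tol           # keep only matching subbands
        out[start:start + frame] += np.fft.irfft(L * mask, frame) * win
    return out
```

With a source directly in front (identical signals at both microphones, so the observed IPD is zero), the filter steered to 0 degrees passes nearly all of the energy, while the same filter steered to 60 degrees rejects it, which is the direction-selective behavior the ADPF exploits.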




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Okuno, H.G., Nakadai, K. (2003). Real-time Sound Source Localization and Separation based on Active Audio-Visual Integration. In: Mira, J., Álvarez, J.R. (eds) Computational Methods in Neural Modeling. IWANN 2003. Lecture Notes in Computer Science, vol 2686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44868-3_16

  • Print ISBN: 978-3-540-40210-7

  • Online ISBN: 978-3-540-44868-6

