
Environmental Sound Recognition for Robot Audition Using Matching-Pursuit

  • Conference paper
Modern Approaches in Applied Intelligence (IEA/AIE 2011)

Abstract

Our goal is a robot audition system that can recognize multiple environmental sounds and make use of them in human-robot interaction. The main problems in environmental sound recognition for robot audition are: (1) recognition under a large amount of background noise, including noise generated by the robot itself, and (2) the need for feature extraction that is robust against the spectral distortion caused by separating multiple sound sources. This paper presents the recognition of two environmental sounds emitted simultaneously, using matching pursuit (MP) with the Gabor wavelet, which extracts salient audio features from a signal. The two environmental sounds come from different directions; they are localized by multiple signal classification (MUSIC) and, using this geometric information, separated by geometric source separation (GSS) with the aid of measured head-related transfer functions. The experimental results show the noise robustness of MP, although the performance depends on the properties of the sound sources.
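To make the feature-extraction step concrete, below is a minimal Python sketch of matching pursuit over a Gabor dictionary, the kind of greedy decomposition the paper relies on to pull salient time-frequency atoms out of a noisy signal. The helper names (gabor_atom, build_dictionary, matching_pursuit), the dictionary parameters (scales, frequencies, time shifts), the iteration count, and the toy input are illustrative assumptions, not the authors' actual configuration.

# Minimal sketch: matching pursuit (Mallat & Zhang style) over a Gabor
# dictionary. All parameter choices here are illustrative assumptions.
import numpy as np

def gabor_atom(n, center, scale, freq, phase=0.0):
    """Unit-norm real Gabor atom of length n samples."""
    t = np.arange(n)
    g = np.exp(-np.pi * ((t - center) / scale) ** 2) \
        * np.cos(2 * np.pi * freq * (t - center) + phase)
    return g / (np.linalg.norm(g) + 1e-12)

def build_dictionary(n, scales=(16, 32, 64),
                     freqs=np.linspace(0.01, 0.45, 20), step=32):
    """Gabor dictionary: one atom per (scale, frequency, time-shift) triple."""
    atoms, params = [], []
    for s in scales:
        for f in freqs:
            for u in range(0, n, step):
                atoms.append(gabor_atom(n, u, s, f))
                params.append((u, s, f))
    return np.array(atoms), params

def matching_pursuit(x, atoms, n_iter=30):
    """Greedy MP: at each step pick the atom best correlated with the residual."""
    residual = x.astype(float).copy()
    decomposition = []
    for _ in range(n_iter):
        corr = atoms @ residual               # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))      # most salient atom
        decomposition.append((k, corr[k]))    # (atom index, coefficient)
        residual -= corr[k] * atoms[k]        # remove its contribution
    return decomposition, residual

if __name__ == "__main__":
    # Toy usage: decompose a noisy Gabor-like frame and keep the top atoms
    # as a sparse, noise-robust feature set.
    n = 512
    rng = np.random.default_rng(0)
    signal = gabor_atom(n, 200, 40, 0.12) + 0.3 * rng.standard_normal(n)
    atoms, params = build_dictionary(n)
    feats, res = matching_pursuit(signal, atoms, n_iter=10)
    print("top atoms (time, scale, freq, coeff):")
    for k, c in feats[:5]:
        u, s, f = params[k]
        print(u, s, round(f, 3), round(float(c), 3))

In the paper's pipeline, a decomposition along these lines would be computed on each source after MUSIC localization and geometric source separation, with the selected atom parameters and coefficients serving as the recognition features.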





Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yamakawa, N., Takahashi, T., Kitahara, T., Ogata, T., Okuno, H.G. (2011). Environmental Sound Recognition for Robot Audition Using Matching-Pursuit. In: Mehrotra, K.G., Mohan, C.K., Oh, J.C., Varshney, P.K., Ali, M. (eds) Modern Approaches in Applied Intelligence. IEA/AIE 2011. Lecture Notes in Computer Science, vol 6704. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21827-9_1

  • DOI: https://doi.org/10.1007/978-3-642-21827-9_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21826-2

  • Online ISBN: 978-3-642-21827-9

  • eBook Packages: Computer Science, Computer Science (R0)
