Monitoring Blind Regions with Prior Knowledge Based Sound Localization

  • Conference paper
Social Robotics (ICSR 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11876)


Abstract

This paper presents a sound localization method designed to deal with blind regions. The proposed approach mimics the human ability to infer what is happening in a blind region by using prior knowledge. A user study was conducted to demonstrate the usefulness of the proposed method for human-robot interaction in environments with blind regions. The subjects participated in a shoplifting scenario in which the shop clerk was a robot that had to rely on its hearing to monitor a blind region. The participants recognized the robot's enhanced capability, and this favorably affected their rating of the robot using the proposed method.
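The abstract describes fusing acoustic observations with prior knowledge about a blind region. The paper's actual algorithm is not reproduced here; the following is only an illustrative sketch, under the assumption that the robot maintains a prior probability over candidate source directions (e.g., higher near a shelf it cannot see) and combines it with a Gaussian direction-of-arrival likelihood via Bayes' rule. All names (`doa_likelihood`, `localize`, the shelf interval) are hypothetical.

```python
import numpy as np

# Hypothetical 1-D grid of candidate source directions (radians), 5-degree bins.
angles = np.linspace(-np.pi, np.pi, 72)

# Prior knowledge: sounds are assumed more likely near a shelf in the blind
# region (here, directions between 0.5 and 1.5 rad).
prior = np.full_like(angles, 0.5)
prior[(angles > 0.5) & (angles < 1.5)] = 3.0
prior /= prior.sum()

def doa_likelihood(measured_doa, sigma=0.3):
    """Gaussian likelihood of each candidate direction given a measured
    direction of arrival (DOA) from a microphone array."""
    return np.exp(-0.5 * ((angles - measured_doa) / sigma) ** 2)

def localize(measured_doa):
    """Fuse the DOA likelihood with the prior map (Bayes' rule) and return
    the most probable direction together with the full posterior."""
    posterior = prior * doa_likelihood(measured_doa)
    posterior /= posterior.sum()
    return angles[np.argmax(posterior)], posterior

estimate, posterior = localize(measured_doa=0.9)
```

With a measurement near the shelf, the posterior concentrates there; a measurement far from any prior hotspot yields an estimate driven mostly by the likelihood. This is the generic Bayesian-fusion pattern the abstract alludes to, not the method evaluated in the paper.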

This research was supported by JST CREST Grant Number JPMJCR17A2, Japan.



Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Jani Even.

Editor information

Editors and Affiliations

Rights and permissions

Reprints and permissions

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Even, J., Satake, S., Kanda, T. (2019). Monitoring Blind Regions with Prior Knowledge Based Sound Localization. In: Salichs, M., et al. Social Robotics. ICSR 2019. Lecture Notes in Computer Science, vol. 11876. Springer, Cham. https://doi.org/10.1007/978-3-030-35888-4_64

  • DOI: https://doi.org/10.1007/978-3-030-35888-4_64

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-35887-7

  • Online ISBN: 978-3-030-35888-4

  • eBook Packages: Computer Science, Computer Science (R0)
