Abstract
This paper presents a sound localization method designed to handle blind regions. The proposed approach mimics humans' ability to infer what is happening in a blind region from prior knowledge. A user study was conducted to demonstrate the usefulness of the method for human-robot interaction in environments with blind regions: the subjects took part in a shoplifting scenario in which the shop clerk was a robot that had to rely on its hearing to monitor a blind region. The participants recognized the robot's enhanced capability, and this favorably affected their rating of the robot using the proposed method.
This research was supported by JST CREST Grant Number JPMJCR17A2, Japan.
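The idea of combining acoustic evidence with prior knowledge about a blind region can be sketched as a simple Bayesian fusion over candidate source positions. The grid, the prior, and the likelihood values below are illustrative assumptions, not taken from the paper; in practice the likelihood would come from a microphone-array localization stage (e.g., a steered-response-power map) and the prior from a map of the environment.

```python
# Hypothetical sketch: fusing an acoustic likelihood with prior knowledge
# of a blind region, on a 1-D grid of candidate source positions.
# All numbers are illustrative assumptions for this example.

def normalize(p):
    """Scale a list of non-negative weights so they sum to 1."""
    s = sum(p)
    return [x / s for x in p]

# Candidate positions 0..9; positions 6..9 lie inside the blind region.
grid = list(range(10))

# Prior from knowledge of the environment: sounds are assumed more likely
# to originate inside the blind region (e.g., an occluded shop aisle).
prior = normalize([1.0 if 6 <= x <= 9 else 0.2 for x in grid])

# Acoustic likelihood per position, e.g., from a steered-response-power
# map; broad and ambiguous because direct paths are occluded.
likelihood = normalize([0.05] * 5 + [0.15, 0.20, 0.25, 0.15, 0.05])

# Bayesian fusion: posterior is proportional to likelihood times prior.
posterior = normalize([l * p for l, p in zip(likelihood, prior)])

# The most probable source position ends up inside the blind region.
best = max(grid, key=lambda x: posterior[x])
print(best)
```

Here the prior sharpens an ambiguous acoustic map so the estimate lands inside the blind region, which is the intuition the abstract describes: prior knowledge stands in for the missing line of sight.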
© 2019 Springer Nature Switzerland AG
Cite this paper
Even, J., Satake, S., Kanda, T. (2019). Monitoring Blind Regions with Prior Knowledge Based Sound Localization. In: Salichs, M., et al. Social Robotics. ICSR 2019. Lecture Notes in Computer Science, vol 11876. Springer, Cham. https://doi.org/10.1007/978-3-030-35888-4_64
Print ISBN: 978-3-030-35887-7
Online ISBN: 978-3-030-35888-4