
On the human ability to discriminate audio ambiances from similar locations of an urban environment

  • Original Article
  • Published in: Personal and Ubiquitous Computing

Abstract

When developing advanced location-based systems augmented with audio ambiances, it would be cost-effective to use a few representative samples from typical environments to describe a larger number of similar locations. The aim of this experiment was to study the human ability to discriminate audio ambiances recorded in similar locations of the same urban environment. A listening experiment comprising material from three different environments and nine different locations was carried out with nineteen subjects to study the credibility of audio representations of certain environments, which would diminish the need for collecting huge audio databases. The first goal was to study to what degree humans are able to recognize whether a recording was made in an indicated location or in another, similar location, when presented with the name of the place, its location on a map, and the associated audio ambiance. The second goal was to study whether the ability to discriminate audio ambiances from different locations is affected by a visual cue, presented as additional information in the form of a photograph of the suggested location. The results indicate that audio ambiances from similar urban areas of the same city differ enough that it is not acceptable to use a single recording as the ambiance representing different yet similar locations. Including an image was found to increase the perceived credibility of all the audio samples in representing a certain location. The results suggest that developers of audio-augmented location-based systems should aim to use audio samples recorded on-site for each location in order to achieve a credible impression.



Acknowledgments

Funding from the Live Mixed Reality 2011 project by the Finnish Funding Agency for Technology and Innovation (TEKES) and Nokia Research Center is gratefully acknowledged. The authors would also like to thank all the test subjects who participated in the listening experiment.

Author information

Corresponding author

Correspondence to Dani Korpi.

About this article

Cite this article

Korpi, D., Heittola, T., Partala, T. et al. On the human ability to discriminate audio ambiances from similar locations of an urban environment. Pers Ubiquit Comput 17, 761–769 (2013). https://doi.org/10.1007/s00779-012-0625-z
