DOI: 10.1145/3266037.3266104
Poster

Augmenting Human Hearing Through Interactive Auditory Mediated Reality

Published: 11 October 2018

Abstract

To filter and shut out an increasingly loud environment, many people resort to personal audio technology: by wearing headphones, they drown out unwanted sounds. This uniform interaction with all surrounding sounds can have a negative impact on social relations and situational awareness. By leveraging mediation through smarter headphones, users gain more agency over their sense of hearing, for instance by selectively altering the volume and other features of specific sounds without losing the ability to add media. In this work, we propose the vision of interactive auditory mediated reality (AMR). To understand users' attitudes and requirements, we conducted a week-long event-sampling study (n = 12) in which users recorded and rated sound sources (n = 225) that they would like to mute, amplify, or turn down. The results indicate that, besides muting, a distinct "quiet-but-audible" volume level exists; it caters to two requirements at once: aesthetics/comfort and information acquisition.
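
The poster does not describe an implementation, so purely as an illustration, the sketch below shows one way the per-source interaction model (mute, a "quiet-but-audible" level, pass-through, amplify, plus overlaid media) could be expressed once the surrounding scene has been separated into labelled sources. The class names, the specific gain values, and the assumed upstream source-separation step are all hypothetical and not taken from the poster.

```python
from dataclasses import dataclass
from enum import Enum

import numpy as np


class GainLevel(Enum):
    """Per-source volume settings mirroring the choices reported in the study.
    The numeric values are illustrative placeholders, not from the poster."""
    MUTE = 0.0       # remove the source entirely
    QUIET = 0.25     # hypothetical "quiet-but-audible" attenuation
    PASS = 1.0       # leave the source unchanged
    AMPLIFY = 1.5    # boost the source


@dataclass
class SoundSource:
    label: str            # e.g. "traffic", "colleague speech"
    samples: np.ndarray   # mono PCM block for this source (assumes prior separation)


def mediate(sources: list[SoundSource],
            preferences: dict[str, GainLevel],
            media: np.ndarray | None = None) -> np.ndarray:
    """Mix separated environmental sources with per-source gains,
    then overlay the user's own media without displacing the scene."""
    out = np.zeros_like(sources[0].samples, dtype=np.float32)
    for src in sources:
        gain = preferences.get(src.label, GainLevel.PASS).value
        out += gain * src.samples.astype(np.float32)
    if media is not None:
        out += media.astype(np.float32)
    return np.clip(out, -1.0, 1.0)  # keep the output in range for playback
```

A preference map such as {"traffic": GainLevel.QUIET, "colleague speech": GainLevel.AMPLIFY} would encode the kind of per-source choices participants reported; how sources are actually separated, classified, and rendered is left open here.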

Cited By

  • (2020) Acoustic Transparency and the Changing Soundscape of Auditory Mixed Reality. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1-16. https://doi.org/10.1145/3313831.3376702. Online publication date: 21-Apr-2020.

    Information

    Published In

    UIST '18 Adjunct: Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology
    October 2018
    251 pages
    ISBN:9781450359498
    DOI:10.1145/3266037
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 11 October 2018

    Author Tags

    1. auditory mediated reality
    2. augmented hearing
    3. hearables

    Qualifiers

    • Poster

    Conference

    UIST '18

    Acceptance Rates

    UIST '18 Adjunct paper acceptance rate: 80 of 375 submissions, 21%
    Overall acceptance rate: 355 of 1,733 submissions, 20%

    Article Metrics

    • Downloads (last 12 months): 13
    • Downloads (last 6 weeks): 1
    Reflects downloads up to 17 Feb 2025
