DOI: 10.1145/3472749.3474739

SoundsRide: Affordance-Synchronized Music Mixing for In-Car Audio Augmented Reality

Published: 12 October 2021

Abstract

Music is a central instrument in video gaming to attune a player’s attention to the current atmosphere and increase their immersion in the game. We transfer the idea of scene-adaptive music to car drives and propose SoundsRide, an in-car audio augmented reality system that mixes music in real-time synchronized with sound affordances along the ride. After exploring the design space of affordance-synchronized music, we design SoundsRide to temporally and spatially align high-contrast events on the route, e.g., highway entrances or tunnel exits, with high-contrast events in music, e.g., song transitions or beat drops, for any recorded and annotated GPS trajectory by a three-step procedure. In real-time, SoundsRide 1) estimates temporal distances to events on the route, 2) fuses these novel estimates with previous estimates in a cost-aware music-mixing plan, and 3) if necessary, re-computes an updated mix to be propagated to the audio output. To minimize user-noticeable updates to the mix, SoundsRide fuses new distance information with a filtering procedure that chooses the best updating strategy given the last music-mixing plan, the novel distance estimations, and the system parameterization. We technically evaluate SoundsRide and conduct a user evaluation with 8 participants to gain insights into how users perceive SoundsRide in terms of mixing, affordances, and synchronicity. We find that SoundsRide can create captivating music experiences and positively as well as negatively influence subjectively perceived driving safety, depending on the mix and user.
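The abstract's three-step loop (estimate temporal distances to route events, fuse them with the current plan, re-mix only when necessary) can be illustrated with a minimal sketch. This is not the paper's implementation: `RouteEvent`, `eta_seconds`, `should_replan`, and the 1-second tolerance are all hypothetical names and values chosen for illustration.

```python
from dataclasses import dataclass


@dataclass
class RouteEvent:
    """A high-contrast event annotated on a recorded GPS trajectory."""
    name: str
    position_m: float  # distance along the trajectory, in meters


def eta_seconds(event: RouteEvent, current_pos_m: float, speed_mps: float) -> float:
    """Step 1: estimate the temporal distance to a route event at current speed."""
    remaining_m = event.position_m - current_pos_m
    return remaining_m / speed_mps


def should_replan(planned_eta_s: float, new_eta_s: float, tolerance_s: float = 1.0) -> bool:
    """Steps 2-3 (simplified): a cost-aware check that keeps the existing
    music-mixing plan unless the new estimate drifts far enough from it
    that the misalignment would become user-noticeable."""
    return abs(new_eta_s - planned_eta_s) > tolerance_s


tunnel_exit = RouteEvent("tunnel exit", position_m=1200.0)
eta = eta_seconds(tunnel_exit, current_pos_m=900.0, speed_mps=25.0)  # 12.0 s
print(should_replan(planned_eta_s=13.5, new_eta_s=eta))  # prints True: drift exceeds 1 s
```

In the real system the fusion step weighs update strategies against the last plan and the system parameterization; the fixed threshold here only conveys the idea that re-computing the mix has a perceptual cost worth avoiding.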




Published In

UIST '21: The 34th Annual ACM Symposium on User Interface Software and Technology
October 2021
1357 pages
ISBN:9781450386357
DOI:10.1145/3472749
Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. auditory augmented reality
  2. context-adaptive music
  3. sound affordances

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

UIST '21

Acceptance Rates

Overall Acceptance Rate 561 of 2,567 submissions, 22%


Article Metrics

  • Downloads (Last 12 months)244
  • Downloads (Last 6 weeks)26
Reflects downloads up to 08 Mar 2025

Cited By

  • (2024) Move, Connect, Interact: Introducing a Design Space for Cross-Traffic Interaction. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 3, 1–40. https://doi.org/10.1145/3678580 (9 Sep 2024)
  • (2024) Towards Music-Aware Virtual Assistants. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–14. https://doi.org/10.1145/3654777.3676416 (13 Oct 2024)
  • (2024) Story-Driven: Exploring the Impact of Providing Real-time Context Information on Automated Storytelling. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–15. https://doi.org/10.1145/3654777.3676372 (13 Oct 2024)
  • (2024) SoundShift: Exploring Sound Manipulations for Accessible Mixed-Reality Awareness. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 116–132. https://doi.org/10.1145/3643834.3661556 (1 Jul 2024)
  • (2024) AdaptiveVoice: Cognitively Adaptive Voice Interface for Driving Assistance. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3613904.3642876 (11 May 2024)
  • (2024) MARingBA: Music-Adaptive Ringtones for Blended Audio Notification Delivery. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3613904.3642376 (11 May 2024)
  • (2024) Portobello: Extending Driving Simulation from the Lab to the Road. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3613904.3642341 (11 May 2024)
  • (2024) User Experience Research Play Card in Augmented Reality: A sensemaking case study on designing Visibility and Modality. 2024 10th International Conference on Virtual Reality (ICVR), 127–132. https://doi.org/10.1109/ICVR62393.2024.10868803 (24 Jul 2024)
  • (2023) Hybrid Vibration Reduction System for a Vehicle Suspension under Deterministic and Random Excitations. Energies 16, 5, 2202. https://doi.org/10.3390/en16052202 (24 Feb 2023)
  • (2023) Velocity control of mobile robots using Pace Maker Light system. Transactions of the JSME (in Japanese) 89, 918, 22-00250. https://doi.org/10.1299/transjsme.22-00250 (2023)
