DOI: 10.1145/3543174.3545257
Research Article | Public Access

Gesture and Voice Commands to Interact With AR Windshield Display in Automated Vehicle: A Remote Elicitation Study

Published: 17 September 2022

Abstract

Augmented reality (AR) windshield displays (WSDs) offer promising ways to engage in non-driving tasks in automated vehicles. Previous studies have explored how a WSD can present driving- and other task-related information, and how doing so affects driving performance, user experience, and secondary-task performance. Our goal in this study was to examine how drivers expect to use gesture and voice commands to interact with a WSD while performing complex, multi-step personal and work-related tasks in an automated vehicle. In this remote, unmoderated online elicitation study, 31 participants proposed 373 gestures and 373 voice commands for 24 tasks. We analyzed the elicited interactions, participants' preferred interaction modality, and the reasons behind that preference. Lastly, we discuss our results and their implications for designing AR WSDs for automated vehicles.
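
The abstract does not say how the elicited proposals were scored, but elicitation studies in this lineage conventionally report a per-referent agreement measure over proposals that have been open-coded into equivalence classes. The sketch below is a minimal illustration of that convention, not the authors' method: the labels and the example referent are hypothetical, and the formula is the Vatavu-Wobbrock agreement rate commonly used in this literature.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent (Vatavu-Wobbrock measure).

    `proposals` holds one label per participant; identical labels mean
    those participants proposed the same gesture or voice command. The
    grouping into labels is assumed to come from prior open coding.
    """
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals)
    # Pairs of participants in agreement, over all possible pairs.
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

# Hypothetical referent "accept incoming call" with 5 coded proposals:
# three participants agree on one gesture, two on another.
print(agreement_rate(["swipe right", "swipe right", "swipe right",
                      "point at icon", "point at icon"]))  # 0.4
```

Averaging this quantity over all referents (here, the 24 tasks) would give a single agreement score per modality, which is how such studies typically compare gesture and voice proposals.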




Information & Contributors

      Published In

      AutomotiveUI '22: Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
      September 2022
      371 pages
ISBN: 9781450394154
DOI: 10.1145/3543174
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 17 September 2022


      Author Tags

      1. Windshield display
      2. automated driving
      3. gesture
      4. head-up display
      5. voice commands

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      AutomotiveUI '22

      Acceptance Rates

      Overall Acceptance Rate 248 of 566 submissions, 44%


Bibliometrics & Citations

Article Metrics

• Downloads (last 12 months): 304
• Downloads (last 6 weeks): 37
      Reflects downloads up to 20 Feb 2025

Cited By

• (2024) Move, Connect, Interact: Introducing a Design Space for Cross-Traffic Interaction. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(3), 1-40. DOI: 10.1145/3678580. Online publication date: 9-Sep-2024.
• (2024) Approaching Intelligent In-vehicle Infotainment Systems through Fusion Visual-Speech Multimodal Interaction: A State-of-the-Art Review. Proceedings of the European Conference on Cognitive Ergonomics 2024, 1-7. DOI: 10.1145/3673805.3673818. Online publication date: 8-Oct-2024.
• (2024) Exploring Methods to Optimize Gesture Elicitation Studies: A Systematic Literature Review. IEEE Access 12, 64958-64979. DOI: 10.1109/ACCESS.2024.3387269. Online publication date: 2024.
• (2023) An Empirical Comparison of Moderated and Unmoderated Gesture Elicitation Studies on Soft Surfaces and Objects for Smart Home Control. Proceedings of the ACM on Human-Computer Interaction 7(MHCI), 1-24. DOI: 10.1145/3604245. Online publication date: 13-Sep-2023.
• (2023) A Qualitative Study on the Expectations and Concerns Around Voice and Gesture Interactions in Vehicles. Proceedings of the 2023 ACM Designing Interactive Systems Conference, 2155-2171. DOI: 10.1145/3563657.3596040. Online publication date: 10-Jul-2023.
• (2023) Factors Affecting the Results of Gesture Elicitation: A Review. 2023 11th International Conference in Software Engineering Research and Innovation (CONISOFT), 169-176. DOI: 10.1109/CONISOFT58849.2023.00030. Online publication date: 6-Nov-2023.
• (2023) Cueing Car Drivers with Ultrasound Skin Stimulation. HCI in Mobility, Transport, and Automotive Systems, 224-244. DOI: 10.1007/978-3-031-35908-8_16. Online publication date: 23-Jul-2023.
