ABSTRACT
Various on-road situations place demands on the driver that go beyond the basic task of driving, and thus affect how appropriate different in-vehicle input modalities are for operating secondary tasks. In this work, we assess the specific impact of situational demands on gaze, gesture, and speech input with respect to driving performance, interaction efficiency, and subjective ratings. An experiment with 29 participants in a driving simulator revealed significant interactions between situational demand and input modality on secondary-task completion times, perceived suitability, and cognitive workload. Impairments were greatest when the situational demand addressed the same sensory channel as the input modality, and this effect was reflected differently in objective and subjective data depending on the modality. With this work, we explore the performance of natural input modalities across different situations and thereby support interaction designers who plan to integrate these modalities into automotive interaction concepts.
Index Terms
- The Effects of Situational Demands on Gaze, Speech and Gesture Input in the Vehicle