ABSTRACT
Mid-air pointing gestures enable drivers to interact with a wide range of vehicle functions without requiring them to learn a specific set of gestures. Sufficient pointing accuracy is needed so that targeted elements can be correctly identified. However, people make relatively large pointing errors, especially in demanding situations such as driving a car. Eye gaze provides additional information about the driver's focus of attention that can be used to compensate for imprecise pointing. We present a practical implementation of an algorithm that integrates gaze data to increase the accuracy of pointing gestures. A user experiment with 91 participants showed that our approach led to an overall increase in pointing accuracy. However, the benefits depended on the participants' initial gesture performance and on the position of the target elements. The results indicate great potential for supporting gesture accuracy, but also the need for a more sophisticated fusion algorithm.
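To illustrate the general idea, the sketch below shows one simple way gaze could be fused with a pointing estimate: both modalities are assumed to be projected onto the same 2D display plane, the two coordinates are combined by a weighted average, and the nearest target element is selected. This is a minimal illustration only, not the paper's actual fusion algorithm; the weighting split and the `fuse_estimates`/`select_target` helpers are assumptions introduced here.

```python
# Minimal sketch of gaze-assisted pointing correction (illustrative only;
# not the fusion algorithm evaluated in the paper). Both the pointing ray
# and the gaze ray are assumed to be pre-projected onto display coordinates.
import math

# Hypothetical weighting: gaze is trusted more than the (noisier) pointing
# ray. The 0.7/0.3 split is an assumption, not a measured value.
GAZE_WEIGHT = 0.7
POINT_WEIGHT = 1.0 - GAZE_WEIGHT

def fuse_estimates(point_xy, gaze_xy):
    """Weighted average of the pointing and gaze display coordinates."""
    return (POINT_WEIGHT * point_xy[0] + GAZE_WEIGHT * gaze_xy[0],
            POINT_WEIGHT * point_xy[1] + GAZE_WEIGHT * gaze_xy[1])

def select_target(fused_xy, targets):
    """Return the target element whose center is closest to the fused point."""
    return min(targets, key=lambda t: math.dist(fused_xy, t["center"]))

# Usage: the pointing ray lands at (420, 310) while the driver fixates
# (390, 295); the fused estimate snaps selection to the intended element.
targets = [{"id": "volume", "center": (400, 300)},
           {"id": "map", "center": (480, 320)}]
fused = fuse_estimates((420, 310), (390, 295))
print(select_target(fused, targets)["id"])  # -> "volume"
```

A fixed weighting like this is the simplest possible fusion; the abstract's closing remark suggests that in practice the weights would need to adapt, for example to per-user pointing performance or target position.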