DOI: 10.1145/3239060.3239084
Research Article

I See Your Point: Integrating Gaze to Enhance Pointing Gesture Accuracy While Driving

Published: 23 September 2018

ABSTRACT

Mid-air pointing gestures enable drivers to interact with a wide range of vehicle functions without having to learn a specific set of gestures. Sufficient pointing accuracy is needed so that targeted elements can be correctly identified. However, people make relatively large pointing errors, especially in demanding situations such as driving a car. Eye gaze provides additional information about the driver's focus of attention that can be used to compensate for imprecise pointing. We present a practical implementation of an algorithm that integrates gaze data to increase the accuracy of pointing gestures. A user experiment with 91 participants showed that our approach led to an overall increase in pointing accuracy. However, the benefits depended on the participants' initial gesture performance and on the position of the target elements. The results indicate great potential to support gesture accuracy, but also the need for a more sophisticated fusion algorithm.
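The abstract does not spell out the fusion algorithm, but the core idea it describes — using gaze as a second estimate of the intended target to compensate for an imprecise pointing gesture — can be illustrated with a minimal sketch. The weighted-blend scheme, the function name, and all parameters below are assumptions for illustration, not the authors' actual method:

```python
import math

def fuse_pointing_and_gaze(point_xy, gaze_xy, targets, gaze_weight=0.5):
    """Pick the target best matching a weighted blend of pointing and gaze.

    point_xy    -- (x, y) where the pointing ray intersects the display
    gaze_xy     -- (x, y) of the current gaze fixation on the display
    targets     -- dict mapping target name -> (x, y) centre
    gaze_weight -- 0.0 = pointing only, 1.0 = gaze only (assumed parameter)
    """
    # Blend the two modality estimates into one corrected position.
    fx = (1 - gaze_weight) * point_xy[0] + gaze_weight * gaze_xy[0]
    fy = (1 - gaze_weight) * point_xy[1] + gaze_weight * gaze_xy[1]
    # Select the target centre nearest to the fused estimate.
    return min(targets, key=lambda t: math.hypot(targets[t][0] - fx,
                                                 targets[t][1] - fy))
```

With an imprecise gesture that lands closer to the wrong target, a gaze fixation near the intended element can pull the fused estimate back onto it; the abstract's finding that benefits varied with gesture performance and target position suggests the real algorithm weights the modalities more adaptively than this fixed blend.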


Published in

AutomotiveUI '18: Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
September 2018 · 374 pages
ISBN: 9781450359467
DOI: 10.1145/3239060
Copyright © 2018 ACM

          Publisher

          Association for Computing Machinery

          New York, NY, United States


          Qualifiers

          • research-article
          • Research
          • Refereed limited

Acceptance Rates

Overall acceptance rate: 248 of 566 submissions, 44%

