DOI: 10.1145/3448018.3457993
Short Paper

55 Rides: attention annotated head and gaze data during naturalistic driving

Published: 25 May 2021

ABSTRACT

Trained eye movement patterns are essential for safe driving. Whether exploring the surrounding traffic or checking over the shoulder that a lane is clear, quick and effective perception is key to driving safety. Surprisingly, though, freely and openly accessible data on gaze behavior during driving is still extremely sparse. The environment inside a vehicle is challenging for eye-tracking technology due to rapidly changing illumination conditions (e.g., exiting a tunnel into bright sunlight), the difficulty of proper calibration, and safety constraints. The data available so far was recorded in environments that likely influence viewing behavior, sometimes dramatically (e.g., driving simulators without mirrors or with a limited field of view).

We present crowd-sourced eye-tracking data collected during real-world driving, using NIR cameras and illuminators placed inside the driver's cabin. We analyze this data with a deep-learning, appearance-based gaze estimation approach; the raw videos are not part of the data set due to legal restrictions. Our data set contains four different drivers in their habitual cars and 55 rides with an average length of 30 minutes. At least three human raters continuously rated each ride with regard to the driver's attention and vigilance level on a ten-point scale. From the recorded videos we extracted the drivers' head and eye movements as well as the eye opening angle. We normalize this data with respect to the different placements of the driver-monitoring camera and demonstrate a baseline for driver attention monitoring based on eye-gaze and head-movement features.
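To make the camera-placement normalization concrete, below is a minimal sketch of one plausible approach: rotating per-camera gaze and head direction estimates into a common, vehicle-fixed coordinate frame so that features stay comparable across differently mounted driver-monitoring cameras. This is an illustration under assumed conventions, not the authors' implementation; the function name, the rotation matrix R_cam2veh, and the sample values are hypothetical.

```python
# Hypothetical sketch: map direction vectors from each camera's frame into a
# shared vehicle frame. The paper's actual normalization procedure is not
# specified in the abstract; R_cam2veh stands in for whatever per-mounting
# calibration it uses.
import numpy as np

def normalize_directions(directions_cam: np.ndarray,
                         R_cam2veh: np.ndarray) -> np.ndarray:
    """Rotate (N, 3) gaze/head direction vectors from camera to vehicle
    coordinates and re-normalize them to unit length."""
    directions_veh = directions_cam @ R_cam2veh.T
    return directions_veh / np.linalg.norm(directions_veh, axis=1, keepdims=True)

# Example: a camera mounted with a 20-degree yaw offset toward the driver.
yaw = np.deg2rad(20.0)
R_cam2veh = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
                      [ 0.0,         1.0, 0.0        ],
                      [-np.sin(yaw), 0.0, np.cos(yaw)]])
gaze_cam = np.array([[0.0, 0.0, 1.0]])  # looking straight along the camera axis
print(normalize_directions(gaze_cam, R_cam2veh))  # same gaze in the vehicle frame
```

With one such rotation per camera mounting, every ride's head and gaze vectors end up in the same reference frame before attention features are computed, which would let a single baseline model operate across all four cars.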


Published in
ETRA '21 Short Papers: ACM Symposium on Eye Tracking Research and Applications
May 2021, 232 pages
ISBN: 9781450383455
DOI: 10.1145/3448018
Copyright © 2021 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
