ABSTRACT
Trained eye movement patterns are essential for safe driving. Whether exploring the surrounding traffic or checking over the shoulder that a lane is clear, quick and effective visual perception is key to driving safety. Surprisingly, however, freely and openly accessible data on gaze behavior during driving are still extremely sparse. The environment inside a vehicle is challenging for eye-tracking technology: illumination conditions change rapidly (e.g., when exiting a tunnel into bright sunlight), proper calibration is difficult, and safety must not be compromised. Moreover, the data available so far were recorded in environments that likely influence viewing behavior, sometimes dramatically (e.g., driving simulators without mirrors or with a limited field of view).
We present crowd-sourced eye-tracking data collected during real-world driving using NIR cameras and illuminators placed inside the driver’s cabin. We analyze this data with a deep appearance-based gaze estimation model; the raw videos are not part of the data set due to legal restrictions. Our data set contains four different drivers in their habitual cars and 55 rides with an average length of 30 minutes. At least three human raters continuously rated each ride with regard to driver attention and vigilance on a ten-point scale. From the recorded videos we extracted the drivers’ head and eye movements as well as the eye opening angle. We apply a normalization that compensates for the different placements of the driver-monitoring camera and demonstrate a baseline for driver attention monitoring based on eye-gaze and head-movement features.
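The abstract does not specify the normalization procedure itself. As a minimal sketch, assuming each camera’s extrinsic rotation relative to a common car-fixed frame is known, per-ride gaze directions could be rotated into that shared frame before comparing rides; the function names and the frame convention (x right, y down, z forward) below are illustrative assumptions, not the paper’s implementation:

```python
import numpy as np


def normalize_gaze(gaze_cam: np.ndarray, R_cam_to_car: np.ndarray) -> np.ndarray:
    """Rotate a gaze direction from camera coordinates into a common
    car-fixed frame, so that rides recorded with differently placed
    driver-monitoring cameras become comparable."""
    g = R_cam_to_car @ gaze_cam
    return g / np.linalg.norm(g)  # re-normalize to a unit vector


def gaze_to_yaw_pitch(gaze_car: np.ndarray) -> tuple:
    """Convert a unit gaze vector (x right, y down, z forward) into
    yaw and pitch angles in degrees, a common feature representation."""
    x, y, z = gaze_car
    yaw = np.degrees(np.arctan2(x, z))
    pitch = np.degrees(np.arctan2(-y, np.hypot(x, z)))
    return yaw, pitch
```

With an identity extrinsic rotation, a gaze vector pointing straight ahead maps to zero yaw and pitch; a camera mounted with a known rotation offset is compensated by its `R_cam_to_car` matrix.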
55 Rides: Attention Annotated Head and Gaze Data During Naturalistic Driving