Creating Geopositioned 3D Areas of Interest from Fleet Gaze Data

Conference paper in HCI in Mobility, Transport, and Automotive Systems (HCII 2022).

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13335)

Abstract

Different approaches are followed to observe drivers’ attention levels. They include simple methods that count driver input changes [6], machine learning approaches based on driver input [17], and methods that consider additional inputs such as environmental data and eye tracking data [3, 4, 5, 7, 12, 16]. Recent studies have proposed geopositioned 3D AOIs as a tool for observing driver intention. Geopositioned 3D AOIs are three-dimensional areas (boxes) with fixed geopositions (e.g., GPS coordinates) that have to be observed for the safe completion of driving maneuvers. Examples are pedestrian waiting areas, crosswalks, and traffic lights. Creating these AOIs by hand is tedious and error-prone, as the manually created AOIs might differ from the real AOIs drivers look at. We therefore propose a pipeline to generate real 3D AOIs from gaze clouds. To generate relevant gaze clouds, we use the points of closest encounter in fleet gaze data collected in a driving simulator setup. The results show that generating 3D AOIs from fleet data is possible and that the created AOIs are largely consistent with the expected AOIs.
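
The chapter itself is paywalled, so the following is only a minimal, hypothetical sketch of the idea outlined in the abstract, not the authors’ actual pipeline: it clusters a geopositioned 3D gaze point cloud with OPTICS (the clustering algorithm cited as [1]) and derives one axis-aligned bounding box per cluster as a candidate AOI. The function name, parameters, and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import OPTICS

def aois_from_gaze_points(points, min_samples=20):
    """Cluster geopositioned 3D gaze points and return one
    axis-aligned bounding box (min corner, max corner) per cluster."""
    labels = OPTICS(min_samples=min_samples).fit(points).labels_
    boxes = []
    for label in set(labels):
        if label == -1:  # OPTICS marks noise points with the label -1
            continue
        cluster = points[labels == label]
        boxes.append((cluster.min(axis=0), cluster.max(axis=0)))
    return boxes

# Synthetic gaze cloud concentrated around two hypothetical AOIs
# (coordinates are made up; real input would be geopositioned gaze
# points aggregated across the fleet).
rng = np.random.default_rng(0)
cloud = np.vstack([
    rng.normal([10.0, 5.0, 3.0], 0.3, size=(200, 3)),   # e.g. a traffic light
    rng.normal([2.0, -4.0, 0.5], 0.4, size=(200, 3)),   # e.g. a crosswalk
])
for lo, hi in aois_from_gaze_points(cloud):
    print("AOI box from", np.round(lo, 2), "to", np.round(hi, 2))
```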

References

  1. Ankerst, M., Breunig, M.M., Kriegel, H.-P., Sander, J.: OPTICS: ordering points to identify the clustering structure. ACM SIGMOD Rec. 28(2), 49–60 (1999). https://doi.org/10.1145/304181.304187

  2. Bickerdt, J., Sonnenberg, J., Gollnick, C., Kasneci, E.: Geopositioned 3D areas of interest for gaze analysis, pp. 1–11, September 2021. https://doi.org/10.1145/3409118.3475138

  3. Bozkir, E., Geisler, D., Kasneci, E.: Assessment of driver attention during a safety critical situation in VR to generate VR-based training. In: Neyret, S., Kokkinara, E., Franco, M.G., Hoyet, L., Cunningham, D.W., Świdrak, J. (eds.) SAP 2019: ACM Symposium on Applied Perception 2019, pp. 1–5 (2019). https://doi.org/10.1145/3343036.3343138

  4. Braunagel, C., Kasneci, E., Stolzmann, W., Rosenstiel, W.: Driver-activity recognition in the context of conditionally autonomous driving. In: 2015 IEEE 18th International Conference on Intelligent Transportation Systems - (ITSC 2015), pp. 1652–1657 (2015). https://doi.org/10.1109/ITSC.2015.268

  5. Doshi, A., Trivedi, M.M.: Investigating the relationships between gaze patterns, dynamic vehicle surround analysis, and driver intentions. In: IEEE Intelligent Vehicles Symposium. IEEE (2009). https://doi.org/10.1109/IVS.2009.5164397, https://ieeexplore.ieee.org/abstract/document/5164397

  6. Fletcher, L., Loy, G., Barnes, N., Zelinsky, A.: Correlating driver gaze with the road scene for driver assistance systems. Robot. Auton. Syst. 52(1), 71–84 (2005). https://doi.org/10.1016/j.robot.2005.03.010

  7. Fletcher, L., Zelinsky, A.: Driver inattention detection based on eye gaze–road event correlation. Int. J. Robot. Res. 28(6), 774–801 (2009). https://doi.org/10.1177/0278364908099459

  8. Geruschat, D.R., Hassan, S.E., Turano, K.A.: Gaze behavior while crossing complex intersections. Optom. Vis. Sci. 80, 515–528 (2003). https://doi.org/10.1097/00006324-200307000-00013, https://pubmed.ncbi.nlm.nih.gov/12858087/

  9. Kircher, K., Ahlstrom, C.: Minimum required attention: a human-centered approach to driver inattention. Hum. Factors 59(3), 471–484 (2017). https://doi.org/10.1177/0018720816672756

  10. Lemonnier, S., Brémond, R., Baccino, T.: Gaze behavior when approaching an intersection: dwell time distribution and comparison with a quantitative prediction. Transp. Res. Part F: Traffic Psychol. Behav. 35(4), 60–74 (2015). https://doi.org/10.1016/j.trf.2015.10.015

  11. Lemonnier, S., Désiré, L., Brémond, R., Baccino, T.: Drivers’ visual attention: a field study at intersections. Transp. Res. Part F: Traffic Psychol. Behav. 69, 206–221 (2020). https://doi.org/10.1016/j.trf.2020.01.012, https://www.sciencedirect.com/science/article/pii/S1369847819301597

  12. Mavely, A.G., Judith, J.E., Sahal, P.A., Kuruvilla, S.A.: Eye gaze tracking based driver monitoring system. In: 2017 IEEE International Conference on Circuits and Systems (ICCS), pp. 364–367 (2017). https://doi.org/10.1109/ICCS1.2017.8326022

  13. Najdataei, H., Nikolakopoulos, Y., Gulisano, V., Papatriantafilou, M.: Continuous and parallel lidar point-cloud clustering. In: IEEE International Conference on Distributed Computing Systems (ICDCS), vol. 38. IEEE (2018). https://ieeexplore.ieee.org/document/8416334

  14. Euro NCAP: Euro NCAP Roadmap 2025, version 4 (2017). https://cdn.euroncap.com/media/30700/euroncap-roadmap-2025-v4.pdf

  15. Euro NCAP: Assessment protocol - safety assist (2019). https://cdn.euroncap.com/media/53156/euro-ncap-assessment-protocol-sa-v902.pdf

  16. Rong, Y., Akata, Z., Kasneci, E.: Driver intention anticipation based on in-cabin and driving scene monitoring. In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pp. 1–8 (2020). https://doi.org/10.1109/ITSC45102.2020.9294181

  17. Sayed, R., Eskandarian, A.: Unobtrusive drowsiness detection by neural network learning of driver steering. Proc. Inst. Mech. Eng. Part D: J. Automob. Eng. 215(9), 969–975 (2001). https://doi.org/10.1243/0954407011528536

  18. Smart Eye: Smart Eye Pro (2014). http://smarteye.se/wp-content/uploads/2014/12/Smart-Eye-Pro.pdf

  19. Smart Eye: Smart Eye Pro (2019). https://smarteye.se/research-instruments/se-pro/

  20. Song, H., Feng, H.Y.: A global clustering approach to point cloud simplification with a specified data reduction ratio. Comput.-Aided Design 40, 281–292 (2008). https://doi.org/10.1016/j.cad.2007.10.013, https://www.sciencedirect.com/science/article/pii/S0010448507002448

  21. European Union: Regulation (EU) 2019/2144 of the European Parliament and of the Council. Official Journal of the European Union (2019)

  22. Wang, H., Antonelli, M., Shi, B.E.: Using point cloud data to improve three dimensional gaze estimation. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 39. IEEE (2017). https://doi.org/10.1109/EMBC.2017.8036944, https://ieeexplore.ieee.org/abstract/document/8036944

  23. Werneke, J., Vollrath, M.: Where did the car come from? Attention allocation at intersections. In: Modelling of drivers’ behaviour for ITS design. Loughborough University (2012). https://www.humanist-vce.eu/fileadmin/contributeurs/humanist/Berlin2010/2b_Werneke.pdf

Author information

Corresponding author

Correspondence to Jan Bickerdt.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Bickerdt, J., Gollnick, C., Sonnenberg, J., Kasneci, E. (2022). Creating Geopositioned 3D Areas of Interest from Fleet Gaze Data. In: Krömker, H. (ed.) HCI in Mobility, Transport, and Automotive Systems. HCII 2022. Lecture Notes in Computer Science, vol 13335. Springer, Cham. https://doi.org/10.1007/978-3-031-04987-3_2

  • DOI: https://doi.org/10.1007/978-3-031-04987-3_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-04986-6

  • Online ISBN: 978-3-031-04987-3

  • eBook Packages: Computer Science, Computer Science (R0)
