Saliency Detection in a Virtual Driving Environment for Autonomous Vehicle Behavior Improvement

  • Conference paper
  • Part of the proceedings: Augmented Reality, Virtual Reality, and Computer Graphics (AVR 2021)

Abstract

To make the best decisions in real-world situations, autonomous vehicles require learning algorithms that process a large number of labeled images. This paper compares automatically generated saliency maps with attention maps obtained from an eye-tracking device, with the aim of providing automated image labeling for the learning algorithm. Traffic scenarios are simulated in a virtual driving environment with a motion platform, and an eye-tracking device identifies the driver's attention. The saliency maps are generated by post-processing the driver's view provided by the front camera.
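
The abstract does not detail the post-processing pipeline, so the sketch below is a rough illustration only: it assumes a SmoothGrad-style gradient saliency computed over a generic pretrained VGG16 classifier, a duration-weighted Gaussian heat map built from eye-tracker fixations, and a plain Pearson correlation as the comparison measure. Every function name, parameter, and the choice of network here is an assumption, not the authors' method.

```python
# Hypothetical sketch: gradient saliency on a front-camera frame vs. a
# gaze-derived attention map. All helpers are illustrative placeholders.
import numpy as np
import tensorflow as tf
from scipy.ndimage import gaussian_filter
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

model = VGG16(weights="imagenet")  # stand-in classifier, not the paper's network

def smoothgrad_saliency(frame, n_samples=25, noise_sigma=0.15):
    """SmoothGrad: average input gradients over noisy copies of the frame."""
    x = tf.image.resize(tf.cast(frame, tf.float32), (224, 224))[tf.newaxis, ...]
    x = preprocess_input(x)
    sigma = noise_sigma * float(tf.reduce_max(x) - tf.reduce_min(x))
    grads = tf.zeros_like(x)
    for _ in range(n_samples):
        noisy = x + tf.random.normal(tf.shape(x), stddev=sigma)
        with tf.GradientTape() as tape:
            tape.watch(noisy)
            score = tf.reduce_max(model(noisy), axis=-1)  # top-class score
        grads += tape.gradient(score, noisy)
    sal = tf.reduce_max(tf.abs(grads) / n_samples, axis=-1)[0]  # collapse RGB
    sal -= tf.reduce_min(sal)
    return (sal / tf.reduce_max(sal)).numpy()  # normalized to [0, 1]

def gaze_attention_map(fixations, shape=(224, 224), sigma=12.0):
    """Render eye-tracker fixations (x, y, duration) into a smooth heat map."""
    heat = np.zeros(shape, dtype=np.float64)
    for x, y, dur in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
            heat[yi, xi] += dur  # weight each fixation by its duration
    heat = gaussian_filter(heat, sigma=sigma)
    return heat / heat.max() if heat.max() > 0 else heat

def map_similarity(saliency, attention):
    """Pearson correlation between two maps of identical shape."""
    return float(np.corrcoef(saliency.ravel(), attention.ravel())[0, 1])
```

In such a setup, the gaze map would be rendered at (or resized to) the saliency map's resolution before comparison; frames where the correlation exceeds a chosen threshold could then be labeled automatically from the saliency regions.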



Acknowledgement

This work was supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS/CCCDI – UEFISCDI, project number PN-III-P2-2.1-PED-2019-4366 within PNCDI III (431PED).

Author information

Corresponding author

Correspondence to Florin Gîrbacia.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Antonya, C., Gîrbacia, F., Postelnicu, C., Voinea, D., Butnariu, S. (2021). Saliency Detection in a Virtual Driving Environment for Autonomous Vehicle Behavior Improvement. In: De Paolis, L.T., Arpaia, P., Bourdot, P. (eds) Augmented Reality, Virtual Reality, and Computer Graphics. AVR 2021. Lecture Notes in Computer Science, vol. 12980. Springer, Cham. https://doi.org/10.1007/978-3-030-87595-4_37

  • DOI: https://doi.org/10.1007/978-3-030-87595-4_37

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87594-7

  • Online ISBN: 978-3-030-87595-4
