
In-vehicle augmented reality TSR to improve driving safety and enhance the driver’s experience

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

In-vehicle contextual augmented reality (AR) has the potential to provide novel visual feedback to drivers for an enhanced driving experience. In this paper, we propose a new AR traffic sign recognition system (AR-TSR) to improve driving safety and enhance the driver’s experience. The system builds on the Haar cascade and the bag-of-visual-words (BoVW) approach, uses spatial information to improve accuracy, and is motivated by an overview of studies on the driver’s perception and the effectiveness of AR in improving driving safety. In the first step, regions of interest (ROIs) are extracted using a scanning window with a Haar cascade detector and an AdaBoost classifier, which reduces the computational cost of the hypothesis generation step. Second, we propose a new, computationally efficient method that models the global spatial distribution of visual words by taking their spatial relationships into account. Finally, a multiclass sign classifier takes the positive ROIs and assigns a 3D traffic sign to each one using a linear SVM. Experimental results show that the suggested method reaches performance comparable to state-of-the-art approaches with lower computational complexity and shorter training time, and that the AR-TSR more strongly influences the allocation of visual attention during the decision-making phase.
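The spatial modeling step described above can be illustrated with a minimal sketch. The helper below is hypothetical (not the paper's exact formulation): each detected keypoint inside an ROI carries an (x, y) position and a visual-word id, and in addition to the global word histogram the ROI is split into 2×2 quadrants whose per-quadrant histograms are appended, so the final descriptor retains coarse spatial layout of the visual words.

```python
def spatial_bovw_histogram(keypoints, vocab_size, width, height):
    """Build a spatial BoVW descriptor for one ROI.

    keypoints: list of (x, y, word_id) tuples inside a width x height ROI.
    Returns the concatenation of one global histogram and four
    quadrant histograms, each of length vocab_size.
    """
    # one global histogram (level 0) + four quadrant histograms (level 1)
    hists = [[0] * vocab_size for _ in range(5)]
    for x, y, w in keypoints:
        hists[0][w] += 1                  # level 0: whole ROI
        qx = 0 if x < width / 2 else 1    # quadrant column
        qy = 0 if y < height / 2 else 1   # quadrant row
        hists[1 + 2 * qy + qx][w] += 1    # level 1: 2x2 grid
    # concatenate into a single descriptor vector for the linear SVM
    return [v for h in hists for v in h]

# Example: 3 keypoints, vocabulary of 4 words, 100x100 ROI
kps = [(10, 10, 0), (90, 10, 1), (90, 90, 1)]
desc = spatial_bovw_histogram(kps, vocab_size=4, width=100, height=100)
print(len(desc))   # 5 histograms x 4 words = 20
print(desc[:4])    # global histogram: [1, 2, 0, 0]
```

Two ROIs containing the same visual words in different spatial arrangements now produce different descriptors, which is what lets the classifier exploit spatial information.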



Author information

Correspondence to Lotfi Abdi.


About this article


Cite this article

Abdi, L., Meddeb, A. In-vehicle augmented reality TSR to improve driving safety and enhance the driver’s experience. SIViP 12, 75–82 (2018). https://doi.org/10.1007/s11760-017-1132-5

