
In-vehicle augmented reality system to provide driving safety information

  • Regular Paper
  • Published in: Journal of Visualization

Abstract

Improving traffic safety is one of the central goals of intelligent transportation systems. Traffic signs play a vital role in safe driving and in avoiding accidents by informing the driver about speed limits or possible dangers such as icy roads, imminent road works or pedestrian crossings. In-vehicle contextual augmented reality (AR) has the potential to provide novel visual feedback to drivers for an enhanced driving experience. In this paper, we propose a new AR traffic sign recognition system (AR-TSR) to improve driving safety and enhance the driver's experience. The system is based on the Haar cascade and the bag-of-visual-words approach, using spatial information to improve accuracy, and we also give an overview of studies related to the driver's perception and the effectiveness of AR in improving driving safety. In the first step, regions of interest (ROIs) are extracted using a scanning window with a Haar cascade detector and an AdaBoost classifier, reducing the computational cost of the hypothesis-generation step. Second, we propose a new computationally efficient method to model the global spatial distribution of visual words by taking their spatial relationships into account. Finally, a multiclass sign classifier takes the positive ROIs and assigns a 3D traffic sign to each one using a linear SVM. Experimental results show that the suggested method reaches performance comparable to state-of-the-art approaches with lower computational complexity and shorter training time, and that the AR-TSR more strongly impacts the allocation of visual attention during the decision-making phase.
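The recognition pipeline summarised above (visual-word quantisation, spatially aware histogram encoding, linear-SVM classification) can be sketched in a few lines. This is only an illustrative approximation, not the authors' implementation: a plain 1×1 + 2×2 spatial-pyramid histogram stands in for the paper's global spatial-distribution model, synthetic Gaussian descriptors replace real ROI features, and the `spatial_bovw` helper is a hypothetical name introduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def spatial_bovw(descriptors, positions, kmeans, levels=(1, 2)):
    """Encode one ROI as a concatenated, spatially binned visual-word histogram.

    descriptors: (n, d) local feature descriptors for the ROI
    positions:   (n, 2) keypoint coordinates normalised to [0, 1)
    levels:      grid sizes of the spatial pyramid (1x1 and 2x2 here)
    """
    k = kmeans.n_clusters
    words = kmeans.predict(descriptors)          # quantise to visual words
    hist = []
    for g in levels:
        cells = np.minimum((positions * g).astype(int), g - 1)
        cell_id = cells[:, 0] * g + cells[:, 1]  # which grid cell each word falls in
        for c in range(g * g):
            hist.append(np.bincount(words[cell_id == c], minlength=k))
    hist = np.concatenate(hist).astype(float)
    return hist / max(hist.sum(), 1.0)           # L1-normalise

# Synthetic stand-in data: 3 "sign classes", 40 ROIs each, 50 keypoints per ROI.
def make_roi(cls):
    desc = rng.normal(cls, 0.3, size=(50, 8))    # class-dependent descriptors
    pos = rng.random((50, 2))
    return desc, pos

rois = [(make_roi(c), c) for c in range(3) for _ in range(40)]
all_desc = np.vstack([d for (d, _), _ in rois])
kmeans = KMeans(n_clusters=16, n_init=3, random_state=0).fit(all_desc)

X = np.array([spatial_bovw(d, p, kmeans) for (d, p), _ in rois])
y = np.array([c for _, c in rois])

clf = LinearSVC().fit(X[::2], y[::2])            # train on half the ROIs
acc = clf.score(X[1::2], y[1::2])                # evaluate on the other half
print(f"hold-out accuracy: {acc:.2f}")
```

The spatial cells are what distinguish this encoding from a plain bag of words: two ROIs with the same word counts but different layouts (e.g. a red border around a white centre versus the reverse) produce different concatenated histograms.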



[Figs. 1–13: images not included in this extract]



Author information

Corresponding author

Correspondence to Lotfi Abdi.

About this article

Cite this article

Abdi, L., Meddeb, A. In-vehicle augmented reality system to provide driving safety information. J Vis 21, 163–184 (2018). https://doi.org/10.1007/s12650-017-0442-6

