ABSTRACT
Driving is a complex, continuous, multitask process that involves the driver's cognition, perception, and motor movements. The way road traffic signs and vehicle information are displayed strongly impacts the driver's attention, and increased mental workload raises safety concerns. Drivers must keep their eyes on the road, but they can always use assistance in maintaining awareness and directing their attention to emerging hazards. Research in perceptual and human-factors assessment is needed so that this information is displayed relevantly and correctly, for maximal road traffic safety as well as optimal driver comfort. In-vehicle contextual Augmented Reality (AR) has the potential to provide novel visual feedback to drivers for an enhanced driving experience. In this paper, we present a new real-time framework for fast and accurate traffic sign recognition, based on a Haar cascade, deep learning, and AR, which superimposes virtual objects onto the real scene under all types of driving situations, including unfavorable weather conditions. Experimental results show that combining the Haar cascade with deep convolutional neural networks greatly enhances detection capability while retaining real-time performance.
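The two-stage pipeline the abstract describes — a fast Haar-cascade stage that proposes candidate regions, followed by a deep CNN that classifies each candidate — can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: `detect_candidates` and `classify_patch` are hypothetical stand-ins for an OpenCV `CascadeClassifier` and a trained convolutional network, respectively.

```python
# Sketch of a two-stage traffic-sign recognizer:
#   stage 1: a cheap cascade detector proposes candidate boxes,
#   stage 2: a CNN classifier confirms or rejects each candidate.
# The two stages are passed in as callables so the pipeline logic
# stays independent of any particular detector or network.

from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height

@dataclass
class Detection:
    box: Box
    label: str
    confidence: float

def recognize_signs(
    frame: object,
    detect_candidates: Callable[[object], List[Box]],
    classify_patch: Callable[[object, Box], Tuple[str, float]],
    min_confidence: float = 0.9,
) -> List[Detection]:
    """Run the cascade proposal stage, then keep only candidates the
    classifier scores above `min_confidence`. Rejecting low-confidence
    proposals is what lets the joint pipeline improve accuracy without
    giving up the cascade's real-time speed."""
    detections: List[Detection] = []
    for box in detect_candidates(frame):
        label, conf = classify_patch(frame, box)
        if conf >= min_confidence:
            detections.append(Detection(box, label, conf))
    return detections

# Toy usage with stub stages (hypothetical labels and scores):
frame = object()  # stand-in for a video frame
candidates = [(10, 10, 32, 32), (50, 50, 32, 32)]
scores = {
    (10, 10, 32, 32): ("speed_limit_50", 0.97),
    (50, 50, 32, 32): ("background", 0.30),
}
result = recognize_signs(
    frame,
    detect_candidates=lambda f: candidates,
    classify_patch=lambda f, b: scores[b],
)
# Only the high-confidence candidate survives the CNN stage.
```

In a real system, stage 1 would be something like `cv2.CascadeClassifier.detectMultiScale` over grayscale frames, and stage 2 a forward pass of the trained network over each cropped, resized patch.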
Index Terms
- Deep learning traffic sign detection, recognition and augmentation