Research Article
DOI: 10.1145/3019612.3019643

Deep learning traffic sign detection, recognition and augmentation

Published: 03 April 2017

ABSTRACT

Driving is a complex, continuous, multitask process that involves the driver's cognition, perception, and motor movements. The way road traffic signs and vehicle information are displayed strongly impacts the driver's attention, and the increased mental workload leads to safety concerns. Drivers must keep their eyes on the road, but they can always use assistance in maintaining awareness and directing their attention to emerging hazards. Research in perceptual and human-factors assessment is needed so that this information is displayed in a relevant and correct way, for maximal road traffic safety as well as optimal driver comfort. In-vehicle contextual Augmented Reality (AR) has the potential to provide novel visual feedback to drivers for an enhanced driving experience. In this paper, we present a new real-time framework for fast and accurate traffic sign recognition, based on cascaded deep learning and AR, which superimposes augmented virtual objects onto the real scene under all types of driving situations, including unfavorable weather conditions. Experimental results show that, by combining the Haar cascade and deep convolutional neural networks, the joint learning greatly enhances detection capability while retaining real-time performance.
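To make the two-stage idea from the abstract concrete, the sketch below pairs an OpenCV Haar cascade (a fast candidate detector) with a small convolutional network that classifies each candidate crop, then draws an AR-style overlay on the frame. This is only an illustrative sketch, not the authors' implementation: the cascade file "sign_cascade.xml", the network architecture, the 43-class label set (a GTSRB-style assumption), and the input/output file names are placeholders.

```python
# Hedged sketch: two-stage traffic-sign pipeline (Haar cascade detector + CNN classifier).
# The cascade XML path, CNN architecture, and class count are illustrative assumptions only.
import cv2
import numpy as np
import torch
import torch.nn as nn

NUM_CLASSES = 43  # assumption: GTSRB-style label set


class SignClassifier(nn.Module):
    """Small CNN that classifies 32x32 RGB crops of candidate sign regions."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


def detect_and_classify(frame_bgr: np.ndarray,
                        cascade: cv2.CascadeClassifier,
                        model: SignClassifier):
    """Stage 1: cheap Haar-cascade proposals. Stage 2: CNN classification of each crop."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(24, 24))
    results = []
    for (x, y, w, h) in boxes:
        x, y, w, h = int(x), int(y), int(w), int(h)
        crop = cv2.resize(frame_bgr[y:y + h, x:x + w], (32, 32))
        # BGR -> RGB, HWC -> CHW, scale to [0, 1].
        tensor = torch.from_numpy(crop[:, :, ::-1].copy()).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            label = int(model(tensor).argmax(dim=1))
        results.append(((x, y, w, h), label))
    return results


if __name__ == "__main__":
    # "sign_cascade.xml" is a placeholder for a cascade trained on traffic-sign positives.
    cascade = cv2.CascadeClassifier("sign_cascade.xml")
    model = SignClassifier().eval()  # untrained here; weights would come from benchmark training
    frame = cv2.imread("road_scene.jpg")
    if frame is not None and not cascade.empty():
        for (x, y, w, h), label in detect_and_classify(frame, cascade, model):
            # AR-style overlay: draw the detection box and its predicted class id on the frame.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, f"class {label}", (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imwrite("annotated.jpg", frame)
```

In a deployment along the lines the paper describes, the cascade keeps the per-frame cost low by proposing only a few candidate regions, while the convolutional network handles fine-grained recognition of those regions before the augmented overlay is rendered.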


Published in

SAC '17: Proceedings of the Symposium on Applied Computing
April 2017, 2004 pages
ISBN: 9781450344869
DOI: 10.1145/3019612

        Copyright © 2017 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States




Acceptance Rates

Overall Acceptance Rate: 1,650 of 6,669 submissions (25%)
