Abstract
The importance of traffic signs cannot be overstated when it comes to road safety, yet building a fast and accurate traffic sign classifier remains challenging because of the variety of sign shapes and forms. In this paper, a real-time detector is presented for the German Traffic Sign Recognition Benchmark (GTSRB), which contains 43 classes with varied shapes, forms, and colours. The visual similarity between classes helps with object localisation but hinders sign classification. The detector is built on an upgraded, compact YOLO-V4 technique and implemented on the new NVIDIA Jetson Nano, and a compact, efficient classifier is introduced to locate and classify GTSRB images. For the first time, this paper compares traffic sign detection and classification using YOLO-V3 and YOLO-V4 in both their regular and tiny variants.
Because most real-time detection algorithms demand considerable processing power, the proposed compact classifier, based on the new YOLO-V4 Tiny, can recognize all 43 traffic sign classes with an average accuracy of 95.44% and a model size of only 9 MB. The approach was validated on the GTSRB test dataset and then deployed on the new Jetson Nano. Compared with existing algorithms such as CNN, YOLO-V3, YOLO-V4, and Faster R-CNN, the proposed technique can save considerably more computational power and processing time.
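To make the deployment scenario concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how a YOLO-V4 Tiny style model trained on the 43 GTSRB classes could be run for inference with OpenCV's DNN module, as is commonly done on a Jetson Nano. The configuration and weight file names, the input size, and the thresholds are hypothetical placeholders rather than values taken from the paper.

```python
# Minimal sketch: inference with a Darknet YOLOv4-tiny model via OpenCV's DNN API.
# File names "gtsrb-yolov4-tiny.cfg" / ".weights" are hypothetical placeholders.
import cv2

CONF_THRESHOLD = 0.5   # discard detections below this confidence
NMS_THRESHOLD = 0.4    # non-maximum suppression overlap threshold

# Load the Darknet config/weights pair produced by training.
net = cv2.dnn.readNetFromDarknet("gtsrb-yolov4-tiny.cfg", "gtsrb-yolov4-tiny.weights")
# On a Jetson Nano, the CUDA backend would typically be selected here
# (requires OpenCV built with CUDA support):
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

model = cv2.dnn_DetectionModel(net)
model.setInputParams(scale=1 / 255.0, size=(416, 416), swapRB=True)

frame = cv2.imread("test_sign.jpg")          # any GTSRB-style test image
class_ids, scores, boxes = model.detect(frame, CONF_THRESHOLD, NMS_THRESHOLD)

# Draw each detected sign with its class index and confidence.
for cid, score, box in zip(class_ids, scores, boxes):
    x, y, w, h = box
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, f"class {int(cid)}: {float(score):.2f}",
                (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imwrite("detections.jpg", frame)
```

The 416 x 416 input size and the confidence/NMS thresholds shown above are common defaults for tiny YOLO variants; an actual deployment would use whatever settings the trained model was exported with.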
Ethics declarations
Conflict of Interest and Authorship Conformation Form
• All authors have participated in (a) conception and design, or analysis and interpretation of the data; (b) drafting the article or revising it critically for important intellectual content; and (c) approval of the final version.
• This manuscript has not been submitted to, nor is under review at, another journal or other publishing venue.
• The authors have no affiliation with any organization with a direct or indirect financial interest in the subject matter discussed in the manuscript.
• The following authors have affiliations with organizations with direct or indirect financial interest in the subject matter discussed in the manuscript:
| Author's name | Affiliation | Email |
|---|---|---|
| Khaled Khnissi | ENSIT | khaledkhnissi@gmail.com |
| Chiraz Ben Jabeur | ENSIT | chirazbenjabeur@gmail.com |
| Hassene Seddik | ENSIT | seddikhassne@gmail.com |
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Khnissi, K., Jabeur, C.B. & Seddik, H. Implementation of a Compact Traffic Signs Recognition System Using a New Squeezed YOLO. Int. J. ITS Res. 20, 466–482 (2022). https://doi.org/10.1007/s13177-022-00304-6