An edge implementation of a traffic sign detection system for Advanced Driver Assistance Systems

  • Regular Paper
  • Published in: International Journal of Intelligent Robotics and Applications

Abstract

Modern cars are equipped with many technologies that raise the safety level of both vehicles and pedestrians. Advanced driver assistance systems (ADAS) are intelligent systems built on a large number of sensors and actuators. Because of its safety impact, traffic sign detection is one of the most important ADAS functions: detecting traffic signs helps prevent accidents by ensuring that traffic rules are respected. However, building a reliable traffic sign detector suitable for implementation on edge devices such as field-programmable gate arrays (FPGAs) is a hard challenge. To this end, we propose a traffic sign detection system based on convolutional neural networks. The YOLO object detection model is used as the detection framework, and a lightweight backbone based on the SqueezeNet model is proposed to achieve high performance with a model small enough to fit into the FPGA's memory. Several optimization techniques, such as pruning and quantization, are applied to compress the model. The proposed traffic sign detection system was implemented on the PYNQ-Z1 platform, and the model was trained and evaluated on the Chinese Traffic Sign Detection (CTSD) dataset. The proposed model achieved a mean average precision (mAP) of 96% and a processing speed of 16 FPS. These results demonstrate the efficiency of the proposed model in terms of speed and accuracy while remaining suitable for implementation on an edge device.
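To make the pipeline concrete, the sketch below shows, in PyTorch, how a SqueezeNet-style fire module can serve as the building block of a lightweight detection backbone, followed by magnitude-based pruning as one of the compression steps the abstract mentions. This is a minimal illustrative sketch, not the authors' implementation: the fire-module structure follows the SqueezeNet design, but the channel counts, layer arrangement, input resolution, and 30% pruning ratio are assumptions chosen for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class Fire(nn.Module):
    """SqueezeNet-style fire module: a 1x1 'squeeze' convolution followed by
    parallel 1x1 and 3x3 'expand' convolutions whose outputs are concatenated."""
    def __init__(self, in_ch: int, squeeze_ch: int, expand_ch: int):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return self.relu(torch.cat([self.expand1x1(x), self.expand3x3(x)], dim=1))

# Hypothetical lightweight backbone: strided convolutions handle downsampling
# (channel counts are placeholders, not the paper's exact configuration).
backbone = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
    Fire(64, 16, 64),                                        # -> 128 channels
    nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
    Fire(128, 32, 128),                                      # -> 256 channels
)

# Magnitude-based pruning: zero out the 30% smallest weights of every
# convolution (the ratio is illustrative, not taken from the paper).
for module in backbone.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

x = torch.randn(1, 3, 416, 416)  # YOLO-style input resolution
print(backbone(x).shape)         # torch.Size([1, 256, 104, 104])
```

In a full system, a YOLO detection head would consume these feature maps, and 8-bit post-training quantization (for example, PyTorch's static quantization flow with a calibration pass) would further shrink the weights before mapping them onto the FPGA fabric; both steps are omitted here for brevity.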

Author information

Corresponding author

Correspondence to Riadh Ayachi.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Ayachi, R., Afif, M., Said, Y. et al. An edge implementation of a traffic sign detection system for Advanced driver Assistance Systems. Int J Intell Robot Appl 6, 207–215 (2022). https://doi.org/10.1007/s41315-022-00232-4
