Traffic signs and markings recognition based on lightweight convolutional neural network

  • Original article
  • Published in The Visual Computer

Abstract

Intelligent recognition of traffic signs and markings is an important component of autonomous driving and intelligent transportation systems, and an important theoretical basis for autonomous driving path planning. To address the low accuracy and poor real-time performance of traffic sign and marking detection in complex, multivariate scenes, a lightweight convolutional neural network recognition method for multiple interference scenes is proposed. First, traffic sign and marking images captured under low-illumination conditions are adaptively enhanced using Gamma correction and a contrast-limited histogram equalization algorithm. Then, MobileNet-V2 is fused with the DeepLab-V3+ algorithm to segment traffic signs and markings in multiple scenes. Finally, an identification model based on a Lightweight Convolutional Neural Network (Lw-CNN) is established to realize adaptive recognition of traffic signs and markings. The proposed method is verified on the CSUST Chinese Traffic Sign Detection Benchmark (CCTSDB). Experimental results show that the Mean Intersection over Union (MIoU) of the MobileNet-V2 and DeepLab-V3+ algorithm reaches 83.07%, an increase of 25.8% over segmentation without image enhancement. The recognition accuracy of the lightweight convolutional neural network is 99.92%, higher than that of MobileNet-V2 and VGG16.
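The enhancement step described above combines Gamma correction with contrast-limited histogram equalization. A minimal single-channel sketch in NumPy, assuming a fixed gamma and a global (untiled) clip-and-redistribute equalization; the paper's adaptive parameter selection and any per-tile (CLAHE-style) processing are not specified in the abstract:

```python
import numpy as np

def enhance_low_light(gray, gamma=0.6, clip_limit=0.01):
    """Gamma-correct a uint8 grayscale image, then apply a global
    contrast-limited histogram equalization."""
    # Gamma correction: gamma < 1 brightens dark regions.
    corrected = (np.clip(gray / 255.0, 0.0, 1.0) ** gamma * 255).astype(np.uint8)

    # Clip the histogram: any count above the ceiling is redistributed
    # uniformly across all bins, which limits contrast amplification.
    hist = np.bincount(corrected.ravel(), minlength=256).astype(np.float64)
    ceiling = clip_limit * corrected.size
    excess = np.maximum(hist - ceiling, 0).sum()
    hist = np.minimum(hist, ceiling) + excess / 256.0

    # Map intensities through the normalized cumulative histogram.
    cdf = np.cumsum(hist)
    cdf = cdf / cdf[-1]
    return (cdf[corrected] * 255).astype(np.uint8)
```

In practice this would run per channel (or on the luminance channel of a color space such as LAB) before the images are fed to the segmentation network.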
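MIoU, the segmentation metric reported above, is the per-class intersection over union averaged across classes, and can be computed from a confusion matrix. A small NumPy sketch (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union for integer label maps of equal shape."""
    # Build the num_classes x num_classes confusion matrix in one pass.
    idx = target.ravel() * num_classes + pred.ravel()
    cm = np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)
    inter = np.diag(cm).astype(np.float64)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)   # guard against empty classes
    return iou[union > 0].mean()         # average only over classes present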

Code availability

Some or all of the code used during the study are available on request from the corresponding author.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (52072054), the Special Key Project of Chongqing Technology Innovation and Application Development (cstc2021jscx-cylh0026), and the Open Funding Key Laboratory of Industry and Information Technology (2021KFKT01).

Author information

Corresponding author

Correspondence to Zhikun Gong.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhao, S., Gong, Z. & Zhao, D. Traffic signs and markings recognition based on lightweight convolutional neural network. Vis Comput 40, 559–570 (2024). https://doi.org/10.1007/s00371-023-02801-5
