All-weather road drivable area segmentation method based on CycleGAN


Abstract

Segmenting the drivable area of a road is a challenging task in autonomous driving systems. Convolutional neural networks perform well on road segmentation, but existing methods focus on improving performance under good road conditions and pay little attention to performance under severe weather. In this paper, an image enhancement network (IEC-Net) based on CycleGAN is proposed to enhance the diverse features of input images. First, an unsupervised CycleGAN network is trained to enhance the features of road images captured under severe weather, producing enhanced images with rich feature information. Second, the enhanced image is fed into a state-of-the-art semantic segmentation network to segment the drivable area of the road. Experimental results show that the CycleGAN-based IEC-Net can be combined directly with any advanced semantic segmentation network, supports end-to-end training, and greatly improves the road segmentation performance of the original segmentation network under severe weather conditions.
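The abstract describes a two-stage pipeline: a CycleGAN-style generator first enhances an adverse-weather road image, and a standard semantic segmentation network then predicts the drivable area from the enhanced image. The sketch below is a minimal illustration of that pipeline, not the authors' released implementation: the names EnhanceGenerator, ResnetBlock, and segment_drivable_area are hypothetical, an off-the-shelf DeepLabV3 head stands in for whichever segmentation network IEC-Net is paired with, and it assumes PyTorch with torchvision ≥ 0.13. In practice the trained CycleGAN generator weights would be loaded before inference.

```python
# Minimal sketch of the two-stage "enhance then segment" pipeline (illustrative only).
import torch
import torch.nn as nn
import torchvision.models.segmentation as seg


class ResnetBlock(nn.Module):
    """Residual block of the kind commonly used in CycleGAN generators."""
    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection


class EnhanceGenerator(nn.Module):
    """CycleGAN-style generator mapping severe-weather images to enhanced images (hypothetical stand-in for IEC-Net)."""
    def __init__(self, channels: int = 64, n_blocks: int = 6):
        super().__init__()
        layers = [
            nn.ReflectionPad2d(3),
            nn.Conv2d(3, channels, kernel_size=7),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
        ]
        layers += [ResnetBlock(channels) for _ in range(n_blocks)]
        layers += [
            nn.ReflectionPad2d(3),
            nn.Conv2d(channels, 3, kernel_size=7),
            nn.Tanh(),
        ]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)


def segment_drivable_area(image: torch.Tensor) -> torch.Tensor:
    """Enhance an adverse-weather image, then segment it with an off-the-shelf network."""
    enhancer = EnhanceGenerator().eval()  # trained CycleGAN generator weights would be loaded here
    segmenter = seg.deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2).eval()
    with torch.no_grad():
        enhanced = enhancer(image)            # stage 1: feature enhancement
        logits = segmenter(enhanced)["out"]   # stage 2: semantic segmentation (road vs. background)
    return logits.argmax(dim=1)               # per-pixel drivable-area mask


if __name__ == "__main__":
    dummy = torch.rand(1, 3, 256, 512)        # placeholder road image
    mask = segment_drivable_area(dummy)
    print(mask.shape)                         # torch.Size([1, 256, 512])
```

Because the enhancement stage is just another differentiable module placed in front of the segmentation network, the two stages can be chained and trained end to end, which is the property the abstract highlights.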




Funding

This research was funded by the National Natural Science Foundation of China (grant number 62163005) and the Natural Science Foundation of Guangxi Province (grant number 2022GXNSFAA035633).

Author information


Corresponding author

Correspondence to Long Teng.

Ethics declarations

Conflict of interest

All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jiqing, C., Depeng, W., Teng, L. et al. All-weather road drivable area segmentation method based on CycleGAN. Vis Comput 39, 5135–5151 (2023). https://doi.org/10.1007/s00371-022-02650-8


  • DOI: https://doi.org/10.1007/s00371-022-02650-8
