Lane Detection Method under Low-Light Conditions Combining Feature Aggregation and Light Style Transfer

  • Published in: Automatic Control and Computer Sciences

Abstract

Deep learning is widely used for lane detection, but applying it under conditions such as occlusion and low light remains challenging. On the one hand, an ordinary convolutional neural network (CNN) cannot recover lane information on both sides of an occlusion under low-light conditions. On the other hand, only a small amount of lane data (such as CULane) has been collected under low-light conditions, and newly collected data require considerable manual labeling. To address these problems, we propose a double attention recurrent feature-shift aggregator (DARESA) module, which exploits prior knowledge of lane shape in the spatial and channel dimensions and enriches the original lane features by repeatedly propagating pixel information across rows and columns. This indirectly increases the global feature information and improves the network's ability to extract fine-grained features. Moreover, we trained an unsupervised low-light style transfer model suited to autonomous driving scenes. The model transfers daytime images from the CULane dataset to low-light images, eliminating the cost of manual labeling. Adding an appropriate number of generated images to the training set enhances the environmental adaptability of the lane detector, yielding better detection results than training on CULane alone.
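To make the aggregation mechanism concrete, the following PyTorch module is a minimal sketch of the recurrent feature-shift idea that DARESA builds on: feature rows and columns are repeatedly rolled by a halving stride and re-added after a 1D convolution. It is deliberately simplified (one shift direction per axis, and the double attention branches are omitted); the class name, kernel sizes, channel count, and iteration count are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentFeatureShiftSketch(nn.Module):
    """Toy RESA-style aggregator: each iteration adds to every row (column)
    the convolved features of a row (column) a fixed stride away; halving
    the stride each iteration spreads information across the whole map."""

    def __init__(self, channels: int, height: int, width: int, iters: int = 4):
        super().__init__()
        self.iters, self.height, self.width = iters, height, width
        # 1x9 kernels process rows arriving from a vertical shift,
        # 9x1 kernels process columns arriving from a horizontal shift.
        self.vert_convs = nn.ModuleList(
            nn.Conv2d(channels, channels, (1, 9), padding=(0, 4), bias=False)
            for _ in range(iters)
        )
        self.horiz_convs = nn.ModuleList(
            nn.Conv2d(channels, channels, (9, 1), padding=(4, 0), bias=False)
            for _ in range(iters)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for k in range(self.iters):
            # roll rows down by height / 2^(k+1), wrapping around the map
            stride = max(self.height // 2 ** (k + 1), 1)
            x = x + F.relu(self.vert_convs[k](torch.roll(x, stride, dims=2)))
            # roll columns right by width / 2^(k+1)
            stride = max(self.width // 2 ** (k + 1), 1)
            x = x + F.relu(self.horiz_convs[k](torch.roll(x, stride, dims=3)))
        return x

# Example: a CULane-like backbone feature map (288x800 input, 8x downsampling).
features = torch.randn(1, 128, 36, 100)
out = RecurrentFeatureShiftSketch(128, height=36, width=100)(features)
print(out.shape)  # torch.Size([1, 128, 36, 100])
```

Because the stride halves at each iteration, after roughly log2(H) rounds every pixel has indirectly received information from its entire row and column, which is what lets the detector infer lane positions across occluded or poorly lit regions.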
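The augmentation side of the method can be sketched in the same spirit. The outline below applies a trained day-to-night generator offline to daytime CULane frames; the checkpoint name, directory layout, 288x800 resolution, and 20% mixing ratio are all illustrative assumptions rather than the paper's exact settings.

```python
import random
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

# Hypothetical checkpoint: the day-to-night half of an unsupervised
# image-to-image translation model (UNIT/CycleGAN-style), exported
# with TorchScript. The file name is an illustrative assumption.
generator = torch.jit.load("day2night_generator.pt").eval()

to_tensor = transforms.Compose([
    transforms.Resize((288, 800)),  # common CULane training resolution
    transforms.ToTensor(),
])
to_image = transforms.ToPILImage()

@torch.no_grad()
def translate(img_path: Path, out_dir: Path) -> None:
    """Render one daytime frame in a low-light style. The source frame's
    lane annotations are reused verbatim, so no manual relabeling is needed."""
    x = to_tensor(Image.open(img_path).convert("RGB")).unsqueeze(0)
    y = generator(x).squeeze(0).clamp(0, 1)  # assumes outputs in [0, 1]
    out_dir.mkdir(parents=True, exist_ok=True)
    to_image(y).save(out_dir / img_path.name)

# Translate only a fraction of the daytime frames: the abstract stresses an
# "appropriate number" of generated images, so the ratio is a tunable knob.
day_frames = sorted(Path("culane/day").glob("*.jpg"))
for p in random.sample(day_frames, k=len(day_frames) // 5):  # e.g., 20%
    translate(p, Path("culane/generated_night"))
```

Keeping the mixing ratio moderate matters: the abstract reports that an appropriate number of generated images improves robustness over training on CULane alone.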




Author information


Corresponding author

Correspondence to Feng Liang.

Ethics declarations

The authors declare that they have no conflicts of interest.

About this article


Cite this article

Lou, J., Liang, F., Qu, Z., et al., Lane Detection Method under Low-Light Conditions Combining Feature Aggregation and Light Style Transfer, Aut. Control Comp. Sci., 2023, vol. 57, pp. 143–153. https://doi.org/10.3103/S0146411623020050

