
Refinecurvelane: lane detection with B-spline curve in a layer-by-layer refinement manner

  • Regular Paper
  • Published in Multimedia Systems

Abstract

Lane detection from front-view RGB images has been a long-standing challenge. Among the various methods, curve-based approaches are known for their speed, conciseness, and ability to handle occlusions. However, these methods often suffer from relatively low accuracy, attributable to the inflexibility of the adopted curve model, inefficient lane feature extraction, and rigid curve regression supervision. In this paper, we propose a novel curve-based lane detection method that addresses these limitations. Lane lines are modeled with B-splines, which provide greater flexibility. Explicit spatial attention maps guide the network in extracting relevant lane features from the image, and a layer-by-layer refinement process improves the lane predictions. Importantly, the ground truth of the spatial attention maps also serves as pixel-level supervision for the lane instances. We evaluate the proposed method on four widely used lane detection datasets and demonstrate state-of-the-art performance among curve-based approaches on the CULane and LLAMAS datasets.
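The B-spline lane model mentioned in the abstract can be illustrated with a minimal sketch: a handful of 2D control points plus a clamped knot vector define a smooth curve that can be sampled densely along the lane. The control-point coordinates below are hypothetical, and the sketch uses `scipy.interpolate.BSpline` as a stand-in rather than the authors' actual implementation.

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical control points (x, y) in image coordinates for one lane line.
ctrl = np.array([[320.0, 590.0],
                 [340.0, 450.0],
                 [380.0, 320.0],
                 [430.0, 200.0]])

k = 3                 # cubic B-spline
n = len(ctrl)         # number of control points
# Clamped knot vector (length n + k + 1) so the curve interpolates
# the first and last control points.
t = np.concatenate([np.zeros(k),
                    np.linspace(0.0, 1.0, n - k + 1),
                    np.ones(k)])

spline = BSpline(t, ctrl, k)        # vector-valued spline: u -> (x, y)
u = np.linspace(0.0, 1.0, 72)       # 72 sample parameters along the lane
pts = spline(u)                     # dense lane points, shape (72, 2)
```

Moving a single control point reshapes only the nearby portion of the curve (local support), which is one reason B-splines are more flexible than the global polynomial models used in earlier curve-based detectors.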


Data availability

All data supporting the findings of this study are publicly available.


Acknowledgements

This project is supported by the Natural Science Foundation of Chongqing [CSTB2023NSCQ-MSX0063], the original research project of Tongji University [22120220593], the National Key R&D Program of China [2021YFB2501104], and the Shanghai Municipal Science and Shanghai Automotive Industry Science and Technology Development Foundation [2407].

Author information

Authors and Affiliations

Authors

Contributions

Concept: W. Tian; Method: W. Tian, Y. Han, Y. Huang, X. Yu; Implementation: Y. Han; Writing: W. Tian, Y. Han; Review: W. Tian, Y. Huang; Project management: W. Tian.

Corresponding author

Correspondence to Wei Tian.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Tian, W., Han, Y., Huang, Y. et al. Refinecurvelane: lane detection with B-spline curve in a layer-by-layer refinement manner. Multimedia Systems 30, 343 (2024). https://doi.org/10.1007/s00530-024-01557-9



  • DOI: https://doi.org/10.1007/s00530-024-01557-9

Keywords