Abstract
Although deep-learning-based lane detection has made great progress in complex scenarios, the real-time performance of most models still leaves room for improvement. Row-wise classification is currently the mainstream approach to improving real-time performance, trading some accuracy for speed. However, many models based on row-wise classification are not strong enough at extracting spatial contextual information, which hinders lane recognition. Inspired by Feature Pyramid Networks, we propose SIE-Net, a simple and lightweight framework based on row-wise classification that fully extracts spatial position information from the image. The framework fuses the semantic information contained in deep feature maps with the spatial information contained in shallow feature maps. Dilated convolution is then used during feature extraction, enlarging the model's receptive field and capturing more global information from the image. A channel attention mechanism is also applied during feature extraction, assigning greater weight to channels that carry lane structure information. Finally, experimental results on the two popular TuSimple and CULane benchmark datasets demonstrate the effectiveness of the proposed method in terms of both accuracy and speed.
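The channel attention mechanism the abstract refers to follows the squeeze-and-excitation idea cited below (Hu et al. 2018): globally pool each channel, pass the pooled vector through a small bottleneck, and rescale channels by the resulting weights. The following is a minimal NumPy sketch of that mechanism, not the authors' SIE-Net implementation; the function name, weight shapes, and reduction ratio are illustrative assumptions.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel reweighting (illustrative sketch).

    feat: (C, H, W) feature map
    w1:   (C // r, C) bottleneck weights (reduction ratio r)
    w2:   (C, C // r) expansion weights
    """
    # Squeeze: global average pool each channel -> vector of length C
    z = feat.mean(axis=(1, 2))
    # Excitation: bottleneck FC + ReLU, then expansion FC + sigmoid
    s = np.maximum(w1 @ z, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))          # per-channel weights in (0, 1)
    # Reweight: scale each channel by its learned importance
    return feat * s[:, None, None]

# Usage with hypothetical shapes
rng = np.random.default_rng(0)
C, r = 8, 2
feat = rng.standard_normal((C, 4, 4))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the sigmoid keeps every weight strictly between 0 and 1, the block can only attenuate channels, never amplify them; channels whose pooled statistics suggest lane structure are attenuated least.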
Data availability
The data that support the findings of this study are openly available in the TuSimple and CULane datasets at https://github.com/TuSimple/tusimple-benchmark/issues/3 and https://xingangpan.github.io/projects/CULane.html, respectively.
References
Chiu K, Lin S (2005) Lane detection using color-based segmentation. Proceedings of the IEEE Intelligent Vehicles Symposium: 706–711. https://doi.org/10.1109/IVS.2005.1505186
Ghafoorian M, Nugteren C, Baka N, Booij O, Hofmann M (2018) EL-GAN: Embedding loss driven generative adversarial networks for lane detection. Proceedings of the European Conference on Computer Vision (ECCV) Workshops. https://doi.org/10.1007/978-3-030-11009-3_15
Ghazali K, Xiao R, Ma J (2012) Road lane detection using h-maxima and improved hough transform. 2012 Fourth International Conference on Computational Intelligence, Modelling and Simulation: 205–208. https://doi.org/10.1109/CIMSim.2012.31
He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition:770–778. https://doi.org/10.1109/CVPR.2016.90
Hou Y, Ma Z, Liu C, Loy C (2019) Learning lightweight lane detection cnns by self attention distillation. Proceedings of the IEEE International Conference on Computer Vision: 1013–1021. https://doi.org/10.48550/arXiv.1908.00821
Hou Y, Ma Z, Liu C et al (2020) Inter-region affinity distillation for road marking segmentation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/CVPR42600.2020.01250
Hu J, Shen L, Sun G (2018) Squeeze-and-excitation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 7132–7141. https://doi.org/10.1109/TPAMI.2019.2913372
Jayasinghe O, Anhettigama D, Hemachandra S et al (2021) SwiftLane: towards fast and efficient lane detection. IEEE International Conference on Machine Learning and Applications. https://doi.org/10.1109/ICMLA52953.2021.00142
Kluge K, Lakshmanan S (1995) A deformable-template approach to lane detection. Proceedings of the IEEE Intelligent Vehicles Symposium: 54–59. https://doi.org/10.1109/IVS.1995.528257
Ko Y, Lee Y, Azam S et al (2020) Key points estimation and point instance segmentation approach for lane detection. IEEE Trans Intell Transp Syst: 1–10. https://doi.org/10.1109/TITS.2021.3088488
Lee J-W, Cho J-S (2009) Effective lane detection and tracking method using statistical modeling of color and lane edge-orientation. Fourth International Conference on Computer Sciences and Convergence Information Technology: 1586–1591. https://doi.org/10.1109/ICCIT.2009.81
Lee M, Lee J, Lee D et al (2021) Robust lane detection via expanded self attention. 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). https://doi.org/10.1109/WACV51458.2022.00201
Lin T-Y, Dollár P, Girshick R, He K, Hariharan B, Belongie S (2017) Feature pyramid networks for object detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR): 936–944. https://doi.org/10.1109/CVPR.2017.106
Liu T, Chen Z, Yang Y, Wu Z, Li H (2019) Lane detection in low-light conditions using an efficient data enhancement: light conditions style transfer. 2020 IEEE Intelligent Vehicles Symposium (IV): 1394–1399. https://doi.org/10.48550/arXiv.2002.01177
Liu YB, Zeng M, Meng QH (2020) Heatmap-based vanishing point boosts lane detection. arXiv preprint. https://doi.org/10.48550/arXiv.2007.15602
Neven D et al (2018) Towards end-to-end lane detection: an instance segmentation approach. 2018 IEEE Intelligent Vehicles Symposium (IV). https://doi.org/10.1109/IVS.2018.8500547
Pan X, Shi J, Luo P, Wang X, Tang X (2018) Spatial as deep: Spatial CNN for traffic scene understanding. AAAI Conference on Artificial Intelligence (AAAI). https://doi.org/10.48550/arXiv.1712.06080
Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, Desmaison A, Kopf A, Yang E, DeVito Z, Raison M, Tejani A, Chilamkurthy S, Steiner B, Fang L, Chintala S (2019) PyTorch: An imperative style, high-performance deep learning library. Adv Neural Inform Process Syst 32:8024–8035. https://doi.org/10.48550/arXiv.1912.01703
Qin Z, Wang H, Li X (2020) Ultra fast structure-aware deep lane detection. The European Conference on Computer Vision (ECCV). https://doi.org/10.1007/978-3-030-58586-0_17
Tabelini L, Berriel R, Paixão TM, Badue C, De Souza AF, Oliveira-Santos T (2021) Keep your eyes on the lane: real-time attention-guided lane detection. Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.48550/arXiv.2010.12035
TuSimple. https://github.com/TuSimple/tusimple-benchmark/issues/3
Woo S et al (2018) CBAM: Convolutional block attention module. Proceedings of the European Conference on Computer Vision (ECCV): 3–19. https://doi.org/10.48550/arXiv.1807.06521
Xu H, Wang S, Cai X, Zhang W, Liang X, Li Z (2020) CurveLane-NAS: Unifying lane-sensitive architecture search and adaptive point blending. European Conference on Computer Vision (ECCV): 689–704. https://doi.org/10.1007/978-3-030-58555-6_41
Yoo S, Lee H, Myeong H, Yun S, Park H, Cho J, Kim D (2020) End-to-end lane marker detection via row-wise classification. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops: 1006–1007. https://doi.org/10.48550/arXiv.2005.08630
Yu B, Jain A (1997) Lane boundary detection using a multiresolution hough transform. Proceedings of International Conference on Image Processing: 748–751. https://doi.org/10.1109/ICIP.1997.638604
Yu F, Koltun V (2015) Multi-scale context aggregation by dilated convolutions. ICLR 2016. https://doi.org/10.48550/arXiv.1511.07122
Zheng T, Fang H, Zhang Y, Tang W, Yang Z, Liu H, Cai D (2021) RESA: Recurrent feature-shift aggregator for lane detection. AAAI. https://doi.org/10.48550/arXiv.2008.13719
Ethics declarations
Conflicts of interests
The authors declare that they have no conflicts of interest related to this work. The people involved in the experiment were informed and gave formal consent.
About this article
Cite this article
Tan, X., Li, S. & Yan, H. Fast lane detection for extracting spatial location information. Multimed Tools Appl 82, 21743–21756 (2023). https://doi.org/10.1007/s11042-023-14845-9