
CFTNet: Cross-Scale Feature Transfer for Lane Detection

Published: 28 June 2024

Abstract

Lane detection is a critical task in autonomous driving and advanced driver-assistance systems, requiring accurate and robust identification of lane markings. Most current lane detection methods cannot efficiently exploit all available features, especially those produced by the feature pyramid network. Moreover, these methods fail to focus precisely on lane lines during detection, which leads to unsatisfactory results. In this work, we propose a novel lane detection network that makes better use of global features. First, we design a cross-scale feature transfer module that exchanges features between different layers, allowing each layer to incorporate richer information for accurate lane detection. Second, we incorporate a multi-attention integration module into the network, which helps it handle long, straight lane markings while also improving detection of diagonal and curved lanes. Experimental results on public benchmark datasets show that the proposed network outperforms state-of-the-art methods in accuracy and robustness, making it a promising solution for lane detection under dynamic driving conditions.
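Below is a minimal PyTorch-style sketch of the two components the abstract describes: a cross-scale feature transfer step that lets each feature pyramid level fuse information resized from the other levels, and a multi-attention integration step that attends along rows and columns of a feature map. This is not the authors' implementation; the module names, channel widths, head counts, and the row/column (axial-style) attention used here are assumptions made purely for illustration.

# Illustrative sketch only; module names, channel sizes, and the use of
# row/column attention are assumptions, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossScaleFeatureTransfer(nn.Module):
    """Exchange information between FPN levels so each level sees
    context from coarser and finer scales (illustrative only)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, feats):
        # feats: list of 3 FPN maps, finest first, e.g. [P3, P4, P5]
        outs = []
        for f in feats:
            h, w = f.shape[-2:]
            # resize every level to the current level's resolution, concatenate,
            # fuse with a 1x1 conv, and add back the original map (residual)
            resized = [F.interpolate(x, size=(h, w), mode="bilinear",
                                     align_corners=False) for x in feats]
            outs.append(self.fuse(torch.cat(resized, dim=1)) + f)
        return outs


class MultiAttentionIntegration(nn.Module):
    """Combine row-wise and column-wise self-attention so both long straight
    lanes and diagonal/curved lanes are emphasized (illustrative only)."""

    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        # self-attention across the width positions of each row
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        rows = rows.reshape(b, h, w, c).permute(0, 3, 1, 2)
        # self-attention across the height positions of each column
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        cols = cols.reshape(b, w, h, c).permute(0, 3, 2, 1)
        return x + rows + cols


if __name__ == "__main__":
    # toy FPN outputs at three scales, all projected to 64 channels
    feats = [torch.randn(1, 64, 40, 100),
             torch.randn(1, 64, 20, 50),
             torch.randn(1, 64, 10, 25)]
    feats = CrossScaleFeatureTransfer(64)(feats)
    out = MultiAttentionIntegration(64)(feats[0])
    print(out.shape)  # torch.Size([1, 64, 40, 100])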

    Published In

    ICRSA '23: Proceedings of the 2023 6th International Conference on Robot Systems and Applications
    September 2023
    335 pages
    ISBN: 9798400708039
    DOI: 10.1145/3655532

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Feature utilization and transfer
    2. Lane detection
    3. Multi-attention Module

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • Graduate Innovation Fund of Wuhan Institute of Technology
    • Central Government Guides Local Science and Technology Development Special Projects
    • National Natural Science Foundation of China
    • Key R&D Program in Hubei Province, China

    Conference

    ICRSA 2023
