
ULODNet: A Unified Lane and Obstacle Detection Network Towards Drivable Area Understanding in Autonomous Navigation

  • Regular paper
  • Published:
Journal of Intelligent & Robotic Systems

Abstract

Drivable area understanding is an essential problem in autonomous robot navigation. Mobile robots and other autonomous vehicles need to perceive elements of their surrounding environment, such as obstacles, lanes, and free space, to ensure safety. Many recent works have made great progress by building on breakthroughs in deep learning. However, those methods address lane and obstacle detection separately, which in some cases leads to redundant use of computational resources. We therefore present a unified lane and obstacle detection network, ULODNet, which detects lanes and obstacles jointly and further delineates the drivable area for mobile robots and other autonomous vehicles. To support the training of ULODNet, we also create a new dataset, the CULane-ULOD Dataset, based on the widely used CULane Dataset; the new dataset contains both lane labels and obstacle labels, which the original dataset lacks. Finally, to construct an integrated autonomous driving scheme, an area-intersection paradigm is introduced that generates driving commands by computing the proportion of obstacle area within the drivable region. Carefully designed comparison experiments verify the efficiency and effectiveness of the proposed algorithm.
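The area-intersection paradigm is only summarized above, so the following Python snippet is a minimal sketch of the idea rather than the authors' implementation (the repository linked under Code Availability contains theirs). The mask representation, function names, and the 0.3 stop threshold are illustrative assumptions.

```python
import numpy as np

def obstacle_proportion(drivable_mask: np.ndarray, obstacle_mask: np.ndarray) -> float:
    """Fraction of the drivable region covered by detected obstacles.

    Both inputs are H x W boolean masks: drivable_mask marks pixels inside
    the drivable area framed by the detected lanes, and obstacle_mask marks
    pixels inside the detected obstacle boxes.
    """
    drivable_pixels = drivable_mask.sum()
    if drivable_pixels == 0:
        return 1.0  # no visible drivable area: treat as fully blocked
    overlap = np.logical_and(drivable_mask, obstacle_mask).sum()
    return float(overlap) / float(drivable_pixels)

def driving_command(drivable_mask: np.ndarray, obstacle_mask: np.ndarray,
                    stop_threshold: float = 0.3) -> str:
    """Map the obstacle-area proportion to a coarse driving command.

    The 0.3 threshold is a placeholder; this excerpt does not state the
    value used in the paper.
    """
    ratio = obstacle_proportion(drivable_mask, obstacle_mask)
    return "STOP" if ratio > stop_threshold else "GO"
```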


Code Availability

The project is open source at https://github.com/phosphenesvision/ULODNet.


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 61922076, Grant 61873252, and Grant 61725304.


Author information

Authors and Affiliations

Authors

Contributions

Zhanpeng Zhang: Coding and writing; Jiahu Qin: Writing and review; Shuai Wang: Writing and review; Yu Kang: Review; Qingchen Liu: Review.

Corresponding author

Correspondence to Jiahu Qin.

Ethics declarations

Conflict of Interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Availability of data and materials

Our CULane-ULOD Dataset is open source at https://rec.ustc.edu.cn/share/9f97cc30-00f2-11ec-b059-a7276527f2db. A demo video of the proposed autonomous driving scheme is available at https://rec.ustc.edu.cn/share/f6c83d00-ff47-11eb-bf42-07a0d8061db3.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, Z., Qin, J., Wang, S. et al. ULODNet: A Unified Lane and Obstacle Detection Network Towards Drivable Area Understanding in Autonomous Navigation. J Intell Robot Syst 105, 4 (2022). https://doi.org/10.1007/s10846-022-01606-3



  • DOI: https://doi.org/10.1007/s10846-022-01606-3
