
Detection of the drivable area on high-speed road via YOLACT

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

In intelligent driving, detection and ranging of the drivable area are key technologies for path planning. To achieve fast and accurate detection and segmentation of the drivable area, we propose the YOLACT_ResNet38_TFPN network as an improved design of YOLACT. The original YOLACT uses a ResNet101 residual backbone; we reduce the network to ResNet38 and add the C6 and C7 layers, whose larger receptive fields improve detection speed on the drivable area. Moreover, the FPN (feature pyramid network) is redesigned as a three-scale structure to match C6 and C7. Based on the characteristics of road images, the aspect ratios of the three anchors are redesigned to further improve detection accuracy and speed. A univariate linear regression model is designed to accurately calculate the distance of the drivable area; the model parameters are obtained by linear fitting over multiple sets of distance measurement data. Finally, YOLACT_ResNet38_TFPN achieves 46.13 FPS on the drivable area, with a box mAP of 62.36 and a mask mAP of 61.36. The maximum ranging error of this method within 96.90 m is 5.3 m. It can quickly and accurately measure the drivable distance and provide reasonable path-driving suggestions for intelligent driving.
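The univariate linear ranging model described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the calibration pairs, the choice of image-plane measurement (here a generic scalar `u`, e.g. a pixel coordinate associated with the far edge of the drivable area), and the function names are all assumptions. It shows only the general mechanics of fitting the model parameters by least squares from multiple sets of distance measurements and then predicting a distance.

```python
import numpy as np

def fit_ranging_model(u, d):
    """Least-squares fit of the univariate linear model d = a*u + b,
    where u is an image-plane measurement and d the measured distance (m)."""
    A = np.column_stack([u, np.ones_like(u)])  # design matrix [u, 1]
    (a, b), *_ = np.linalg.lstsq(A, d, rcond=None)
    return a, b

def estimate_distance(u_new, a, b):
    """Predict the drivable distance for a new image-plane measurement."""
    return a * u_new + b

# Hypothetical calibration data: image measurements paired with
# ground-truth distances in metres (illustrative values only).
u_cal = np.array([420.0, 380.0, 350.0, 330.0, 315.0, 305.0])
d_cal = np.array([10.0, 20.0, 35.0, 50.0, 70.0, 96.9])

a, b = fit_ranging_model(u_cal, d_cal)
```

In practice the fitted model would be validated against held-out measurements, which is how a maximum ranging error such as the reported 5.3 m within 96.90 m could be assessed.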



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 11971034) and in part by the 2020 Anhui Provincial Engineering Laboratory on Information Fusion and Control of Intelligent Robot Open Project (No. ifcir2020002).

Author information

Correspondence to Guili Wang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, G., Zhang, B., Wang, H. et al. Detection of the drivable area on high-speed road via YOLACT. SIViP 16, 1623–1630 (2022). https://doi.org/10.1007/s11760-021-02117-8

