
Winding pathway understanding based on angle projections in a field environment

Published in: Applied Intelligence

Abstract

Scene understanding is a core problem for autonomous navigation. However, its implementation is hindered by a number of unsettled issues, such as understanding winding pathways in unknown, dynamic field environments. Traditional three-dimensional (3D) estimation from 3D point clouds or fused sensor data is memory- and energy-intensive, which makes these approaches less practical for a resource-constrained field robot with limited computation, memory, and energy. In this study, we present a methodology that understands winding field pathways and reconstructs them in a 3D environment using a low-cost monocular camera, without prior training. Winding angle projections are assigned to clusters; by composing subclusters, candidate surfaces are formed. Based on geometric inferences of integrity and orientation, a field pathway can be approximately understood and reconstructed from straight and winding surfaces in a 3D scene. Because the approach relies on geometric inference, no prior training is needed, and it is robust to colour and illumination. Accuracy was evaluated as the percentage of incorrectly classified pixels relative to the ground truth. Experimental results demonstrated that the method can successfully understand winding pathways, meeting the requirements for robot navigation in an unstructured environment.
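The abstract states that accuracy was evaluated as the percentage of incorrectly classified pixels relative to a ground-truth labelling. As a minimal sketch of how such a metric can be computed (the function name and the toy label maps below are illustrative assumptions, not part of the paper), the two segmentations can be compared element-wise as integer label arrays:

```python
import numpy as np

def pixel_error_rate(predicted: np.ndarray, ground_truth: np.ndarray) -> float:
    """Percentage of pixels whose predicted label differs from the ground truth.

    Both arrays are integer label maps of identical shape (e.g. 0 = background,
    1 = pathway). This is a generic metric sketch, not the authors' exact code.
    """
    if predicted.shape != ground_truth.shape:
        raise ValueError("prediction and ground truth must have the same shape")
    mismatched = np.count_nonzero(predicted != ground_truth)
    return 100.0 * mismatched / predicted.size

# Toy 2x3 label maps: exactly one of the six pixels disagrees.
pred = np.array([[1, 1, 0],
                 [0, 0, 1]])
gt   = np.array([[1, 1, 0],
                 [0, 0, 0]])
# pixel_error_rate(pred, gt) -> 100 * 1/6, i.e. about 16.67%
```

A lower value indicates closer agreement with the ground truth; 0% means the predicted segmentation matches it exactly.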



Acknowledgements

This work was supported by the NSFC Project (Project Nos. 62003212, 61771146 and 61375122).

Author information

Corresponding author

Correspondence to Luping Wang.


Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, L., Wei, H. Winding pathway understanding based on angle projections in a field environment. Appl Intell 53, 16859–16874 (2023). https://doi.org/10.1007/s10489-022-04325-2


Keywords

Navigation