
End-to-End Path Estimation and Automatic Dataset Generation for Robot Navigation in Plant-Rich Environments

  • Conference paper
  • Published in: Intelligent Autonomous Systems 17 (IAS 2022)
  • Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 577)

Abstract

This paper describes a method of estimating a path to follow from an image in plant-rich environments such as greenhouses and unstructured outdoor scenes. In such environments, several factors make it difficult for robots to determine a path, such as plants covering the path and ambiguous path boundaries. Approaches based on segmentation of traversable regions cannot be applied to such environments because the regions may not be clearly defined or may be occluded. In this work, we propose a method of estimating a path from a single image in an end-to-end fashion. We also develop an automatic annotation method that utilizes the robot’s trajectory during the data acquisition phase. We conducted a real-world robot navigation experiment and confirmed that the proposed method is capable of navigating paths partially covered by plants. We also confirmed that the proposed data annotation method can generate training data more efficiently than manual annotation.
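To illustrate the automatic annotation idea summarized above, the sketch below projects a recorded robot trajectory into a camera image so that the traversed path can serve as a label without manual annotation. This is a minimal sketch only, assuming a pinhole camera model with known intrinsics and a world-to-camera transform; the function name, argument conventions, and label format are illustrative assumptions, not taken from the paper.

    import numpy as np

    def project_trajectory_to_image(trajectory_world, T_world_to_cam, K, image_size):
        # trajectory_world: (N, 3) robot positions recorded during data acquisition
        # T_world_to_cam:   (4, 4) homogeneous transform from world to camera frame
        # K:                (3, 3) pinhole camera intrinsics
        # image_size:       (height, width) of the camera image
        h, w = image_size
        pts_h = np.hstack([trajectory_world, np.ones((len(trajectory_world), 1))])  # N x 4 homogeneous
        cam = (T_world_to_cam @ pts_h.T).T[:, :3]        # trajectory points in the camera frame
        cam = cam[cam[:, 2] > 0]                         # keep only points in front of the camera
        uv = (K @ cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]                      # perspective division -> pixel coordinates
        in_view = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        return uv[in_view]                               # pixel path usable as a training label

The projected pixel coordinates could then be rasterized (for example, drawn as a thick polyline) to form a per-image path annotation for training an end-to-end path estimation network.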



Author information

Correspondence to Yoshinobu Uzawa.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Uzawa, Y., Matsuzaki, S., Masuzawa, H., Miura, J. (2023). End-to-End Path Estimation and Automatic Dataset Generation for Robot Navigation in Plant-Rich Environments. In: Petrovic, I., Menegatti, E., Marković, I. (eds) Intelligent Autonomous Systems 17. IAS 2022. Lecture Notes in Networks and Systems, vol 577. Springer, Cham. https://doi.org/10.1007/978-3-031-22216-0_19

