Abstract
This paper describes a method for estimating a path to follow from an image in plant-rich environments such as greenhouses and unstructured outdoor scenes. In such environments, several factors make it difficult for robots to determine a path, such as plants covering the path and ambiguous path boundaries. Approaches based on segmentation of traversable regions cannot be applied in such environments because the regions may not be clearly defined or may be occluded. In this work, we propose a method for estimating a path from a single image in an end-to-end fashion. We also develop an automatic annotation method that utilizes the robot's trajectory recorded during the data acquisition phase. We conducted a real-world robot navigation experiment and confirmed that the proposed method can navigate paths partially covered by plants. We also confirmed that the proposed data annotation method generates training data more efficiently than manual annotation.
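The abstract only summarises the trajectory-based automatic annotation idea; the sketch below is a rough illustration of how such labels could be generated, by projecting the robot positions recorded after an image was captured back into that image. The pinhole camera model, the function name, and all parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): project recorded robot positions
# into an earlier camera image to obtain path labels automatically.
import numpy as np

def project_trajectory(K, T_world_to_cam, trajectory_xyz, image_shape):
    """Project world-frame trajectory points into the image plane.

    K              : 3x3 camera intrinsic matrix
    T_world_to_cam : 4x4 transform from the world frame to the camera frame
    trajectory_xyz : (N, 3) robot positions recorded after this image
    image_shape    : (height, width) of the image to label
    Returns (M, 2) integer pixel coordinates of the visible path points.
    """
    h, w = image_shape
    # Homogeneous world points -> camera frame
    pts_w = np.hstack([trajectory_xyz, np.ones((len(trajectory_xyz), 1))])
    pts_c = (T_world_to_cam @ pts_w.T).T[:, :3]
    # Keep only points in front of the camera
    pts_c = pts_c[pts_c[:, 2] > 0.1]
    # Pinhole projection
    uv = (K @ pts_c.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    # Keep points that fall inside the image
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside].astype(int)
```

Under these assumptions, the returned pixels could then be rasterised (for example as a fixed-width polyline) into a mask that serves as the training target for an end-to-end path-estimation network.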
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Uzawa, Y., Matsuzaki, S., Masuzawa, H., Miura, J. (2023). End-to-End Path Estimation and Automatic Dataset Generation for Robot Navigation in Plant-Rich Environments. In: Petrovic, I., Menegatti, E., Marković, I. (eds) Intelligent Autonomous Systems 17. IAS 2022. Lecture Notes in Networks and Systems, vol 577. Springer, Cham. https://doi.org/10.1007/978-3-031-22216-0_19
DOI: https://doi.org/10.1007/978-3-031-22216-0_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-22215-3
Online ISBN: 978-3-031-22216-0
eBook Packages: Intelligent Technologies and Robotics (R0)