Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion – Part A: Tree detection
Introduction
Potentially, agricultural robots could play an important role in implementing different agricultural tasks in orchards. Automation and advanced sensing technologies can provide up-to-date information that helps the farmers in orchard management. For example, the data collected from the robot sensors can provide information that helps the farmer detect tree or fruit diseases or damage, measure tree canopy volume and monitor fruit development.
Orchards are usually semi-structured environments, since trees of the same type are planted in nominally straight and parallel rows and the distances between the rows are almost equal. The organised layout of the orchard makes it a suitable and promising environment for autonomous mobile robot navigation. However, the environment is only ‘semi-structured’ because the regularity of tree and non-tree object placement is neither precise nor reliable: the orchard grows, branches fall, and so on.
The use of spatial sensors with agricultural robots has increased rapidly in recent years as their costs decline. Vision systems and laser scanners are becoming more common in outdoor agricultural applications such as tree detection, map construction, mobile robot localisation and navigation because of their ability to provide instantaneous information that can be used for feature extraction and object detection. Vision systems are low cost solutions for extracting different features (e.g. colour, edge, texture). Laser scanners are popular sensors in outdoor applications as they provide precise range and angle measurements in large angular fields at a fixed height above the ground plane. Fusing images from cameras with range data from laser scanners enables mobile robots and vehicles to more confidently perform a variety of tasks in outdoor environments (Garcia-Alegre et al., 2011).
There are differences between the data acquired from the laser scanner and the camera images. The 2D laser scanner generates a single horizontal scan of the environment with precise depth information, whereas the camera provides an instantaneous image of the local environment with no direct depth measurement. A laser scanner provides range and bearing data, while the camera primarily provides intensity and colour information. Nevertheless, there are some common features in both types of data. For example, many corners and edges correspond to both a sudden change in range along the laser scan data and a sudden variation in image intensity (Peynot and Kassir, 2010).
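This shared edge cue can be illustrated on the laser side: a sudden change in range between consecutive beams marks a candidate object boundary. The sketch below is illustrative only; the 0.3 m threshold is an assumed value, not one taken from this paper.

```python
import numpy as np

def range_discontinuities(ranges, threshold=0.3):
    """Return indices where consecutive laser range readings jump by
    more than `threshold` metres -- candidate object edges."""
    ranges = np.asarray(ranges, dtype=float)
    jumps = np.abs(np.diff(ranges))
    return np.flatnonzero(jumps > threshold) + 1

# A background at ~2 m with a trunk-like object at ~1 m:
scan = [2.0, 2.0, 1.0, 1.0, 1.0, 2.0, 2.0]
print(range_discontinuities(scan))  # → [2 5]
```

Matching such range edges against intensity edges in the image (e.g. from an edge detector) is one way the two sensing modalities can corroborate each other.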
In this work, sensors for tree detection are used on a small mobile robot platform, since such platforms are more suitable for individual plant inspection than larger systems (e.g. tractors). They have the potential to sense the environment and ensure high precision of operation, and they typically have lower energy requirements, lower environmental impact and better cost effectiveness than large agricultural vehicles (Pedersen et al., 2005). In recent years, unmanned aerial vehicles (UAVs) have shown clear potential for orchard aerial mapping and surveying. However, a small ground robot is more convenient for collecting mapping and inspection information about the lower parts of the trees, such as trunks, which typically cannot be accessed easily by a UAV; importantly, a UAV would require full manual control in such a situation.
Early work (Shalal et al., 2013) developed and evaluated a tree trunk detection algorithm using simulated tree trunks. In the present study, a real orchard environment has been used, which presents challenges, especially for detecting trees robustly. This is mainly because of the non-tree objects that might be present between the trees in the tree rows, such as posts, tree supports and animals. The core motivation of this research is to develop a robust algorithm to detect trees and discriminate between trees and non-tree objects. This is hard to achieve using a single sensor (vision only or laser scanner only), since trees and non-tree objects might share features such as width, colour and parallel edges. The contribution of this study is a new tree trunk detection algorithm using low-cost camera and laser scanner data fusion, as a component of fully automated operation for such a small mobile robot, to enhance detection capability and to discriminate between trees and non-tree objects. Fusion of data from these sensors was found to improve tree detection because the laser scanner can provide reliable ranges, angles and widths of the tree trunks and non-tree objects, whilst the vision system can distinguish between tree trunks and other objects using different features (e.g. colour, edges, etc.).
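As a minimal illustration of this complementarity (not the paper's actual classifier), a rule-based fusion check might accept an object as a trunk only when the laser-derived width is plausible and a vision-based score from the corresponding image region agrees. All names and thresholds below are hypothetical:

```python
def is_tree_trunk(width_m, trunk_colour_score,
                  width_range=(0.05, 0.40), colour_threshold=0.5):
    """Rule-based fusion sketch: the laser supplies the object width,
    the vision system a colour/texture score in [0, 1].  Thresholds
    are illustrative placeholders, not trained values."""
    plausible_width = width_range[0] <= width_m <= width_range[1]
    return plausible_width and trunk_colour_score >= colour_threshold

print(is_tree_trunk(0.20, 0.8))  # trunk-like width and colour → True
print(is_tree_trunk(0.20, 0.1))  # e.g. a post of similar width → False
```

The point of the sketch is that neither test alone suffices: a post can pass the width test and foliage can pass the colour test, but only a trunk passes both.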
This paper is organised as follows. Section 2 presents the related published work. Section 3 describes the robot platform architecture and orchard layout. Section 4 explains the proposed method for tree trunk detection using vision and laser scanner, whilst Section 5 presents the experimental results and discussion. Finally, a summary of the significant conclusions is presented. A companion paper (Shalal et al., 2015) sets out the subsequent component of automated robot operation, namely the mapping and localisation, based on the tree detection as set out in this present paper.
Related work
Tree detection in orchards and other agricultural environments has been widely reported in the literature. Laser scanners and machine vision have been used separately by many authors to detect various parts of trees such as the trunk, stem and canopy. Vision sensors alone allow the extraction of different features from the environment, such as colour, texture, shape and tree edges. Ali et al. (2008) presented a classification-based tree trunk detection method for
Mobile robot architecture
The platform used in this research is a CoroWare Explorer (CoroWare Inc., USA), which is designed to operate in outdoor conditions. It has a rugged four-wheel-drive chassis with skid steering. The robot can climb over 150 mm obstacles, allowing it to reach places that are difficult to access for robot platforms with lower clearance. It is equipped with an on-board computer running the Ubuntu Linux operating system with Robot Operating System (ROS) for building and
Tree trunk detection algorithm
A preliminary tree trunk detection algorithm, denoted here as ‘Detection Algorithm A’ (Shalal et al., 2013), was enhanced to become ‘Detection Algorithm B’. Both algorithms use laser scanner and camera data and are described in this section. The three main detection stages of each algorithm are set out in the following Sections 4.1 Laser-based tree trunk detection, 4.2 Projection of laser points on camera image, 4.3 Vision-based tree trunk detection, whilst the two algorithms are described in
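The projection stage can be sketched with a standard pinhole camera model: each laser return (range, bearing) is lifted to a 3D point in the laser frame, transformed into the camera frame, and projected through the intrinsic matrix. The calibration values below are illustrative placeholders, not the platform's actual calibration:

```python
import numpy as np

def project_laser_point(r, theta, K, R, t):
    """Project a 2D laser return (range r, bearing theta in radians)
    into pixel coordinates with a pinhole model.  (R, t) map the
    laser frame into the camera frame; K holds the intrinsics."""
    p_laser = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
    p_cam = R @ p_laser + t
    u, v, w = K @ p_cam
    return float(u / w), float(v / w)

# Illustrative intrinsics (500 px focal length, 320x240 image centre):
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Laser x (forward) → camera z, laser y (left) → camera -x,
# laser z (up) → camera -y; camera mounted 10 cm above the laser:
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = np.array([0.0, 0.1, 0.0])

print(project_laser_point(2.0, 0.0, K, R, t))  # → (320.0, 265.0)
```

With real hardware, K, R and t would come from a camera calibration and a laser-to-camera extrinsic calibration rather than being written down by hand.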
Tree trunk feature extraction
The tree trunk feature extraction was achieved by moving the mobile robot between multiple tree rows in the orchard to collect the required image-laser scan pairs for each tree trunk. A set of 100 regular tree trunks selected randomly from the orchard was used to extract the initial distribution of the regular tree trunk width. The sampling frequency of the data collection from the on-board sensors was 7 Hz. It was observed that each tree trunk was detected in a number of image-laser scan pairs (
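One quantity such image–laser scan pairs yield is the trunk width: an object seen as a contiguous cluster of laser returns subtends an angular extent from which a chord length can be computed. The following is a minimal sketch; the 0.5° angular resolution is an assumed value, not necessarily that of the scanner used here.

```python
import math

def segment_width(ranges, angular_res=math.radians(0.5)):
    """Approximate the cross-section width of an object seen as a
    contiguous cluster of laser returns: chord length from the mean
    range and the cluster's angular extent."""
    r = sum(ranges) / len(ranges)
    span = (len(ranges) - 1) * angular_res
    return 2.0 * r * math.sin(span / 2.0)

# A trunk returning 11 consecutive beams at ~1.5 m:
print(round(segment_width([1.5] * 11), 3))  # → 0.131
```

Widths measured this way over many detections of the same trunk could then be pooled to build the width distribution described above.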
Conclusion
This paper demonstrates a new method for detecting trees and discriminating them from non-tree objects, such as posts and tree supports, using camera and laser scanner data fusion. The use of both camera and laser scanner data enhanced the tree trunk detection. Projecting the laser scanner points onto the image plane and selecting the region of interest with the required features was effective, since it reduced the processing time and minimised the effect of noise in the other parts of the image. The developed
Acknowledgments
This work has been supported by the University of Southern Queensland, Australia. The authors are grateful to the owner and the manager of Blackboy Ridge farms in Gatton, Queensland for their cooperation. The senior author would like to acknowledge the Ministry of Higher Education, University of Technology in Iraq for funding the PhD scholarship.
References (18)

- et al., 2011. Optimized EIF-SLAM algorithm for precision agriculture mapping based on stems detection. Comput. Electron. Agric.
- et al., 2002. Simultaneous localization and map building using natural features and absolute information. Robot. Auton. Syst.
- Shalal et al., 2015. Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion – Part B: Mapping and localisation. Comput. Electron. Agric.
- Ali, W., Georgsson, F., Hellstrom, T., 2008. Visual tree detection for autonomous navigation in forest environment. In:...
- Bouguet, J.-Y., 2009. Camera calibration toolbox for Matlab....
- 1986. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell.
- Garcia-Alegre, M.C., Martin, D., Guinea, M.D., Guinea, D., 2011. Real-time fusion of visual images and laser data...
- Hamner, B., Singh, S., Bergerman, M., 2010. Improving orchard efficiency with autonomous utility vehicles. In: ‘2010...
- Hansen, S., Bayramoglu, E., Andersen, J., Ravn, O., Andersen, N., Poulsen, N., 2011. Orchard navigation using...