Original papers
Obstacle detection and recognition in farmland based on fusion point cloud data

https://doi.org/10.1016/j.compag.2021.106409

Highlights

  • A data acquisition platform based on 2D/3D LiDAR and GNSS/AHRS was established.

  • Software realized multi-sensor time synchronization and coordinate system registration.

  • 3D_MBRs were extracted based on OBBs and MBRs to build box models of obstacles.

  • A two-layer MCB classification method was proposed to infer obstacle categories.

Abstract

Obstacle detection and recognition in the farmland environment are of great significance to unmanned agricultural machinery navigation. In this study, a tractor platform based on 3D/2D LiDAR and GNSS/AHRS was built to acquire fusion point cloud data. A complete data processing flow is proposed, comprising fusion point cloud data acquisition, obstacle detection, and obstacle recognition. The fusion point cloud data acquisition method achieved time synchronization among multiple sensors and spatial registration of point clouds across different coordinate systems. The obstacle detection method removed ground, sparse, and ego-platform points based on RANSAC and crop-box filtering. Then, a Euclidean clustering method based on the platform width was used for obstacle clustering. The obstacle recognition method established a PCA-based box model for each cluster and extracted its geometric features. Next, an MCB classification method is proposed to distinguish false-positive, non-interest, and trackable obstacles (e.g., tractors and pedestrians). To evaluate the results of data collection and processing, a series of experiments was designed and implemented. The average absolute deviation of the fusion point cloud was 0.015 m, and the average point-loss rate was 1.24%. The proportion of residual ground points in the ground point cloud was less than 0.012%. The accuracy, precision, recall, and F1-score all exceeded 80%. Therefore, the proposed method can meet the operational requirements of the farmland environment and provide highly precise and stable detection and recognition results.
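The paper's full detection pipeline is not reproduced on this page, but the stages named in the abstract map onto standard point cloud operations. Below is a minimal sketch in Python with Open3D, using DBSCAN as a stand-in for the paper's width-based Euclidean clustering; all thresholds and parameter values are illustrative assumptions, not values from the paper.

```python
# Sketch of the detection/recognition stages described in the abstract:
# RANSAC ground removal, clustering, and a PCA-based box model per cluster.
# All numeric thresholds here are illustrative assumptions, not paper values.
import numpy as np
import open3d as o3d

def detect_obstacles(xyz: np.ndarray, platform_width: float = 1.8):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)

    # 1) Ground removal: fit the dominant plane with RANSAC, drop its inliers.
    _, ground_idx = pcd.segment_plane(distance_threshold=0.05,
                                      ransac_n=3, num_iterations=1000)
    obstacles = pcd.select_by_index(ground_idx, invert=True)

    # 2) Clustering: DBSCAN with eps tied to the platform width stands in
    #    for the paper's width-based Euclidean clustering.
    labels = np.asarray(obstacles.cluster_dbscan(eps=platform_width,
                                                 min_points=10))
    pts = np.asarray(obstacles.points)
    n_clusters = int(labels.max()) + 1 if labels.size else 0

    boxes = []
    for k in range(n_clusters):
        cluster = pts[labels == k]
        # 3) Box model: PCA of the cluster yields an oriented bounding box.
        center = cluster.mean(axis=0)
        _, eigvecs = np.linalg.eigh(np.cov((cluster - center).T))
        local = (cluster - center) @ eigvecs      # rotate into the PCA frame
        extent = local.max(axis=0) - local.min(axis=0)
        boxes.append({"center": center, "axes": eigvecs, "extent": extent})
    return boxes
```

The PCA step yields an oriented box per cluster, in the spirit of the OBB/3D_MBR box models named in the highlights; the paper's exact 3D_MBR construction may differ.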

Introduction

Agricultural machinery is important in implementing precision agriculture, as it can greatly improve labor efficiency and reduce labor intensity. Navigation technology can further improve operation accuracy and reduce driver fatigue (Li et al., 2019, Cao et al., 2021, Tang et al., 2021). With the development of science and technology, agricultural machinery navigation has passed through three development stages: auxiliary, automatic, and unmanned navigation, with positioning, control, and perception as the corresponding breakthrough technologies. Unmanned agricultural machinery navigation aims to realize efficient operation while ensuring operational safety. Therefore, obstacle detection and recognition in farmland are particularly important, and the perception of agricultural machinery state and environment information is receiving increasing research attention (He et al., 2018). A comparison of related studies based on ultrasonic sensing (Jia et al., 2015, Dvorak et al., 2016), machine vision (Ball et al., 2016, Fleischmann and Berns, 2016, Xu et al., 2021), and light detection and ranging (LiDAR) (Takahashi et al., 2015, Peng et al., 2015, Asvadi et al., 2016, Malavazi et al., 2018) indicates that LiDAR has become the preferred sensor for environmental perception because of its high precision, high frequency, and wide range.

The application of LiDAR in environment perception is common in indoor autonomous mobile robots and urban unmanned vehicles. However, the farmland environment differs considerably from indoor and urban environments. A global navigation satellite system (GNSS) can provide reliable absolute positioning in open farmland, but uneven farmland ground causes positioning errors and distorts LiDAR point cloud data. Freitas et al. (2012) of Federal University, USA built an electric vehicle for automatic navigation and obstacle detection in tree fruit orchards. They used a low-cost inertial measurement unit (IMU) to correct the point cloud of a 2D LiDAR; the point cloud was classified as terrain or obstacles by gradient, and the positions of obstacles were obtained by edge clustering. In addition, semi-structured or unstructured farmland environments lack landmark features for navigation. Thanpattranon et al. (2015) of the University of Tsukuba, Japan built an automatic navigation tractor based on artificial landmarks. To simulate the scenario of GNSS interruption, the tractor collected information through a 2D LiDAR and detected the nearest landmarks through geometric and point features; a point-to-go algorithm then generated the navigation path by connecting the nearest landmarks. Moreover, weeds are common in farmland, which increases the challenge of detecting ground obstacles. Benet et al. (2017) of IRSTEA, France built an agricultural robot for crop-row tracking and traversability operations. They used a 2D LiDAR and a time-of-flight (ToF) camera to detect far and near obstacles, respectively, and a color camera to distinguish weeds from solid obstacles with a support vector machine (SVM). Furthermore, farmland operation involves complex working conditions, so the categories and movement states of obstacles are difficult to predict in advance. Liu et al. (2018) of Northwest A&F University, China built an electric vehicle platform for autonomous movement in unstructured environments, equipped with a compass, an IMU, and a 2D LiDAR. The point cloud data were processed with a voxel grid filter and Euclidean clustering for obstacle detection, and the absolute position of each obstacle over time was analyzed to distinguish static from dynamic obstacles.
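As a concrete illustration of the gradient-based terrain/obstacle split that Freitas et al. describe, a scan-ordered height profile can be thresholded on local slope. This is a minimal sketch under assumed conditions; the 15-degree limit and the (distance, height) profile layout are hypothetical, not details from their paper.

```python
# Illustrative gradient test in the spirit of a terrain-vs-obstacle split:
# flag scan points whose local slope exceeds a threshold. The 15-degree
# limit and the ordered (distance, height) profile are assumptions.
import numpy as np

def slope_classify(profile: np.ndarray, max_slope_deg: float = 15.0):
    """profile: (N, 2) array of (ground distance, height), scan-ordered."""
    d = np.diff(profile, axis=0)                       # steps along the scan
    slope = np.abs(d[:, 1]) / np.maximum(np.abs(d[:, 0]), 1e-6)
    is_obstacle = np.zeros(len(profile), dtype=bool)
    is_obstacle[1:] = slope > np.tan(np.radians(max_slope_deg))
    return is_obstacle
```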

The above research indicates that a GNSS/IMU system can improve positioning accuracy and LiDAR point cloud quality independently of environmental landmark features. Furthermore, limited by the point cloud distribution of 2D LiDAR, many studies used color images to recognize obstacle categories.

Three-dimensional (3D) LiDAR can provide abundant point cloud data. Thus, some scholars have proposed obstacle detection and recognition methods based on the fusion of 3D LiDAR and machine vision. Reina et al. (2016) of the University of Salento, Italy developed an obstacle detection and classification platform that could be mounted on a tractor or an off-road land vehicle. Stereovision and 3D LiDAR detected obstacles independently, and the results were then fused on the basis of their uncertainties to obtain the final classification (obstacle or ground). Kragh et al. (2017) of Aarhus University, Denmark constructed the FieldSAFE dataset for obstacle detection in agriculture. The dataset contains two hours of data collected on a grass field with multiple sensors, including stereo, web, thermal, and panoramic cameras, LiDAR, and radar; GNSS and IMU were used as proprioceptive sensors, and various static and dynamic obstacles were annotated with ground truth labels and geographic coordinates. Korthals et al. (2018) of Bielefeld University, Germany presented a multi-modal obstacle and environment detection and recognition approach for process evaluation in agricultural fields based on the FieldSAFE dataset. For each sensor modality, an inverse sensor model (ISM) with an algorithm for detecting several object categories was established, and the results were transformed into a 2D occupancy grid map (OGM) representation in the local sensor frame. The fusion algorithm fused all OGMs and matched them with the global map according to location information. Kragh and Underwood (2020) of Aarhus University, Denmark combined appearance-based and geometry-based detection by probabilistically fusing 3D LiDAR and panoramic camera sensing with semantic segmentation using a conditional random field (CRF). They used initial 2D and 3D classifiers to generate class-specific heat maps for a synchronized camera image and a LiDAR point cloud individually; the 3D point cloud was projected onto the 2D image, and the CRF fused the two information sources spatially, temporally, and across the two modalities.
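The projection step in Kragh and Underwood's fusion, which maps LiDAR points into the camera image so per-pixel and per-point class scores can be combined, reduces to a standard pinhole camera model. A hedged sketch follows; the intrinsics K and extrinsics (R, t) are placeholder calibration values that a real system would obtain from LiDAR-camera calibration.

```python
# Sketch of the 3D-to-2D projection step used when fusing LiDAR with a
# camera: transform points into the camera frame, then apply the pinhole
# model. K, R, and t are placeholder calibration values for illustration.
import numpy as np

def project_to_image(points_lidar, K, R, t):
    """points_lidar: (N, 3) in the LiDAR frame; returns (N, 2) pixel
    coordinates and a mask of points in front of the camera."""
    cam = points_lidar @ R.T + t                  # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0                      # keep points ahead of lens
    uv = cam @ K.T                                # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide
    return uv, in_front
```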

According to the above-mentioned studies, 2D LiDAR can detect obstacles but can hardly recognize their specific categories; obstacle classification is mostly realized by fusing 3D LiDAR and machine vision. However, most current fusion methods use top-level fusion, which cannot make full use of the advantages of machine vision for obstacle classification. Therefore, using 3D point cloud data without color images is significant for obstacle detection and recognition in unstructured farmland. This paper aims to build a data acquisition platform based on 2D/3D LiDAR and a GNSS/AHRS (attitude and heading reference system), design software for obtaining the platform status and the farmland environment point cloud, and thereby achieve obstacle detection and recognition in farmland.

Section snippets

Materials of data acquisition platform

The data acquisition platform was built on a LOVOL Euro-leopard M904-D tractor and consists of three modules: a point cloud data acquisition module, a vehicle position and posture acquisition module, and a data fusion module. The schematic of the data acquisition platform is illustrated in Fig. 1, and the framework is depicted in Fig. 2.
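The data fusion module's registration task (the "spatial registration of point clouds across different coordinate systems" in the abstract) amounts to chaining rigid transforms: sensor frame to vehicle frame via the fixed mounting, then vehicle frame to world frame via the AHRS attitude and GNSS position. A minimal sketch, assuming a ZYX Euler convention and placeholder mounting values:

```python
# Sketch of registering LiDAR points into the world frame using the AHRS
# attitude (roll/pitch/yaw, ZYX convention assumed) and the GNSS position.
# Mounting rotation/offset values are placeholders, not paper values.
import numpy as np

def euler_zyx(roll, pitch, yaw):
    """Rotation matrix from ZYX (yaw-pitch-roll) Euler angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def sensor_to_world(p_sensor, R_mount, t_mount, rpy, t_gnss):
    """p_sensor: (N, 3) points in the sensor frame; returns world-frame points."""
    p_vehicle = p_sensor @ R_mount.T + t_mount    # sensor -> vehicle frame
    R_world = euler_zyx(*rpy)                     # vehicle attitude from AHRS
    return p_vehicle @ R_world.T + t_gnss         # vehicle -> world frame
```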

The point cloud data acquisition module includes a 3D LiDAR, which is mounted on the roof of the platform, and a 2D LiDAR, which is mounted in the front of the platform. The

Experimental design

A series of experiments was designed to evaluate the results of data collection and processing. Two typical scenes, a flat road and farmland ground, were selected to simulate different working environments. The data acquisition platform was tested in static and moving states to simulate different working states. Harvesters, electric tricycles, pedestrians, and cartons were used to simulate typical obstacles. A typical static acquisition scene with carton obstacles is shown in Fig. 12.

Evaluation of point cloud data collection

To

Conclusions

To address the uneven ground in farmland, an acquisition platform was built to collect fusion point cloud data, and an obstacle detection and recognition method was proposed. Data acquisition experiments with multiple kinds of obstacles in different scenes and motion states verify that the acquisition platform and the data processing method can meet the needs of complex farmland environments. On the basis of the experimental analysis above, the following conclusions can be summarized:

  • (1)

    A

CRediT authorship contribution statement

Yuhan Ji: Conceptualization, Methodology, Software. Shichao Li: Resources, Investigation. Cheng Peng: Formal analysis, Investigation. Hongzhen Xu: Validation, Investigation. Ruyue Cao: Data curation, Investigation. Man Zhang: Supervision, Project administration, Funding acquisition.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

This work was supported by the National Key Research and Development Program of China (Grant No. 2019YFB1312300-2019YFB1312305), the National Key Research and Development Program of China (Grant No. 2017YFD0700400-2017YFD0700403), the National Natural Science Foundation of China (Grant No. 31571570), and the CAU special fund to build world-class university (in disciplines) and guide distinctive development (Grant No. 2021AC006).

References (28)

  • D. Ball et al. Vision-based obstacle detection and navigation for an agricultural robot. J. Field Rob. (2016)

  • J. Behley et al. Efficient radius neighbor search in three-dimensional point clouds. Proc. IEEE Int. Conf. Robot. Automat. (2015)

  • H. Boulaassal et al. Automatic segmentation of building facades using... (2007)

  • J.S. Dvorak et al. Object detection for agricultural and construction environments using an ultrasonic sensor. J. Agric. Saf. Health (2016)