A modified U-Net with a specific data augmentation method for semantic segmentation of weed images in the field☆
Introduction
Weeds appear randomly in the field and compete with crops for water, nutrients, and sunlight, which has a detrimental impact on crop yield and quality (Wang et al., 2019, Hamuda et al., 2016). Researchers and farmers have made great efforts to control weeds and overcome the challenges they pose (Berge et al., 2008). Chemical weeding has been the most widely used method for weed control since the 1940s (Hamuda et al., 2016). Conventional chemical weeding sprays herbicides uniformly over the entire field, regardless of whether weeds are present, resulting in high herbicide costs. Furthermore, the overuse of herbicides in agriculture has caused severe environmental pollution (Rodrigo et al., 2014). In this situation, site-specific weed management (SSWM) was introduced. The main idea of SSWM is to spray weed patches only and/or to adjust herbicide applications according to weed density or weed species composition. Weed detection plays a critical role in SSWM: precise discrimination of weeds from crops benefits weed management, while wrongly detected weed information may cause SSWM to fail or even damage the crop (Haug et al., 2014).
In recent years, along with advances in machine vision, several weed detection methods have been developed for SSWM (Wang et al., 2019, Kamath et al., 2019, Stroppiana et al., 2018, Taghadomisaberi and Hemmat, 2015). Three-dimensional cameras, spectral cameras, and thermal cameras have been used to image fields and detect weeds (Khan et al., 2018, Lammie et al., 2019, Stroppiana et al., 2018, Kazmi et al., 2015, Ge et al., 2019, Alenya et al., 2013, Kusumam et al., 2017, Kazmi et al., 2014). However, the cost of such equipment is too high for it to be widely used in agriculture (Zhang et al., 2020). A series of studies based on computer vision and machine learning has therefore been carried out to separate weeds from crops in color images (Sabzi et al., 2018, Bakhshipour and Jafari, 2018, Wang et al., 2019, Abdalla et al., 2019). A typical image analysis pipeline for segmenting crops and weeds first extracts image features and then classifies the pixels or regions in the image as crop or weed according to these features (Abdalla et al., 2019, Ricofernandez et al., 2019). Color, shape, and texture are commonly used as individual features or in combination, and the classifiers are usually based on machine learning algorithms such as decision trees, support vector machines (SVM), and Gaussian processes. Zheng et al. (2017) segmented maize from weeds with color features, using Principal Component Analysis (PCA) to reduce noise and redundant data; the highest accuracy, 93.87%, was achieved by an SVM classifier. Zou et al. (2019) developed an algorithm based on an SVM classifier combined with color-texture features that segments broccoli seedlings from weeds with 90% accuracy. Under ideal conditions and at specific plant growth stages, these methods provide high classification accuracy, in the range of 80–98% (Lottes et al., 2018).
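The color-feature end of such pipelines is often built on simple color indices. As a minimal, hypothetical sketch (not the cited studies' exact method), the widely used excess-green (ExG) index separates vegetation from soil; the threshold value here is an illustrative assumption:

```python
import numpy as np

def exg_vegetation_mask(rgb, threshold=0.1):
    """Segment vegetation with the excess-green index ExG = 2g - r - b,
    computed on chromatic (sum-normalized) coordinates."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0          # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    exg = 2.0 * g - r - b
    return exg > threshold           # boolean vegetation mask

# Toy 1x2 image: one green (plant-like) pixel, one gray (soil-like) pixel
img = np.array([[[40, 180, 30], [120, 110, 100]]], dtype=np.uint8)
mask = exg_vegetation_mask(img)
# mask → [[True, False]]
```

A downstream classifier (e.g. an SVM over color-texture features) would then decide which vegetation pixels are crop and which are weed.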
However, the performance of these methods is affected by complex and diverse factors, including weed density, weed distribution, lighting conditions, occlusion or overlapping of crop and weed leaves, and different plant growth stages. An efficient, automated, and robust algorithm is therefore urgently needed to handle such complex and diverse situations (Abdalla et al., 2019).
Deep learning has made great progress in computer vision thanks to convolutional neural networks (CNNs) (He et al., 2016), and these techniques are also widely used in image processing for agriculture (Kamilaris and Prenafetaboldu, 2018, Chen et al., 2019, Yang et al., 2019, Fuentes et al., 2017). CNNs have been widely applied to plant classification, identification, and segmentation (Kamilaris and Prenafetaboldu, 2018, Kalin et al., 2019). You et al. (2020) proposed a sugar beet/crop segmentation network based on a deep neural network that performed well on segmentation tasks, with a highest IoU of 89.01%. Yu et al. (2019) reported several deep convolutional neural networks that accurately detect weeds growing in bermudagrass, with an F1 score of 0.95. The research mentioned above demonstrates the value of CNNs for weed segmentation. However, training CNNs for semantic segmentation usually requires a large number of images along with labels (Brostow et al., 2009). Unfortunately, providing per-pixel class labels for images acquired from a field with heavy weed occlusion and environmental variability is a difficult task (Deng et al., 2018, Kemker et al., 2018, Pan et al., 2017). Transfer learning is one way to reduce the amount of labeled data needed (Abdalla et al., 2019): the weights of a pre-trained network are transferred to the target network and fine-tuned on new image datasets. Transfer learning makes training networks easier and faster than starting from randomly initialized weights. To reduce the difficulty of training deep neural networks by transfer learning, two criteria guide the selection of pre-training samples: first, the pre-training samples should be as similar as possible to the target samples; second, their labels should be easy to produce.
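The weight-transfer idea can be illustrated without any deep-learning framework. The following is a minimal NumPy sketch, not the paper's network: a logistic regression "pre-trained" on a large source dataset whose weights initialize fine-tuning on a few target samples with a similar decision rule (all data and hyperparameters here are illustrative assumptions):

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression. Passing a `w` carries
    pre-trained weights in, so training continues from them (fine-tuning);
    w=None starts from scratch (zero init)."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)        # average gradient step
    return w

rng = np.random.default_rng(0)
# Pre-training task: large, easy-to-label source dataset
X_src = rng.normal(size=(500, 2))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(float)
w_pre = train_logreg(X_src, y_src)

# Fine-tuning task: only 20 labeled target samples, similar decision rule
X_tgt = rng.normal(size=(20, 2))
y_tgt = (X_tgt[:, 0] + 0.8 * X_tgt[:, 1] > 0).astype(float)
w_ft = train_logreg(X_tgt, y_tgt, w=w_pre.copy(), lr=0.01, epochs=50)

acc = np.mean(((X_tgt @ w_ft) > 0) == y_tgt)    # accuracy on target data
```

Because the pre-trained weights already point near the target decision boundary, a short, low-learning-rate fine-tuning pass suffices; this mirrors the two criteria above (similar samples, cheap labels).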
Transfer learning and fine-tuning have been widely used in a great number of applications, including plant species identification, plant disease detection, and weed detection (Dyrmann et al., 2016, Ferreira et al., 2017, Barbedo, 2018, Picon et al., 2019). For example, Abdalla et al. (2019) segmented oilseed rape and weeds in images with a convolutional neural network trained by transfer learning and fine-tuning, achieving 96% semantic segmentation accuracy. However, because it is a crop-targeted segmentation algorithm, the network can only segment weeds in oilseed rape fields; it cannot be applied to other crops such as maize, wheat, or rice. This drawback limits the use of this kind of algorithm.
Green bristlegrass is one of the most common weeds (Bennetzen et al., 2012) and among the most harmful to crops (Bogdanov et al., 2016). This study aimed to develop a weed-targeted image segmentation method based on deep learning for segmenting green bristlegrass in different crop fields. The algorithm can quickly segment green bristlegrass in images and could be deployed on weeding robots to guide the precise spraying of herbicides.
Section snippets
Imaging system
The field images used in this study were obtained by a caterpillar-vehicle-based imaging system, as shown in Fig. 1. The caterpillar vehicle is a moving platform. The computing platform of the system was a Raspberry Pi 4B, which controlled the camera and read images from the camera (RERVISION USB8MP02G, China) and data from the POS sensor (WIT-IMU, WIT-MOTION Technology Co., China). The camera was an RGB camera. The POS sensor provided both geographic location and attitude information…
Results of two stage training
The training process using only the fine-tuning set is shown in Fig. 6. The accuracy and loss on the training set already differed from those on the validation set within 50 iterations, and as training progressed this gap gradually widened: the validation loss stopped declining and the validation accuracy stopped increasing. This indicated that the network had overfitted and that the number of training samples was too small, so data augmentation was needed. Fig. 7…
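The symptom described here, validation loss plateauing while training continues, is commonly monitored with early stopping. The following is a minimal, framework-free sketch of that monitoring logic (not from the paper; the patience value is an illustrative assumption):

```python
class EarlyStopping:
    """Signal a stop when the validation loss has not improved
    for `patience` consecutive checks."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one validation loss; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss      # validation loss improved
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1      # no improvement this check
        return self.bad_epochs >= self.patience

# Simulated run: loss falls for three checks, then plateaus (overfitting onset)
stopper = EarlyStopping(patience=3)
losses = [0.9, 0.6, 0.4, 0.41, 0.42, 0.40, 0.43]
stopped_at = next(i for i, l in enumerate(losses) if stopper.step(l))
# stopped_at → 5 (three checks after the best loss at index 2)
```

Early stopping limits wasted epochs once overfitting begins, but it does not fix the underlying shortage of training samples, which is why augmentation was pursued.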
Conclusions
In this paper, an imaging system was used to collect field images. By modifying U-Net, a network structure better suited to weed segmentation in images was designed. To address the difficulty of data labeling and the shortage of training data, a data augmentation method based on foreground and background was designed. The neural network was trained in a pre-training stage and a fine-tuning stage, yielding a weed image segmentation method. Through the analysis of the…
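Foreground/background augmentation of the kind described above generally amounts to mask-based compositing: labeled weed pixels are cut out and pasted onto new backgrounds, producing extra image/label pairs. The sketch below is a hypothetical minimal version of that idea, not the paper's exact procedure:

```python
import numpy as np

def composite(foreground, mask, background, dx=0, dy=0):
    """Paste masked foreground pixels (e.g. a labeled weed patch) onto a
    background at offset (dy, dx); returns the new image and its new mask,
    so each synthetic image comes with a per-pixel label for free."""
    out = background.copy()
    new_mask = np.zeros(background.shape[:2], dtype=bool)
    ys, xs = np.nonzero(mask)                 # foreground pixel coordinates
    ys2, xs2 = ys + dy, xs + dx               # shifted target coordinates
    keep = ((ys2 >= 0) & (ys2 < out.shape[0]) &
            (xs2 >= 0) & (xs2 < out.shape[1]))  # clip to image bounds
    out[ys2[keep], xs2[keep]] = foreground[ys[keep], xs[keep]]
    new_mask[ys2[keep], xs2[keep]] = True
    return out, new_mask

# Toy example: a 2x2 "weed" patch (3 labeled pixels) pasted onto 4x4 "soil"
fg = np.full((2, 2, 3), 200, dtype=np.uint8)
fg_mask = np.array([[True, True], [True, False]])
bg = np.full((4, 4, 3), 80, dtype=np.uint8)
img, m = composite(fg, fg_mask, bg, dx=1, dy=2)
```

Varying the offset (and, in practice, rotation and scale) over many backgrounds multiplies a small set of labeled foregrounds into a large training set without any extra manual labeling.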
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References (52)
- Fine-tuning convolutional neural network with transfer learning for semantic segmentation of ground-level oilseed rape images in a field with high weed pressure. Computers and Electronics in Agriculture (2019).
- Infield oilseed rape images segmentation via improved unsupervised learning models combined with supreme color features. Computers and Electronics in Agriculture (2019).
- Evaluation of support vector machine and artificial neural networks in weed detection using shape features. Computers and Electronics in Agriculture (2018).
- Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Computers and Electronics in Agriculture (2018).
- Semantic object classes in video: A high-definition ground truth database. Pattern Recognition Letters (2009).
- Multi-scale object detection in remote sensing imagery with convolutional neural networks. ISPRS Journal of Photogrammetry and Remote Sensing (2018).
- Plant species classification using deep convolutional neural network. Biosystems Engineering (2016).
- A survey of image processing techniques for plant extraction and segmentation in the field. Computers and Electronics in Agriculture (2016).
- Defoliation estimation of forest trees from ground-level images. Remote Sensing of Environment (2019).
- Deep learning in agriculture: A survey. Computers and Electronics in Agriculture (2018).
- Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: analysis and comparison. ISPRS Journal of Photogrammetry and Remote Sensing.
- Exploiting affine invariant regions and leaf edge shapes for weed detection. Computers and Electronics in Agriculture.
- Algorithms for semantic segmentation of multispectral remote sensing imagery using deep learning. ISPRS Journal of Photogrammetry and Remote Sensing.
- Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild. Computers and Electronics in Agriculture.
- A contextualized approach for segmentation of foliage in different crop species. Computers and Electronics in Agriculture.
- A fast and accurate expert system for weed identification in potato crops using metaheuristic algorithms. Computers in Industry.
- A review on weed detection using ground-based machine vision and image processing techniques. Computers and Electronics in Agriculture.
- Deep convolutional neural networks for rice grain yield estimation at the ripening stage using UAV-based remotely sensed images. Field Crops Research.
- A DNN-based semantic segmentation for detecting weed and crop. Computers and Electronics in Agriculture.
- Deep learning for image-based weed detection in turfgrass. European Journal of Agronomy.
- Counting of grapevine berries in images via semantic segmentation using convolutional neural networks. ISPRS Journal of Photogrammetry and Remote Sensing.
- Maize and weed classification using color indices with support vector data description in outdoor fields. Computers and Electronics in Agriculture.
- Detection of ground straw coverage under conservation tillage based on deep learning. Computers and Electronics in Agriculture.
- Robotized plant probing: Leaf segmentation utilizing time-of-flight data. IEEE Robotics & Automation Magazine.
- Reference genome sequence of the model plant Setaria. Nature Biotechnology.
- Evaluation of an algorithm for automatic detection of broad-leaved weeds in spring cereals. Precision Agriculture.
☆ This research was funded by National Key Research and Development Project of China (2019YFB1312303).