Exploiting affine invariant regions and leaf edge shapes for weed detection
Introduction
Sugar beet (Beta vulgaris) is among the world’s most important crops, with an estimated 278 million tonnes of global production (FAOSTAT, 2011). Creeping thistle (Cirsium arvense (L.) Scop.) is an invasive weed species and one of the biggest threats to sugar beet, as 5–6 plants/m² can halve the crop yield (Miller et al., 1994). Because the weed is becoming increasingly frequent (Andreasen and Stryhn, 2012), treating thistles requires large quantities of herbicides. Indiscriminate use of chemicals, however, is detrimental to the environment, as they end up contaminating groundwater. Site-Specific Weed Management is therefore becoming the focus of future farming technologies (Christensen et al., 2009, Lopez-Granados, 2011).
In Kazmi et al. (2015), we investigated the potential of color imaging for the detection of creeping thistles in sugar beet fields. A high detection accuracy (up to 97%) was achieved, since the two species show a noticeable separation in the visible spectrum. Relying on color alone, however, limits the scope of the system: variation in outdoor illumination affects the colors perceived by the cameras, and the inclusion of a second weed species with color characteristics closer to the crop may compromise performance. Therefore, in order to increase the robustness of the weed detection system and to allow the flexibility of including more species, shape features are indispensable. Involving shape employs a broader set of discriminating features, much as the human visual system does. Since plant canopies are generally composed of leaves, leaf shapes for weed detection are explored in this article.
Leaf shapes have been widely used for plant classification.
Agarwal et al. (2006) and Ling and Jacobs (2009) introduced the Inner-Distance based Shape Context (IDSC) for leaves, comparing distances between selected points on a leaf boundary, somewhat similar to the Shape Context (SC) of Belongie et al. (2002). Leaf shape identification by multi-scale triangular representations was more recently introduced by Mouine et al. (2013). Kumar et al. (2012) developed a mobile phone app, LeafSnap, for leaf recognition using leaf curvatures.
In general, these algorithms extract global features and therefore require isolated leaf images with plain or homogeneous backgrounds, such as those in publicly available databases (Pl@ntNet, Swedish Leaves (Söderkvist, 2001) or the Smithsonian databases (Belhumeur et al., 2008)). For example, LeafSnap, a state-of-the-art tool for leaf recognition, rejects images with non-plain backgrounds (Kumar et al., 2012).
This demands controlled imaging and sometimes a destructive analysis of the plants. The situation in agricultural field applications, however, is quite different. In unconditioned field imaging, plants or leaves cannot be arranged for proper frontal snapshots, wind adds to the challenge, and little can be done about the background. Although strong sunlight can be diffused by introducing a shade, most other conditions cannot be avoided. Therefore, for field data, the feature set for plant recognition has mostly been limited to color or to multi- and hyper-spectral signatures.
Still, avoiding destructive analysis, some simple morphological features such as the leaf area or the waddle disk diameter have worked well for indoor systems (Golzarian and Frick, 2011). In outdoor farm applications, however, plant morphology must be measured at a very early growth stage, while the canopies are still simple (Åstrand and Baerveldt, 2002, Jeon et al., 2011). The classification problem may be reduced to only a few classes, in most cases just two (e.g., crop/weed or infected/healthy), but due to variations in plant size, water stress (color), perceived change in shape due to wind, light and occlusion, plants fall into the category of deformable objects with a wide range of intra-class variations (Campbell and Flynn, 2001). Features based on simple plant morphology may not be sufficient, as slight changes in many of the aforementioned variables can make the segmentation of plant organs difficult and may therefore require 3D information (Šeatović, 2008, Dellen et al., 2011, Alenya et al., 2013). Acquiring 3D data under outdoor conditions is constrained by the sensor technology and the processing overhead (Kazmi et al., 2014). On the other hand, applications such as weed or disease detection also require a high degree of accuracy: one missed weed or infected plant can spread out and affect several crop plants in due time, reducing the overall production.
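To make concrete how simple such morphological features are, the following minimal sketch (our own illustration with a synthetic mask, not code from Golzarian and Frick, 2011) computes the leaf area and the equivalent disk diameter from a binary segmentation mask:

```python
import math
import numpy as np

def leaf_morphology(mask, mm_per_px=1.0):
    """Area and equivalent disk diameter of a binary leaf mask.

    mask      : 2D 0/1 (or boolean) array, nonzero where the leaf is.
    mm_per_px : side length of one pixel in millimetres.
    """
    area = float(mask.sum()) * mm_per_px ** 2      # leaf area
    diameter = 2.0 * math.sqrt(area / math.pi)     # disk with the same area
    return area, diameter

# Synthetic example: a disk of radius 50 px stands in for a round leaf.
yy, xx = np.mgrid[0:128, 0:128]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 <= 50 ** 2
area, diameter = leaf_morphology(mask)
```

Such scalar measurements are exactly what becomes unreliable once leaves overlap or deform, which motivates the richer features discussed below.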
In such cases, advanced computer vision techniques, which have addressed a variety of problems in, for example, outdoor navigation, image registration, object recognition and medical imaging, hold promise. Local features detect characteristic structures in a scene, such as corners, edges or homogeneous regions, and extract a high-dimensional description of the scene content in their immediate neighborhood. By design, they are tied to the local geometry of the objects or the scene and hence are tolerant to occlusion (Tuytelaars and Mikolajczyk, 2007). The significant progress made on local features in computer vision research should therefore be taken into consideration.
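As a concrete illustration of a local feature detector, the sketch below implements the classic Harris corner response in plain NumPy (a minimal example of the general idea, not code from any of the cited works): the response is high where the local structure tensor has two large eigenvalues (corners) and low or negative along edges and flat regions.

```python
import numpy as np

def box_filter(img, k=3):
    """Separable k x k mean filter (same-size output)."""
    kernel = np.ones(k) / k
    img = np.apply_along_axis(np.convolve, 0, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    where M is the locally averaged structure tensor."""
    iy, ix = np.gradient(img.astype(float))
    sxx = box_filter(ix * ix)
    syy = box_filter(iy * iy)
    sxy = box_filter(ix * iy)
    return (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2

# A white square on a black background: the response peaks near its corners.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
response = harris_response(img)
peak = np.unravel_index(np.argmax(response), response.shape)
```

On this synthetic image the maximum response lands at a corner of the square, while the response at an edge midpoint is negative, which is the behavior that makes such detectors robust starting points for local descriptors.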
The subject species in this work have distinct edge shapes; in particular, the groovy edge of the thistle is prominent (Fig. 1). The number of grooves, however, may not always be consistent: it may change with the growth stage, and the edges may get damaged over time. Still, the fact that one species has a non-smooth edge compared to the other is a notable distinction which can be exploited.
Leaf edge shapes, or teeth, are difficult to detect automatically (Royer and Wilf, 2005, Cope et al., 2012). However, local feature detectors from computer vision, such as affine region detectors, can be used to detect regions around the edges, and shape descriptors can then record their characteristics. So, instead of counting the number or size of the grooves, we can rely on such descriptors to register the edge shapes, with the underlying hypothesis that such features are sufficient to distinguish a smooth edge (sugar beet) from a groovy or jagged one (thistle).
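The idea can be pictured with a much simpler stand-in for those descriptors: a turning-angle statistic along a sampled leaf boundary (our own toy example, not one of the descriptors evaluated in this article) already separates a smooth contour from a jagged one without ever counting grooves.

```python
import numpy as np

def turning_angles(pts):
    """Turning angle at each sampled point of a closed contour.

    pts: (n, 2) array of boundary points, ordered along the contour.
    """
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)    # edge vectors, closed
    ang = np.arctan2(d[:, 1], d[:, 0])                # heading of each edge
    turn = np.diff(np.concatenate([ang, ang[:1]]))    # change of heading
    return (turn + np.pi) % (2 * np.pi) - np.pi       # wrap to [-pi, pi)

# Smooth contour (circle) vs a jagged, 8-lobed "thistle-like" outline.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
jagged = np.c_[(1 + 0.3 * np.cos(8 * t)) * np.cos(t),
               (1 + 0.3 * np.cos(8 * t)) * np.sin(t)]

smooth_spread = np.std(turning_angles(circle))   # ~0: constant curvature
jagged_spread = np.std(turning_angles(jagged))   # clearly larger
```

A descriptor built from such boundary statistics stays meaningful even if individual grooves are damaged or their count changes with the growth stage.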
Affine invariance, though, comes at the cost of local information (Mikolajczyk et al., 2003). In the process of seeking affine invariance, a region is iteratively mapped onto an ellipse, and the shape of the boundary that contributed to the initial detection is usually lost. Therefore, we proposed a graph-based multi-scale edge shape detector, the Twin Leaf Region (TLR), which avoids affine adaptation (Kazmi and Andersen, 2015).
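The normalization step behind affine adaptation can be sketched as follows (our own illustration with a hand-picked moment matrix, not the iterative estimation of Mikolajczyk et al., 2003): the region's second-moment matrix M defines an ellipse, and transforming the patch by M^(-1/2) maps that ellipse onto the unit circle, which is exactly where the original boundary shape is given up.

```python
import numpy as np

def affine_normalize(M):
    """Return A = M^(-1/2), the transform that maps the ellipse
    x^T M^(-1) x = 1 onto the unit circle."""
    w, v = np.linalg.eigh(M)                  # M symmetric positive definite
    return v @ np.diag(1.0 / np.sqrt(w)) @ v.T

# Second-moment matrix of an anisotropic (elongated) detected region.
M = np.array([[4.0, 1.0],
              [1.0, 2.0]])

# Points on the region's ellipse: x = M^(1/2) u for unit vectors u.
w, v = np.linalg.eigh(M)
M_sqrt = v @ np.diag(np.sqrt(w)) @ v.T
t = np.linspace(0, 2 * np.pi, 100)
ellipse = M_sqrt @ np.vstack([np.cos(t), np.sin(t)])

# After normalization every boundary point lies on the unit circle.
normalized = affine_normalize(M) @ ellipse
radii = np.linalg.norm(normalized, axis=0)
```

Since every region ends up as a circle regardless of its original outline, the edge shape that triggered the detection no longer distinguishes a smooth boundary from a jagged one; the TLR avoids this step for precisely that reason.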
The objective of this article is to evaluate the potential of local features for weed detection. They are first evaluated on a public database to establish a performance baseline. Their performance on field data then highlights the complexity of the field challenges.
Data acquisition
Image acquisition is described in detail in Kazmi et al. (2015). The 474 images of sugar beet and thistle used in this study are the same as those used in Kazmi et al. (2015), where thistle detection in sugar beets was based on color vegetation indices only. Images were captured with an industrial grade camera (Model: Bumblebee XB3 by Point Grey Research) mounted on a remotely operated ground vehicle. The camera uses three progressive scan CCDs. One of the three cameras was used, and the images
Foliage database retrieval
Biological and environmental factors, such as the age of the plant and water stress, affect the greenness of the leaves. As the Swedish Leaf database contains images of plant leaves which were cut from the trees and scanned on a flatbed scanner (Söderkvist, 2001), this destructive analysis changes the leaf color. Therefore, the CVI descriptors were not applied to the Swedish Leaf database.
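For reference, a widely used color vegetation index of the kind behind such CVI descriptors is the excess green index, ExG = 2g − r − b, computed on chromaticity-normalized channels (this is a generic sketch of a common index; the exact descriptor set used in the article is not reproduced here, and the example pixel values are invented):

```python
def excess_green(r, g, b):
    """Excess green index ExG = 2g - r - b on chromaticity-normalized RGB.

    r, g, b: raw channel values; the normalization makes the index
    largely independent of overall brightness.
    """
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2.0 * gn - rn - bn

# Vegetation pixels score high, soil pixels near zero:
leaf = excess_green(40, 180, 30)    # green leaf pixel
soil = excess_green(120, 80, 40)    # brownish soil pixel
```

Because such indices depend only on the color ratios, a change in leaf greenness, as caused by scanning cut leaves, directly shifts the index values, which is why they were excluded for this database.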
MAP values of all the other detector/descriptor combinations are reported in Table 8. As can be observed from
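MAP, the mean over queries of the average precision of each ranked retrieval list, can be computed as follows (generic retrieval-evaluation code, not the evaluation scripts used for Table 8; note that this variant averages precision over the relevant items actually retrieved):

```python
def average_precision(relevant_flags):
    """AP of one ranked result list: mean of precision@k over the
    ranks k at which a relevant item appears.

    relevant_flags: sequence of 0/1, ordered by retrieval rank.
    """
    hits, precisions = 0, []
    for rank, rel in enumerate(relevant_flags, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(runs):
    """MAP over several queries' ranked relevance lists."""
    return sum(average_precision(r) for r in runs) / len(runs)

# One query: relevant items retrieved at ranks 1 and 3.
ap = average_precision([1, 0, 1])    # (1/1 + 2/3) / 2
```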
Conclusions
In this article, a practical solution to the problem of weed detection in sugar beet fields using advanced computer vision techniques was presented. It was shown that local features based on affine invariant regions and scale invariant keypoints work well for leaf images when only small deformations of the shapes are involved, as in the foliage retrieval task using scanned leaf images. However, for an outdoor system, with biological and environmental factors in play, their efficacy was reduced.
Acknowledgments
This research was supported by the Danish Council for Strategic Research under ASETA project, Grant No. 09-067027.
References (51)
- Andreasen and Stryhn. Increasing weed flora in Danish beet, pea and winter barley fields. Crop Protect. (2012)
- Bay et al. Speeded-up robust features (SURF). Comp. Vis. Image Understand. (2008)
- Campbell and Flynn. A survey of free-form object representation and recognition techniques. Comp. Vis. Image Understand. (2001)
- Cope et al. Plant species identification using digital morphometrics: a review. Expert Syst. Appl. (2012)
- Kazmi et al. Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: analysis and comparison. ISPRS J. Photogram. Rem. Sens. (2014)
- Kazmi et al. Detecting creeping thistle in sugar beet fields using vegetation indices. Comp. Electron. Agricult. (2015)
- Kazmi and Andersen. A comparison of interest point and region detectors on structured, range and texture images. J. Vis. Commun. Image R. (2015)
- Meyer and Neto. Verification of color vegetation indices for automated crop imaging applications. Comp. Electron. Agricult. (2008)
- Åstrand and Baerveldt. An agricultural mobile robot with vision-based perception for mechanical weed control. Auton. Robots (2002)
- Agarwal et al. First steps toward an electronic field guide for plants. Taxon (2006)
- Alenyà et al. Robotized plant probing: leaf segmentation utilizing time-of-flight data. IEEE Robot. Autom. Mag. (2013)
- Belongie et al. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. (2002)
- Christensen et al. Site-specific weed control technologies. Weed Res. (2009)
- Statistical precision of information retrieval evaluation
- Freeman and Adelson. The design and use of steerable filters. IEEE Trans. Pattern Anal. Mach. Intell. (1991)
- Golzarian and Frick. Classification of images of wheat, ryegrass and brome grass species at early growth stages using principal component analysis. Plant Meth. (2011)
- Jeon et al. Robust crop and weed segmentation under uncontrolled outdoor illumination. Sensors (2011)