Pattern Recognition

Volume 48, Issue 7, July 2015, Pages 2119-2128

Registration of thermal and visible light images of diseased plants using silhouette extraction in the wavelet domain

https://doi.org/10.1016/j.patcog.2015.01.027

Highlights

  • Silhouette-based registration of thermal/visible light images of diseased plants.

  • Novel silhouette extraction method based on stationary wavelet transform – SWT.

  • High accuracy of SWT-based method compared to ground truth silhouettes.

  • High accuracy of silhouette-based registration on several diseased plant images.

Abstract

The joint analysis of thermal and visible light images of plants can help to increase the accuracy of early disease detection. Registration of thermal and visible light images is an important pre-processing operation to perform this joint analysis correctly. In the case of diseased plants, registration using common methods based on mutual information is particularly challenging since the plant texture in the thermal image significantly differs from the corresponding texture in the visible light image. Registration methods based on silhouette extraction are therefore more appropriate. This paper proposes an algorithm for registration of thermal and visible light images of diseased plants based on silhouette extraction. The algorithm is based on a novel multi-scale method that employs the stationary wavelet transform to extract the silhouette of diseased plants in thermal images, in which common gradient-based methods usually fail due to the high noise content. Experimental results show that silhouettes extracted using this method can be used to register thermal and visible light images with high accuracy.

Introduction

Thermal imaging may assist in the early detection of disease and stress in plants and canopies and thus allow for the design of timely control treatments [1], [2]. Various studies show that the temperature information captured in thermal images of plants may be affected by several factors, such as the amount of incident sunlight, the leaf angles and the distance between the thermal camera and the plant [3], [4]. Information about the effect of these factors can be obtained by using a stereo visual and thermal imaging setup [5], [6]. Therefore, early disease detection accuracy may be increased by performing a joint analysis of temperature data from thermal images and imaging data from visible light images [7], [8], [9]. Thermal and visible light images are usually captured using different types of sensors, from different viewpoints and with different resolutions. As a pre-processing step before joint analysis, thermal and visible light images of plants must be aligned so that the pixel locations in both images correspond to the same physical locations in the plant.

To the best of our knowledge, there is no existing literature on automatic registration of thermal and visible light images of diseased plants; however, researchers have previously registered thermal and colour images manually for multi-modal image analysis of plants [8]. Automatic registration of thermal and visible images of diseased plants is challenging because texture information is mismatched between the two modalities and edge information present in one image is often missing in the corresponding visible/thermal image. The reason for this mismatch is that the thermal profile of a leaf in a diseased plant can show symptoms of disease before they visibly appear. In other words, a leaf with a smooth green profile (colour) in the visible light image may have a textured profile in the thermal image, with a temperature higher or lower than that of the surrounding environment, because of changes in the plant that only become visible at a later stage.

Infrared thermal imaging has previously been employed in video surveillance, e.g., traffic monitoring, airport security, detection of concealed weapons, smoke detection and patient monitoring [4], [10], [11], [12]. One approach to registration is to calibrate the stereo visual + thermal camera setup and use transformations to align the resulting images [13], [14], [15]. One disadvantage of this approach is that the calibration parameters of the cameras may not be readily available. In such cases, a possible solution is to align the thermal and visible light images using exclusively image-based information. Various researchers have proposed methods that use line, edge and gradient information to register thermal and visible images of scenes with strong edge and gradient content [16], [17], [18], [19]. In general, line, edge and corner based methods are reliable for images of man-made environments; however, they perform poorly on images of natural objects. Jarc et al. [20] proposed a registration method based on texture features; however, the method is not automatic and requires manual selection of features. Other methods based on mutual information and cross-correlation of image patches rely on texture similarities between the two kinds of images [15], [19], [21]. Since there is a high probability that texture information is missing in the corresponding visible/thermal image(s) of diseased plants, methods based on mutual information and cross-correlation may not be a good choice for registration.

Region-based methods, such as those based on silhouette extraction, usually provide more reliable correspondence between visible and thermal images than feature-based methods [11], [21], [22]. Bilodeau et al. [21] proposed registering thermal and visible images of people by extracting features from human silhouettes. Torabi et al. [23] suggested a RANSAC trajectory-to-trajectory matching based registration method that maximizes human silhouette overlap in video sequences. Han et al. [12] proposed a hierarchical genetic algorithm (HGA) for silhouette extraction within an automatic registration method for human movement detection; the authors improve the accuracy of the extracted human silhouette by combining silhouette and thermal/colour information from coarsely registered thermal and visible images. Human body temperature is generally higher than that of the background, and this characteristic has been used in [11], [22] to extract human silhouettes. However, the temperature profile of thermal images of diseased plants does not exhibit this characteristic: within the same plant, the temperature of different regions may be higher or lower than that of the background. Another common method for silhouette extraction in video sequences is background subtraction. This method usually provides very good results because of the high frame rate of the sequences and the fact that the background between two consecutive frames is usually very similar. For images of diseased plants, background subtraction is not effective due to the limited number of consecutive still images and the potentially large time interval between them.

In this paper, we propose an algorithm for registration of thermal and visible light images of diseased plants based on silhouette extraction. The algorithm features a novel multi-scale method for silhouette extraction of plants in thermal images. An overview of the proposed algorithm is shown in Fig. 1. For the visible light images, the algorithm uses the strength of edges/gradients to detect and extract the silhouette, whereas for the thermal images it uses a method based on the stationary wavelet transform (SWT). The latter follows a multi-scale approach that first estimates the silhouette at coarse scales using the curvature strength computed from the Hessian matrix of coefficients at each pixel location, and then uses these estimates to refine the silhouette at finer scales. After silhouette extraction, the algorithm employs a rigid + non-rigid registration method based on the non-rigid method proposed by Rueckert et al. [24] to register the thermal and visible light images. The remainder of the paper is organised as follows. Section 2 describes the image acquisition process. Section 3 presents the proposed SWT-based method. Section 4 describes the rigid + non-rigid registration method and Section 5 presents the experimental results.
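
As an illustration of the coarse-scale step described above, the sketch below decomposes a thermal image with the SWT and thresholds a Hessian-based curvature strength to obtain an initial silhouette estimate. This is a minimal sketch rather than the authors' implementation: the Haar wavelet, the number of decomposition levels, the threshold quantile and the use of PyWavelets are assumptions made for illustration only.

```python
# Minimal sketch (illustrative assumptions: Haar wavelet, 3 levels,
# 75th-percentile threshold, PyWavelets for the SWT) of a coarse-scale
# silhouette estimate; a full pipeline would refine it at finer scales.
import numpy as np
import pywt


def curvature_strength(c):
    """Largest-magnitude eigenvalue of the 2x2 Hessian of c at each pixel."""
    gy, gx = np.gradient(c)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    tr = hxx + hyy
    det = hxx * hyy - hxy * hxy
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    return np.maximum(np.abs(tr / 2.0 + disc), np.abs(tr / 2.0 - disc))


def coarse_silhouette(thermal, wavelet="haar", levels=3, quantile=0.75):
    # The SWT is undecimated, so coefficients at every level keep the image
    # size and stay aligned pixel-for-pixel with the input. Image dimensions
    # must be a multiple of 2**levels (pad beforehand if necessary).
    coeffs = pywt.swt2(thermal.astype(float), wavelet, level=levels)
    cA_coarse = coeffs[0][0]  # coarsest approximation (PyWavelets >= 1.0 ordering)
    strength = curvature_strength(cA_coarse)
    return strength > np.quantile(strength, quantile)
```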

Section snippets

Image acquisition

An experimental setup was designed and developed at the Department of Computer Science, University of Warwick, UK, to simultaneously acquire visual and thermal images of diseased/healthy plants. The setup consisted of two visible light cameras (Canon PowerShot S100) and a thermal imaging camera (Cedip Titanium). The setup was used to image tomato plants infected with the fungus Oidium neolycopersici, which causes powdery mildew disease. 10⁶ conidia/ml and various control treatments were

Thermal image

Extraction of plant silhouettes from thermal images obtained in our experiments is a difficult step because of high noise content, and thus common methods based on gradient information usually fail. Since thermal images were obtained from diseased plants inoculated with powdery mildew, the intensity of the thermal profile changes within leaves. Fig. 2(c) shows an enhanced (by truncating the lower and upper 1% of pixel values and by contrast stretching) thermal image of a diseased plant where
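
For reference, the enhancement mentioned above (truncating the lower and upper 1% of pixel values, then contrast stretching) can be sketched as follows; the function name and the [0, 1] output range are illustrative choices, not the authors' code.

```python
# Minimal sketch of the display enhancement: clip the lower and upper 1% of
# pixel values and stretch the remaining range to [0, 1] for visualisation.
import numpy as np


def enhance_thermal(img, lower_pct=1.0, upper_pct=99.0):
    lo, hi = np.percentile(img, [lower_pct, upper_pct])
    clipped = np.clip(img.astype(float), lo, hi)  # truncate outlying pixel values
    return (clipped - lo) / (hi - lo + 1e-12)     # contrast stretch
```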

Registration

The goal of registration is to align the thermal and visible light images in such a way that the same pixel locations in both images correspond to the same physical locations in the plant. Our registration method is a two-step process: rigid registration followed by non-rigid registration.

In rigid registration, a similarity transformation is parameterised by four degrees of freedom. A general similarity transformation for a 2D image can be written as

$$\begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = S \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix},$$

where S is the scaling factor, α is the rotation angle and (t_x, t_y) is the translation.
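
As a small worked example, the sketch below applies the four-parameter similarity transformation above to a set of 2D points; the function name and the example parameter values are illustrative.

```python
# Minimal sketch of the 4-DOF similarity transformation: scale S, rotation
# angle alpha, and translations t_x, t_y applied to (x, y) points.
import numpy as np


def similarity_transform(points, S, alpha, tx, ty):
    """points: (N, 2) array of (x1, y1) coordinates; returns (x2, y2)."""
    R = np.array([[np.cos(alpha), -np.sin(alpha)],
                  [np.sin(alpha),  np.cos(alpha)]])
    return S * (points @ R.T) + np.array([tx, ty])


# Example (illustrative values): rotate by 10 degrees, scale by 1.05, shift by (3, -2).
pts = np.array([[10.0, 20.0], [15.0, 25.0]])
aligned = similarity_transform(pts, S=1.05, alpha=np.deg2rad(10.0), tx=3.0, ty=-2.0)
```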

Results

In this section, we first show that registration of thermal and visible images of diseased plants using silhouette extraction performs better than registration using exclusively intensity values (see Fig. 5). To this end, we computed the mutual information of a pair of registered thermal and visible light images. Mutual information is a similarity metric commonly used for registration of multi-modal images [40]. We first converted the visible light image to a grayscale image. We then computed the
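
For completeness, mutual information between a registered image pair can be estimated from a joint histogram as sketched below; the bin count is an illustrative assumption.

```python
# Minimal sketch: mutual information of two registered images computed from
# their joint histogram (64 bins is an illustrative choice).
import numpy as np


def mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```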

Conclusions

In this paper we proposed an algorithm for registration of thermal and visible light images of diseased plants using silhouette extraction. The main novelty of the algorithm is a multi-scale method based on the stationary wavelet transform (SWT) capable of extracting the silhouettes of diseased plants with high accuracy. Our proposed algorithm employs a gradient-based method to extract the plant silhouette in visible light images and the multi-scale SWT-based method to extract the plant

Conflict of interest

None.

Acknowledgements

S.E.A. Raza is funded by the Horticultural Development Company (HDC) (Grant no. CP60a) and the Department of Computer Science, University of Warwick. The authors would like to thank Prof. David Epstein for his valuable comments and suggestions in this work. The authors would also like to thank the Engineering and Physical Sciences Research Council (EPSRC) (EPSRC Loan Pool request No 3497) for providing the thermal imaging camera Cedip Titanium.

References (40)

  • S.-h. Chen et al., Fusing remote sensing images using à trous wavelet transform and empirical mode decomposition, Pattern Recognit. Lett. (2008)
  • L. Chaerle et al., Presymptomatic visualization of plant–virus interactions by thermography, Nature (1999)
  • M. Stoll et al., Thermal imaging as a viable tool for monitoring plant stress, Int. J. Vine Wine Sci. (2007)
  • X. Ju et al., 3D thermography imaging standardization technique for inflammation diagnosis
  • Y. Song, R. Wilson, R. Edmondson, N. Parsons, Surface modelling of plants from stereo images, in: Sixth International...
  • D. Scharstein et al., A taxonomy and evaluation of dense two-frame stereo correspondence algorithms, Int. J. Comput. Vis. (2002)
  • Y. Cohen et al., Use of aerial thermal imaging to estimate water status of palm trees, Precision Agric. (2011)
  • I. Leinonen et al., Combining thermal and visible imagery for estimating canopy temperature and identifying plant stress, J. Exp. Bot. (2004)
  • H.-M. Chen et al., Imaging for concealed weapon detection: a tutorial overview of development in imaging sensors and processing, IEEE Signal Process. Mag. (2005)
  • S. Verstockt et al., Silhouette-based multi-sensor smoke detection, Mach. Vis. Appl. (2011)

    Shan-e-Ahmed Raza graduated in 2008 from the University of Engineering and Technology, Taxila, in Electrical Engineering and received an M.S. degree in Systems Engineering from the Pakistan Institute of Engineering and Applied Sciences, Islamabad, in 2010. He received his Ph.D. in Computer Science from the University of Warwick, UK, in 2014. He is currently working as a research fellow at the Department of Computer Science, University of Warwick.

    Victor Sanchez received his M.Sc. degree in 2003 from the University of Alberta, Canada, and his Ph.D. in 2010 from the University of British Columbia, Canada. He is currently an assistant professor at the Department of Computer Science, University of Warwick, UK. From 2011 to 2012 he was with the Video and Image Processing (VIP) Lab, University of California, Berkeley, as a post-doctoral researcher. Sanchez's main research interests are in multimedia coding and image analysis for medical applications. He has published over 25 technical papers in these areas and co-authored a book on Simulation of Healthcare Systems (Springer, 2012). His research has been funded by the Consejo Nacional de Ciencia y Tecnologia (CONCYT) Mexico, the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Institutes of Health Research (CIHR).

    Gillian Prince received a B.Sc. degree in Zoology from the University of Aberdeen in 1992 and an M.Sc. degree in Crop Protection from the Scottish Agricultural College in Aberdeen in 1994. She is an entomologist and microbiologist focussing on arthropod–microorganism interactions, in particular the ecology and the use of entomopathogens in sustainable farming systems. She is a research scientist in the Warwick Crop Centre (formerly Horticulture Research International), part of the School of Life Sciences, University of Warwick.

    John Clarkson received a B.Sc. degree in Agricultural Science from the University of Leeds in 1988 and a Ph.D. degree in biological control of plant pathogens from the University of Nottingham in 1993. He has worked on a wide variety of pathosystems and his research has encompassed aspects of ecology, epidemiology, disease forecasting, population biology and biological/integrated control. He is now a principal research fellow in the Warwick Crop Centre (formerly Horticulture Research International), part of the School of Life Sciences, University of Warwick.

    Nasir M. Rajpoot (Senior Member, IEEE) received his Ph.D. degree in Computer Science from the University of Warwick, UK, in 2001. He was a postgraduate research fellow in the Applied Mathematics Program at Yale University, USA, during 1998–2000. Prior to his Ph.D. degree, he obtained his first degree in Computer Science from the Zakariya University, Pakistan in 1994 and his M.Sc. degree in Systems Engineering from the Quaid-e-Azam University, Pakistan in 1996, both with the highest distinction. His group at Warwick has been internationally recognized for its research in digital pathology image analysis and computational biology. A recent focus of research in his lab has been on algorithms for computerized analysis and modeling of sub-cellular objects in multi-channel fluorescence microscopy images. Rajpoot has recently chaired several meetings in the area of histopathology image analysis (for example, CHiP@ISBI'2008, OPTIMHisE'2009, MIUA'2010, PRinHIMA'2010, HIMA@MICCAI'2011, HIMA@MICCAI'2012). He was the guest co-editor for a special issue of Machine Vision and Applications on Microscopy Image Analysis and its Applications in Biology in 2012.
