
Cognitive Systems Research

Volume 53, January 2019, Pages 31-41

Three-channel convolutional neural networks for vegetable leaf disease recognition

https://doi.org/10.1016/j.cogsys.2018.04.006

Abstract

The color information of a diseased leaf is the main basis for leaf-based plant disease recognition. To exploit this color information, a novel three-channel convolutional neural network (TCCNN) model is constructed for vegetable leaf disease recognition by combining three color components. In the model, each channel of the TCCNN is fed by one of the three color components of the RGB diseased leaf image; the convolutional features in each CNN are learned and passed to the next convolutional layer and pooling layer in turn, and the features of the three channels are then fused through a fully connected fusion layer to obtain a deep-level disease recognition feature vector. Finally, a softmax layer uses the feature vector to classify the input images into the predefined classes. The proposed method can automatically learn representative features from complex diseased leaf images and effectively recognize vegetable diseases. The experimental results validate that the proposed method outperforms state-of-the-art methods for vegetable leaf disease recognition.

Introduction

Vegetables are often threatened by various kinds of diseases. Leaf-based vegetable disease recognition has long been an important research topic in fields such as ecology, pattern recognition and image processing, and many methods have been proposed for it (Barbedo, 2016). However, the performance of these methods is unsatisfactory because of the limited discriminative ability of the low-level features extracted from diseased leaf images. It is known that the recognition rates of traditional approaches rely heavily on lesion segmentation and hand-designed features that require expensive work and expert knowledge (Wang and Huang, 2009, Wang et al., 2010). To improve the recognition rate, many researchers have tried various algorithms to extract a large variety of features from each diseased leaf image, such as the seven invariant moments, SIFT (scale-invariant feature transform), Gabor transforms, global-local singular values and sparse representations (Guo et al., 2007, Zhang and Wang, 2016, Zhang et al., 2017). With existing leaf-based disease recognition methods it is easy to extract over 100 kinds of features from a single diseased leaf image, but the contributions of the different features to disease recognition differ greatly, and it is difficult to determine which features are optimal and robust. In particular, if the background is cluttered with other leaves or plants, segmentation of the diseased leaf image may be unreliable. Most methods fail to effectively segment the leaf and the corresponding lesion from the background, which leads to unreliable recognition results. Leaf-based vegetable disease recognition therefore remains a challenging problem because of the complexity of diseased leaf images, as shown in Fig. 1. From Fig. 1, it can be seen that three leaf images of the same kind of disease (e.g., blotch or gray leaf spot) and their lesions vary greatly from each other in lighting, brightness, scale, position and background.

Many feature extraction and selection, dimensionality reduction (Huang & Jiang, 2012), subspace learning (Li and Huang, 2008, Li et al., 2006) and sparse representation algorithms (Guo et al., 2007, Wright et al., 2009), as well as various neural networks (Huang, 1996, Huang, 1999, Huang and Du, 2008, Huang et al., 2017, Huang et al., 2004, Zhao et al., 2004, Zhao et al., 2003) and classifiers (Bao et al., 2014, Bao et al., 2017, Huang, 2004, Huang et al., 2005, Huang and Du, 2008, Liu and Huang, 2008), can be applied to disease recognition, but it remains difficult to segment the diseased leaf image and extract robust features. Recently, deep learning has been widely applied to image classification, computer vision and pattern recognition, and has achieved state-of-the-art performance (Huang and Zhang, 2017, Zhang and Zhang, 2017, Schmidhuber, 2015). Deep learning can automatically learn high-level characteristics from massive collections of original, complex images, without extensive image preprocessing, lesion segmentation or artificially designed feature extraction. It has been applied to both plant species recognition and plant disease recognition. Mohanty, Hughes, and Marcel (2016) trained a deep learning model to recognize 14 crop species and 26 crop diseases; the trained model achieves an accuracy of 99.35% on a held-out test set. CNNs can perform both feature extraction and image classification within the same architecture, and outperform the latest leaf-based plant disease recognition methods (Wiatowski & Bölcskei, 2016). Srdjan, Marko, Andras, et al. (2016) proposed a CNN-based plant disease recognition approach to distinguish healthy leaves from 13 different diseases. Amara, Bouaziz, and Algergawy (2017) proposed a banana disease recognition approach based on the LeNet architecture; their results demonstrate its effectiveness even under challenging conditions. The availability of large numbers of vegetable diseased leaf images, together with powerful computing infrastructure, makes CNNs a suitable candidate for disease recognition.

The color characteristics of a diseased leaf image can be described by its three color components, R, G and B. Because actual plant diseased leaf images are complex, with miscellaneous backgrounds, as shown in Fig. 1, the color information provided by a single color component is limited, and the features extracted by classical disease recognition methods are often one-sided and incomplete. Representing the disease class through multiple color components of the diseased leaf image can somewhat alleviate this problem. Inspired by recent developments in plant disease recognition and multi-column CNNs (He and Tian, 2016, Schmidhuber, 2012), a TCCNN model is proposed for vegetable leaf disease recognition by integrating three different color components of the diseased leaf image. Its main contributions are as follows:

  • A TCCNN model is constructed for vegetable leaf disease recognition.

  • The model requires minimal image preprocessing, and omits diseased leaf image segmentation and hand-crafted feature extraction.

  • The model automatically learns to extract high-level discriminative features directly from the color diseased leaf image, which greatly improves the recognition rate.
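The per-channel input scheme described above can be sketched as follows; this is a minimal NumPy illustration, assuming the diseased leaf image arrives as an H × W × 3 RGB array (the function name and array sizes are illustrative, not from the paper):

```python
import numpy as np

def split_rgb_channels(image):
    """Split an H x W x 3 RGB leaf image into the three single-component
    arrays that would feed the three CNN branches of a TCCNN-style model."""
    assert image.ndim == 3 and image.shape[2] == 3, "expected an H x W x 3 RGB image"
    # Each branch receives exactly one color component, kept as H x W x 1.
    r, g, b = (image[:, :, k:k + 1] for k in range(3))
    return r, g, b

# Example with a small synthetic 4 x 4 RGB "image".
img = np.arange(4 * 4 * 3).reshape(4, 4, 3)
r, g, b = split_rgb_channels(img)
```

Keeping the trailing singleton axis means each component still looks like a one-channel image to a standard convolutional layer, so the three branches can share an identical architecture.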

The rest of the paper is organized as follows: in Section 2, the architecture and technical details of CNNs are introduced. A vegetable leaf disease recognition approach based on TCCNN is proposed in Section 3. Experimental results are reported in Section 4. Section 5 concludes the paper and provides the future work.


Convolutional neural networks (CNN)

The general architecture of a CNN is shown in Fig. 2: an input layer, three convolutional layers (C1, C2, C3), two pooling layers (P1, P2), two fully-connected layers (FC) and an output layer (Wiatowski & Bölcskei, 2016). The convolutional and pooling layers act as feature descriptors, the fully-connected and output layers act as a classifier, and neurons in adjacent layers are partially rather than fully connected. A CNN is often regarded as a black-box classifier. …
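The way feature-map sizes shrink through the C1-P1-C2-P2-C3 stack of Fig. 2 can be traced with the standard output-size formulas; a small sketch follows, where the 32 × 32 input size and the 5 × 5 / 2 × 2 kernels are assumptions for illustration, not values from the paper:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolutional layer (valid padding by default)."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride=None):
    """Spatial output size of a pooling layer (stride defaults to kernel size)."""
    stride = stride or kernel
    return (size - kernel) // stride + 1

# Illustrative layer stack following the C1-P1-C2-P2-C3 ordering of Fig. 2.
s = 32               # assumed input side length
s = conv_out(s, 5)   # C1: 32 -> 28
s = pool_out(s, 2)   # P1: 28 -> 14
s = conv_out(s, 5)   # C2: 14 -> 10
s = pool_out(s, 2)   # P2: 10 -> 5
s = conv_out(s, 5)   # C3: 5 -> 1; these 1x1 maps feed the FC layers
```

Working through these formulas once makes it clear why the kernel sizes and strides must be chosen jointly: a mismatched stack can shrink the maps to nothing before the classifier stage.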

The proposed method for vegetable leaf disease recognition

For most CNN-based recognition tasks, the network depth, the number of feature planes, the convolutional kernel size and the stride must be properly set. Because the color characteristics of the diseased leaf are very important for recognizing plant diseases, a TCCNN is proposed for vegetable leaf disease recognition that exploits these characteristics by integrating the different color components of the color diseased leaf image.
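The final fusion-and-classification step that the abstract describes (a fully connected fusion layer followed by softmax) can be sketched in NumPy. The feature dimensions, class count and random weights below are illustrative placeholders for trained parameters, not values from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fuse_and_classify(f_r, f_g, f_b, W, b):
    """Fuse the three branch feature vectors through a fully connected
    fusion layer, then classify with softmax. W and b stand in for the
    trained fusion-layer parameters."""
    fused = np.concatenate([f_r, f_g, f_b])   # input to the fusion layer
    return softmax(W @ fused + b)             # class probability vector

rng = np.random.default_rng(0)
f_r, f_g, f_b = rng.standard_normal((3, 8))   # 8-d feature per branch (assumed)
W = rng.standard_normal((5, 24))              # 5 disease classes (assumed)
b = np.zeros(5)
p = fuse_and_classify(f_r, f_g, f_b, W, b)    # probabilities summing to 1
```

Concatenation keeps the three color streams independent until the fusion layer, which is what lets the model weigh evidence from each color component separately.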

Experimental results

To verify the performance of the TCCNN-based vegetable disease recognition method, we conduct a set of experiments on cucumber and tomato diseased leaf image databases, and compare it with four existing vegetable leaf disease recognition methods based on: image processing technology (IPT) (Pixia & Xiangdong, 2013), global-local singular value decomposition (GLSVD) (Zhang et al., 2017), the SVM classification algorithm (SVM) (Zhou, Xu, & Zhao, 2015), and sparse representation based classification …

Conclusion

In this paper, a vegetable disease recognition approach based on TCCNN is presented. In this method, high-level discriminative features are automatically extracted by the TCCNN directly from the color diseased leaf image, avoiding complex preprocessing, lesion segmentation and hand-crafted feature extraction. The experimental results demonstrate that the multi-channel CNN is effective and feasible.

Conflict of interest statement

The authors declare that there is no potential conflict of interest related to this article.

Acknowledgement

This work is supported by the National Natural Science Foundation of China under Grant Nos. 61473237 and 61472280, the Key Research and Development Project (2017ZDXM-NY-088) and Key Project (2016GY-141) of the Shaanxi Department of Science and Technology, and the Tianjin Jinnan Technology Research Program (2017). The authors would like to thank all the editors and anonymous reviewers for their constructive advice.

References (36)

  • W. Bao et al.

    Prediction of protein structure classes with flexible neural tree

    Bio-Medical Materials and Engineering

    (2014)
  • W. Bao et al.

    Classification of protein structure classes on flexible neutral tree

    IEEE/ACM Transactions on Computational Biology and Bioinformatics

    (2017)
  • A. Barbedo

    A review on the main challenges in automatic plant disease identification based on visible range images

    Biosystems Engineering

    (2016)
  • P.O. Glauner

    Deep convolutional neural networks for smile recognition

    IEEE/ACM Transactions on Audio, Speech, and Language Processing

    (2015)
  • Y. Guo et al.

    Regularized linear discriminant analysis and its application in microarrays

    Biostatistics

    (2007)
  • He, A. & Tian, X. (2016). Multi-organ plant identification with multi-column deep convolutional neural networks. In...
  • Huang, D. S. (1996). Systematic theory of neural networks for pattern recognition. Publishing House of Electronic...
  • D.S. Huang

    Radial basis probabilistic neural networks: Model and application

    International Journal of Pattern Recognition & Artificial Intelligence

    (1999)