
P&GGD: A Joint-Way Model Optimization Strategy Based on Filter Pruning and Filter Grafting for Tea Leaves Classification

Published in: Neural Processing Letters

Abstract

In the automatic processing of tea, the category of fresh tea leaves determines the processing parameters of the automatic control system, so automatic classification of fresh tea leaves is a necessity. Previous works used deep and wide CNN architectures to classify tea leaves, which limits model deployment. Although filter pruning achieves superior network compression and acceleration, pruning-based approaches have to trade accuracy for low computational cost and small model size. We propose a novel joint-way model optimization strategy, named Filter Pruning and Grafting via Geometric Distance (P&GGD), to solve these problems. Specifically, we use filter pruning to compress models and filter grafting to boost accuracy. To graft pruned models effectively, we develop a grafting criterion based on geometric distance to measure the information of layers. Extensive experiments validate our approach on a fresh-tea-leaves dataset and two image classification benchmarks. For example, on the fresh-tea-leaves dataset, we achieve 96.296% top-1 accuracy with more than 45% FLOPs reduction, even outperforming the uncompressed MobileNetV2 by 2.395% accuracy. Our method yields more accurate, more compact, and faster models, and can thus promote deployment at the edge of the automatic production line.
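The abstract describes scoring filters by geometric distance to decide which ones carry redundant information. The full criterion is in the paper, but the core idea can be sketched as follows; this is a minimal illustration assuming an FPGM-style formulation (sum of pairwise Euclidean distances between flattened filters), with the function names and the 45% pruning ratio chosen here for illustration only:

```python
import numpy as np

def geometric_distance_scores(filters):
    """Score each filter in a layer by its total Euclidean distance
    to every other filter in that layer. Filters with low total
    distance sit near the layer's geometric center and tend to be
    redundant, making them candidates for pruning or grafting."""
    flat = filters.reshape(filters.shape[0], -1)
    # pairwise differences via broadcasting: (n, n, d)
    diff = flat[:, None, :] - flat[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist.sum(axis=1)

def select_prune_indices(filters, prune_ratio=0.45):
    """Return indices of the most redundant filters (lowest scores),
    up to the requested ratio of the layer's filters."""
    scores = geometric_distance_scores(filters)
    n_prune = int(len(scores) * prune_ratio)
    return np.argsort(scores)[:n_prune]
```

In a joint pruning-and-grafting loop, the same score could also rank layers of a pruned model for grafting in external information, but the exact update rule used by P&GGD is defined in the paper, not here.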




Funding

This work is supported by China Agriculture Research System of MOF and MARA, the Project of Scientific and Technological Innovation Planning of Hunan Province (2020NK2008), Hunan Province Modern Agriculture Technology System for Tea Industry, the National Natural Science Foundation of China (42130716). We are grateful to the High Performance Computing Center of Central South University for partial support of this work.

Author information


Corresponding author

Correspondence to Fang Qi.

Ethics declarations

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Ethical Approval

This article does not contain any studies with human participants or animals.

Informed Consent

As this article does not contain any studies with human participants or animals, the informed consent is not applicable.


Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Tang, Z., Li, Z., Yang, J. et al. P&GGD: A Joint-Way Model Optimization Strategy Based on Filter Pruning and Filter Grafting For Tea Leaves Classification. Neural Process Lett 54, 4391–4407 (2022). https://doi.org/10.1007/s11063-022-10813-w

