
PT-CNN: A Non-linear Lightweight Texture Image Classifier

Published in Neural Processing Letters

Abstract

Deep learning models provide strong performance on image classification problems, but increasing model depth to enhance feature representation significantly raises the number of redundant parameters. Such models require extensive memory and computational power, which limits their deployment on resource-constrained platforms. This paper develops a novel lightweight deep learning architecture, termed Parallel Texture Convolution Neural Network (PT-CNN), for multi-class texture classification. The proposed network is well suited to analysing texture images by learning and classifying complex texture features. Its core building blocks are convolution layers with a bank of filters that capture edge-like features from the input channels. These blocks act as feature extractors that operate concurrently to seek more complex features while lowering the required training time, and they are central to the lightweight processing of the proposed model. To analyse the performance of PT-CNN, experiments are conducted on the depth of the building blocks that extract inherent texture attributes. The effectiveness of the proposed methodology is demonstrated on five texture datasets: KTH-TIPS2b, Outex, Kylberg Sintorn rotation texture, ALOT and the Kylberg rotation texture database. The proposed model achieves its highest classification accuracy with the Adamax optimizer. Finally, the results of the proposed lightweight model are compared with traditional methods, deep learning models and other lightweight models.
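The abstract does not give the exact PT-CNN layer configuration, but the idea it describes, concurrent convolutional branches with a bank of filters feeding a lightweight classifier trained with Adamax, can be sketched as below. The kernel sizes, branch widths, block count, classifier head and learning rate here are illustrative assumptions, not the authors' published specification.

```python
# Minimal sketch of a parallel-branch texture block and a small classifier in
# the spirit of PT-CNN. All layer sizes and hyperparameters are assumptions
# for illustration only, not the published PT-CNN architecture.
import torch
import torch.nn as nn


class ParallelTextureBlock(nn.Module):
    """Two convolutional branches run concurrently on the same input and their
    feature maps are concatenated, so edge-like and coarser texture cues are
    captured without a deep, parameter-heavy sequential stack."""

    def __init__(self, in_channels: int, branch_channels: int):
        super().__init__()
        self.branch3 = nn.Sequential(  # fine, edge-like responses
            nn.Conv2d(in_channels, branch_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(branch_channels),
            nn.ReLU(inplace=True),
        )
        self.branch5 = nn.Sequential(  # coarser texture responses
            nn.Conv2d(in_channels, branch_channels, kernel_size=5, padding=2),
            nn.BatchNorm2d(branch_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.branch3(x), self.branch5(x)], dim=1)


class TinyParallelTextureNet(nn.Module):
    """Stack of parallel blocks followed by global pooling and a linear head."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            ParallelTextureBlock(3, 16),   # -> 32 channels
            nn.MaxPool2d(2),
            ParallelTextureBlock(32, 32),  # -> 64 channels
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


if __name__ == "__main__":
    model = TinyParallelTextureNet(num_classes=11)  # e.g. 11 material classes, as in KTH-TIPS2b
    optimizer = torch.optim.Adamax(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One dummy training step on a random batch, just to check shapes and gradients.
    images = torch.randn(4, 3, 128, 128)
    labels = torch.randint(0, 11, (4,))
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(model(images).shape)  # torch.Size([4, 11])
```

Concatenating parallel branches widens the receptive-field mix at each stage while keeping the parameter count low, which is consistent with the lightweight, concurrent feature extraction the abstract describes.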



Acknowledgements

We thank the management of Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, Tamil Nadu, for providing the facilities to carry out this work.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

Authors

Contributions

All authors confirm sole responsibility for the following: study conception and design, data collection, analysis and interpretation of results, and draft manuscript preparation; all authors reviewed the results and approved the final version of the manuscript. All authors have participated in (a) conception and design, or analysis and interpretation of the data; (b) drafting the article or revising it critically for important intellectual content; and (c) approval of the final version. This manuscript has not been submitted to, nor is it under review at, another journal or other publishing venue. The authors have no affiliation with any organization with a direct or indirect financial interest in the subject matter discussed in the manuscript.

Corresponding author

Correspondence to G. Sakthi Priya.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Priya, G.S., Padmapriya, N. PT-CNN: A Non-linear Lightweight Texture Image Classifier. Neural Process Lett 55, 8483–8507 (2023). https://doi.org/10.1007/s11063-023-11322-0

