
Image classification based on tensor network DenseNet model

Published in: Applied Intelligence

Abstract

Image classification, the primary domain where deep neural networks contribute to image analysis, requires substantial memory to train. The fully connected layer alone can account for 90% of the total memory, and the flattening operation that precedes it can discard the multi-linear structure of the image data. The tensor regression network achieves a high compression rate with minimal impact on network performance, effectively mitigating the large memory footprint of the neural network model. The DenseNet model, in particular, alleviates the vanishing-gradient problem, strengthens feature propagation, and outperforms many existing networks. This article proposes a novel tensor network model that embeds a tensor regression layer into the DenseNet model. The framework of this tensor DenseNet model is established, and its estimation procedure is developed. The model is applied to the classification of four datasets: Fruits 360, 100 Sports Image, ASL Alphabet, and Mini-ImageNet. The experimental results indicate that combining the DenseNet model with the tensor regression layer not only saves a significant amount of memory but also maintains high classification accuracy compared with existing tensor network models.
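The memory saving described above comes from replacing the flatten-plus-fully-connected head with a tensor regression layer (TRL) whose weight tensor is kept in low-rank form. The following is a minimal numpy sketch of this idea, assuming a Tucker-format weight tensor; the shapes, rank choices, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of a tensor regression layer (TRL) head.
# Instead of flattening the final activation tensor X (batch, H, W, C)
# and applying a dense matrix, the TRL contracts X directly with a
# low-rank (Tucker-format) weight tensor, preserving the multi-linear
# structure of the activations.

rng = np.random.default_rng(0)

batch, H, W, C, n_classes = 8, 7, 7, 64, 10
rank = (3, 3, 8, n_classes)  # assumed Tucker ranks (a hyperparameter)

# Tucker factors of the regression weight tensor
G = rng.standard_normal(rank)                 # core tensor
U_h = rng.standard_normal((H, rank[0]))       # height-mode factor
U_w = rng.standard_normal((W, rank[1]))       # width-mode factor
U_c = rng.standard_normal((C, rank[2]))       # channel-mode factor
U_out = rng.standard_normal((n_classes, rank[3]))  # output factor

def trl_forward(X):
    """Contract activations with the Tucker-format weight tensor."""
    # Project each spatial/channel mode onto its factor matrix,
    # then contract the projected tensor with the core.
    Z = np.einsum('bhwc,hi,wj,ck->bijk', X, U_h, U_w, U_c)
    Z = np.einsum('bijk,ijkr->br', Z, G)
    return Z @ U_out.T                        # (batch, n_classes)

X = rng.standard_normal((batch, H, W, C))
scores = trl_forward(X)
print(scores.shape)                           # (8, 10)

# Parameter comparison against a flatten + fully connected head
fc_params = H * W * C * n_classes
trl_params = G.size + U_h.size + U_w.size + U_c.size + U_out.size
print(fc_params, trl_params)
```

For these illustrative shapes the dense head needs 7 x 7 x 64 x 10 = 31,360 weights, while the Tucker-format TRL needs only 1,374, which is the kind of compression the abstract refers to; the compression rate is governed by the chosen ranks.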

Figures 1–12 appear in the full article.


Data availability and access

All data, models, and code generated or used during the study are available from the corresponding author on request.


Acknowledgements

We are grateful to the Editor, an Associate Editor and four anonymous referees for their insightful comments and suggestions on this article, which have led to significant improvements. This work was supported in part by the National Social Science Fund (22BTJ025) and in part by the National Natural Science Fund (12271272).

Author information

Authors and Affiliations

Authors

Contributions

Chunyang Zhu contributed to the conception of the study; Lei Wang performed the experiment; Weihua Zhao contributed significantly to analysis and manuscript preparation; Heng Lian helped perform the analysis with constructive discussions.

Corresponding authors

Correspondence to Lei Wang or Weihua Zhao.

Ethics declarations

Ethical and informed consent for data used

There is no ethical conflict, and informed consent was not required for the data used.

Competing Interests

There are no competing interests to declare.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhu, C., Wang, L., Zhao, W. et al. Image classification based on tensor network DenseNet model. Appl Intell 54, 6624–6636 (2024). https://doi.org/10.1007/s10489-024-05472-4

