Res-CapsNet: Residual Capsule Network for Data Classification

Neural Processing Letters

Abstract

The capsule network is a new network structure that can encode the properties and spatial relations of image features. It overcomes shortcomings of CNNs, namely their need for large numbers of training samples and parameters and the information loss that occurs during pooling. However, the capsule network can only extract shallow features, which makes it perform poorly on complex datasets. In this paper, a new residual capsule network model (Res-CapsNet) is proposed by fusing a capsule network, a residual network and deconvolution. Res-CapsNet extracts deep features and sends them to the capsule module through dense residual connections, which effectively strengthens feature transfer and feature utilization. The capsule module converts scalar neurons into vector neurons in the primary capsule layer, uses a dynamic routing algorithm to selectively activate high-level capsules between the primary capsule layer and the digit capsule layer, and obtains the recognition results. The deconvolution reconstruction module is the last part of Res-CapsNet and reconstructs the image from the recognition results using four deconvolution layers. Res-CapsNet uses the beta-mish activation function to reduce the neuron "death" caused by ReLU, thereby activating more neurons and further improving classification accuracy. Experimental results show that Res-CapsNet performs better on datasets such as SVHN, Fashion-MNIST and CIFAR-10. Compared with the baseline CapsNet, Res-CapsNet reduces the number of parameters on CIFAR-10 by 65.73% while improving the classification accuracy by 33.66%.
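To make the capsule machinery described above concrete, the following Python (PyTorch) sketch illustrates two of the components the abstract refers to: the beta-mish activation used in place of ReLU, and the squash-based dynamic routing that couples the primary capsule layer to the digit capsule layer. This is a minimal sketch following the standard CapsNet formulation of Sabour et al., not the authors' implementation; the batch size, capsule counts, vector dimensions and the value of beta are illustrative assumptions.

# Minimal sketch (not the authors' code): beta-mish activation and dynamic routing
# with the squash non-linearity between primary and digit capsule layers.
import torch
import torch.nn.functional as F

def beta_mish(x: torch.Tensor, beta: float = 1.5) -> torch.Tensor:
    # beta-mish(x) = x * tanh(ln((1 + exp(x))^beta)) = x * tanh(beta * softplus(x)).
    # Smooth and non-zero for x < 0, so fewer neurons "die" than with ReLU.
    # beta = 1.5 is an illustrative choice, not a value taken from the paper.
    return x * torch.tanh(beta * F.softplus(x))

def squash(s: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    # Shrink a capsule vector so its length lies in (0, 1) while keeping its direction.
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def dynamic_routing(u_hat: torch.Tensor, num_iters: int = 3) -> torch.Tensor:
    # u_hat: prediction vectors of shape (batch, in_caps, out_caps, out_dim)
    # produced by the primary capsules for each higher-level (digit) capsule.
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits
    for _ in range(num_iters):
        c = torch.softmax(b, dim=2)                          # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)             # weighted sum over inputs
        v = squash(s)                                        # output capsule vectors
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
    return v                                                 # (batch, out_caps, out_dim)

if __name__ == "__main__":
    # Toy shapes only: 32 primary capsules routed to 10 digit capsules of dimension 16.
    u_hat = torch.randn(4, 32, 10, 16)
    print(dynamic_routing(u_hat).shape)                      # torch.Size([4, 10, 16])
    print(beta_mish(torch.linspace(-3.0, 3.0, 5)))

In the standard CapsNet formulation, the length of each output capsule vector encodes the presence probability of the corresponding class; the residual feature extractor and the deconvolution decoder described in the abstract sit before and after this routing step, respectively.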

Acknowledgements

We would like to thank the authors of the methods compared with Res-CapsNet for providing their code.

Funding

This work was supported in part by the Anhui Provincial Natural Science Foundation of China under Grant 2108085ME158, in part by the National Natural Science Foundation of China under Grant 52174141, in part by the University Synergy Innovation Program of Anhui Province under Grant GXXT-2020-54, and in part by the Key Research and Development Projects in Anhui Province under Grant 202004b11020029.

Author information

Corresponding author

Correspondence to Yongcun Guo.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Jia, X., Li, J., Zhao, B. et al. Res-CapsNet: Residual Capsule Network for Data Classification. Neural Process Lett 54, 4229–4245 (2022). https://doi.org/10.1007/s11063-022-10806-9
