
TDRConv: Exploring the Trade-off Between Feature Diversity and Redundancy for a Compact CNN Module

  • Conference paper
Advanced Intelligent Computing Technology and Applications (ICIC 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14089)


Abstract

Rich, or even redundant, features can undoubtedly help improve network performance, provided the diversity of the feature maps is not lost. In this work, we propose a compact CNN module, namely TDRConv, that explores the trade-off between feature diversity and redundancy in order to retain and generate features with moderate redundancy and rich diversity while requiring less computation. Specifically, the input features are split by a certain proportion into a main part and an expansion part: the main part extracts intrinsic and diverse features in different ways, while the expansion part enhances the ability to extract diverse information. Finally, a series of experiments on CIFAR10 and ImageNet verifies the effectiveness of the proposed TDRConv. The results show that network models equipped with TDRConv consistently outperform state-of-the-art methods in accuracy while requiring significantly fewer FLOPs and parameters. More importantly, the proposed TDRConv can readily replace existing convolution modules as a plug-and-play component, which is promising for extending CNNs to wider scenarios.
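To see why splitting the input channels into a main part and a cheaper expansion part reduces cost, consider a rough parameter-count sketch. The functions, the 50/50 split ratio, and the use of depthwise filters for the expansion part are illustrative assumptions, not the exact TDRConv formulation from the paper:

```python
# Illustrative parameter-count arithmetic for a split convolution design.
# The 0.5 split ratio and the depthwise expansion branch are assumptions
# chosen for illustration; TDRConv's actual design differs in detail.

def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def split_module_params(c_in, c_out, k, main_ratio=0.5):
    """Hypothetical split module: the 'main' fraction of channels uses a
    standard convolution, while the remaining feature maps are produced
    by cheap depthwise k x k filters (one filter per output map)."""
    c_main_in = int(c_in * main_ratio)
    c_main_out = int(c_out * main_ratio)
    main = conv_params(c_main_in, c_main_out, k)   # standard conv on main part
    expansion = (c_out - c_main_out) * k * k       # depthwise filters only
    return main + expansion

std = conv_params(256, 256, 3)            # standard 3x3 conv: 589,824 params
split = split_module_params(256, 256, 3)  # split design: 148,608 params
print(std, split, round(std / split, 1))  # roughly a 4x reduction
```

Under these assumptions the split module needs roughly a quarter of the parameters of the standard convolution, which is the kind of saving that motivates trading full feature redundancy for a cheaper expansion branch.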



Acknowledgements

This work was supported in part by the Zhejiang Provincial Natural Science Foundation of China (Grant Nos. LGF22F030016 and LGF20H180002), and in part by the National Natural Science Foundation of China (Grant Nos. 62271448 and 61972354).

Author information

Corresponding author

Correspondence to Qianwei Zhou.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Hu, H., Zhou, D., Xu, H., Chen, Q., Guan, Q., Zhou, Q. (2023). TDRConv: Exploring the Trade-off Between Feature Diversity and Redundancy for a Compact CNN Module. In: Huang, DS., Premaratne, P., Jin, B., Qu, B., Jo, KH., Hussain, A. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2023. Lecture Notes in Computer Science, vol. 14089. Springer, Singapore. https://doi.org/10.1007/978-981-99-4752-2_28


  • DOI: https://doi.org/10.1007/978-981-99-4752-2_28

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-4751-5

  • Online ISBN: 978-981-99-4752-2

  • eBook Packages: Computer Science (R0)
