Abstract
We propose an efficient deep convolutional neural network for computationally constrained devices, referred to as SharedNet. On ImageNet, SharedNet achieves the best accuracy and efficiency among comparable convolutional neural networks. This result stems from a new efficient convolution, called group sharing convolution, which applies parameter sharing to reduce computational cost. Drawing on other efficient network design methods, we construct our network unit, the SharedNet unit, by combining group convolution with group sharing convolution. Experimental results show that this unit surpasses the depthwise separable convolution in both accuracy and efficiency. SharedNet is also flexible: the number of groups in a unit can be adjusted to balance accuracy against efficiency according to application requirements. Finally, since the basic unit plays an important role in both neural architecture search and manually designed CNNs, we stack SharedNet units, following ResNet164, to build models and evaluate the accuracy and efficiency of SharedNet units on CIFAR-10 and CIFAR-100.
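Since the paper's full text is not included here, the following PyTorch sketch illustrates only one plausible reading of the abstract: it assumes that a group sharing convolution splits the channels into groups and filters every group with a single shared weight bank, and that a SharedNet unit pairs a 1x1 group convolution with a 3x3 group sharing convolution inside a residual block. The names GroupSharingConv2d and SharedNetUnit, and all hyperparameters, are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch of the abstract's ideas, under the assumptions stated above.
import torch
import torch.nn as nn


class GroupSharingConv2d(nn.Module):
    """Hypothetical group sharing convolution: one shared filter bank is
    applied to each of `groups` channel groups, cutting parameters by a
    factor of `groups` relative to a plain group convolution."""

    def __init__(self, in_channels, out_channels, kernel_size, groups, padding=0):
        super().__init__()
        assert in_channels % groups == 0 and out_channels % groups == 0
        self.groups = groups
        # A single convolution reused by every group (the shared parameters).
        self.shared = nn.Conv2d(in_channels // groups, out_channels // groups,
                                kernel_size, padding=padding, bias=False)

    def forward(self, x):
        n, c, h, w = x.shape
        # Fold the group axis into the batch axis so all groups pass
        # through the same shared convolution.
        x = x.reshape(n * self.groups, c // self.groups, h, w)
        x = self.shared(x)
        _, c_out, h_out, w_out = x.shape
        return x.reshape(n, self.groups * c_out, h_out, w_out)


class SharedNetUnit(nn.Module):
    """Hypothetical unit pairing a 1x1 group convolution with a 3x3 group
    sharing convolution; the group count trades accuracy for efficiency."""

    def __init__(self, channels, groups):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 1, groups=groups, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            GroupSharingConv2d(channels, channels, 3, groups=groups, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection, as in the ResNet-style stacking the abstract mentions.
        return self.act(x + self.body(x))


if __name__ == "__main__":
    unit = SharedNetUnit(channels=64, groups=4)
    y = unit(torch.randn(2, 64, 32, 32))
    print(y.shape)  # torch.Size([2, 64, 32, 32])
```

Under this reading, the 3x3 stage stores (C/g)^2 * 9 weights instead of the (C^2/g) * 9 of an ordinary group convolution with g groups, a further factor-of-g saving; the paper's actual formulation and savings may differ.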
Acknowledgement
This work was sponsored by the Natural Science Foundation of Chongqing (Grant No. cstc2018jcyjAX0532).