
Convolution neural network with low operation FLOPS and high accuracy for image recognition

Special Issue Paper · Journal of Real-Time Image Processing

Abstract

Convolutional neural networks are made deeper and wider for better accuracy, but at the cost of heavier computation. As a network grows deeper, more information from earlier layers is lost. To mitigate this drawback, the residual structure was developed to carry information forward from previous layers. It is a good solution for preventing information loss, but it requires a huge number of parameters for deep-layer operations. In this study, a fast computational algorithm is proposed that reduces parameters and saves operations by modifying the DenseNet deep-layer block. Through channel-merging procedures, this solution eases the multiplicative growth of the parameter count in deeper layers. The approach not only reduces parameters and FLOPs but also maintains high accuracy. Compared with the original DenseNet and ResNet-110, the parameter count is reduced by about 30–70% while accuracy degrades only slightly. The resulting lightweight network can be implemented on a low-cost embedded system for real-time applications.
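The abstract sketches the channel-merging idea but not the exact procedure, so the following PyTorch snippet is a minimal illustration under stated assumptions, not the authors' implementation: a DenseNet-style block concatenates the outputs of its layers as usual, then a 1×1 "merge" convolution compresses the accumulated channels so that the channel count, and with it the parameters and FLOPs of subsequent layers, stops compounding with depth. The class name MergedDenseBlock, the merge_ratio of 0.5, and all layer sizes are hypothetical.

```python
# Minimal sketch of a DenseNet-style block with channel merging.
# NOTE: merge_ratio, growth_rate, and the 1x1 merge convolution are
# assumptions for illustration; the paper's exact procedure may differ.
import torch
import torch.nn as nn


class MergedDenseBlock(nn.Module):
    """Dense block whose concatenated output is compressed by a 1x1
    convolution so the channel count does not keep growing with depth."""

    def __init__(self, in_channels, growth_rate=12, num_layers=4, merge_ratio=0.5):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            # Standard DenseNet composite function: BN -> ReLU -> 3x3 conv.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3,
                          padding=1, bias=False),
            ))
            channels += growth_rate  # concatenation grows the channel count
        # Channel merging: a 1x1 conv compresses the concatenated channels,
        # cutting the parameters and FLOPs of everything downstream.
        self.merge = nn.Conv2d(channels, int(channels * merge_ratio),
                               kernel_size=1, bias=False)

    def forward(self, x):
        features = x
        for layer in self.layers:
            new_maps = layer(features)
            features = torch.cat([features, new_maps], dim=1)  # dense reuse
        return self.merge(features)


if __name__ == "__main__":
    block = MergedDenseBlock(in_channels=24)
    out = block(torch.randn(1, 24, 32, 32))
    params = sum(p.numel() for p in block.parameters())
    print(out.shape, params)  # output channels stay bounded after the merge
```

Stacking several such blocks keeps the input width of each block roughly constant, which is where a 30–70% parameter saving relative to an unmerged DenseNet would plausibly come from; the exact figure depends on depth, growth rate, and merge ratio.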






Author information


Corresponding author

Correspondence to Shih-Chang Hsia.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Hsia, SC., Wang, SH. & Chang, CY. Convolution neural network with low operation FLOPS and high accuracy for image recognition. J Real-Time Image Proc 18, 1309–1319 (2021). https://doi.org/10.1007/s11554-021-01140-9



  • DOI: https://doi.org/10.1007/s11554-021-01140-9

