HAHANet: Towards Accurate Image Classifiers with Less Parameters

  • Conference paper
In: Image and Video Technology (PSIVT 2023)

Abstract

Classical convolutional networks yield lackluster performance on certain classification tasks. To address this problem, recent solutions add extra layers or sub-networks to increase the classification performance of existing networks; more recent methods employ multiple networks coupled with varying learning strategies. However, these approaches demand larger memory and computational requirements due to the additional layers, prohibiting their use on devices with limited computing power. In this paper, we propose an efficient convolutional block which minimizes the computational requirements of a network while maintaining information flow through concatenation and element-wise addition. We design a classification architecture, called Half-Append Half-Add Network (HAHANet), built from our efficient convolutional block. Our approach achieves state-of-the-art accuracy on several challenging fine-grained classification tasks. More importantly, HAHANet outperforms top networks while using up to 54 times fewer parameters. Our code and trained models are publicly available at https://github.com/dlsucivi/HAHANet-PyTorch.
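The abstract describes a block that merges feature maps by appending (concatenating) one half of the channels and element-wise adding the other half, trading some channel growth for parameter savings. The authors' exact design is in the paper and repository; the sketch below is only an illustrative interpretation of that idea, using numpy and a hypothetical `haha_combine` helper, not the published implementation.

```python
import numpy as np

def haha_combine(x, y):
    """Illustrative sketch of a half-append, half-add merge of two
    feature maps x and y of shape (channels, height, width).

    Half of the channels are appended (concatenated), preserving both
    feature sets; the other half are merged by element-wise addition,
    which adds no new channels. Full concatenation of two C-channel maps
    yields 2C channels; this merge yields only 1.5C.
    """
    c = x.shape[0]
    half = c // 2
    # First half of the channels: append (concatenate) both feature sets.
    appended = np.concatenate([x[:half], y[:half]], axis=0)
    # Second half: element-wise addition keeps channel count fixed.
    added = x[half:] + y[half:]
    return np.concatenate([appended, added], axis=0)

# Two 4-channel feature maps: full concat would give 8 channels,
# the half-append half-add merge gives 4 + 2 = 6.
x = np.ones((4, 2, 2))
y = np.full((4, 2, 2), 2.0)
out = haha_combine(x, y)
print(out.shape)  # (6, 2, 2)
```

The appeal of such a merge is that downstream convolutions see fewer input channels than under full concatenation, which is consistent with the paper's goal of reducing parameter count while keeping information flowing.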



Author information

Correspondence to Arren Matthew C. Antioquia.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Antioquia, A.M.C., Cordel II, M.O. (2024). HAHANet: Towards Accurate Image Classifiers with Less Parameters. In: Yan, W.Q., Nguyen, M., Nand, P., Li, X. (eds) Image and Video Technology. PSIVT 2023. Lecture Notes in Computer Science, vol 14403. Springer, Singapore. https://doi.org/10.1007/978-981-97-0376-0_19

  • DOI: https://doi.org/10.1007/978-981-97-0376-0_19

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0375-3

  • Online ISBN: 978-981-97-0376-0

  • eBook Packages: Computer Science (R0)
