
A hybrid deep model with cumulative learning for few-shot learning

Published in: Multimedia Tools and Applications

Abstract

Few-shot learning (FSL) aims to recognize unseen classes from only a few samples per class. This challenging line of research seeks to narrow the gap between computer vision technology and the human visual system. Mainstream approaches to FSL can currently be grouped into meta-learning and classification learning, which train the FSL model from local and global classification viewpoints, respectively. In our work, we find that the former can effectively learn transferable knowledge (generalization capacity) with an episodic training paradigm but converges slowly, whereas the latter quickly builds an essential classification ability (classification capacity) with a mini-batch training paradigm but is prone to over-fitting. In light of this, we propose a hybrid deep model with cumulative learning that tackles the FSL problem by absorbing the advantages of both methods. The proposed model integrates meta-learning and classification learning (IMC) in a unified two-branch network framework in which a meta-learning branch and a classification learning branch work simultaneously. Moreover, considering the different characteristics of the two branches, we propose a cumulative learning strategy that balances generalization capacity learning and classification capacity learning when training our IMC model. With the proposed method, the model quickly builds basic classification capability at the initial stage and continually mines discriminative class information during the remaining training for better generalization. Extensive experiments on the CIFAR-FS, FC100, mini-ImageNet and tiered-ImageNet datasets demonstrate the promising performance of our method.
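To make the two-branch idea concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the tiny backbone, the prototype-based episodic loss, the quadratic weighting schedule, and all names such as TwoBranchFSL and cumulative_weight are illustrative assumptions. It only shows how a global classification branch and an episodic meta-learning branch could share one encoder and be blended by a cumulative weight during training.

```python
# Illustrative sketch only (not the paper's code): a shared encoder feeds a
# global classification branch and an episodic meta-learning branch, and the
# two losses are mixed by a cumulative weight alpha over training.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoBranchFSL(nn.Module):
    def __init__(self, feat_dim=64, num_base_classes=64):
        super().__init__()
        # Shared embedding backbone (a tiny CNN stands in for e.g. ResNet-12).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Classification branch: a global linear classifier over base classes.
        self.classifier = nn.Linear(feat_dim, num_base_classes)

    def classification_loss(self, x, y):
        # Mini-batch cross-entropy over all base classes (global viewpoint).
        return F.cross_entropy(self.classifier(self.encoder(x)), y)

    def meta_loss(self, support_x, support_y, query_x, query_y, n_way):
        # Episodic loss: nearest-prototype classification within one episode
        # (local viewpoint), in the spirit of prototypical networks.
        z_s, z_q = self.encoder(support_x), self.encoder(query_x)
        protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
        logits = -torch.cdist(z_q, protos)  # negative Euclidean distance
        return F.cross_entropy(logits, query_y)


def cumulative_weight(epoch, total_epochs):
    # One possible schedule: emphasis shifts from the classification branch
    # (fast basic capability) toward the meta branch (generalization).
    return 1.0 - (epoch / total_epochs) ** 2


# Toy usage with random tensors standing in for a mini-batch and a 5-way,
# 5-shot episode with 3 queries per class.
model = TwoBranchFSL()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
total_epochs = 10
for epoch in range(total_epochs):
    alpha = cumulative_weight(epoch, total_epochs)
    batch_x, batch_y = torch.randn(32, 3, 32, 32), torch.randint(0, 64, (32,))
    sup_x, sup_y = torch.randn(25, 3, 32, 32), torch.arange(5).repeat_interleave(5)
    qry_x, qry_y = torch.randn(15, 3, 32, 32), torch.arange(5).repeat_interleave(3)
    loss = alpha * model.classification_loss(batch_x, batch_y) \
        + (1.0 - alpha) * model.meta_loss(sup_x, sup_y, qry_x, qry_y, n_way=5)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The quadratic schedule above simply mirrors the cumulative-learning intuition of emphasizing the classification branch early and the meta-learning branch later; the paper's actual branches, losses, and schedule may differ.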


Data availability

The datasets used are well-known benchmark datasets, and the corresponding references are cited in this work.


Acknowledgments

This research was supported by Guangzhou University's training program for excellent newly recruited doctors (No. YB201712).

Author information

Corresponding author

Correspondence to Zhao Yang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest in this work.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Liu, J., Yang, Z., Luo, L. et al. A hybrid deep model with cumulative learning for few-shot learning. Multimed Tools Appl 82, 19901–19922 (2023). https://doi.org/10.1007/s11042-022-14218-8

