
Task-specific method-agnostic metric for few-shot learning

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Metric-based few-shot learning (FSL) methods have attracted increasing research attention because they embody a simpler and more effective inductive bias in the limited-data regime. Episodic evaluation is widely used in metric-based FSL methods, and a task-wise relative metric is critical to improving the performance of the episodic approach. However, the metrics commonly used in existing metric-based FSL methods typically measure absolute distance in a smooth, uniform feature space. Motivated by this observation, this paper proposes mapping features into a task-specific sub-space via the correlation matrix of task-specific prototypical vectors, which induces a task-specific method-agnostic (TSMA) metric. TSMA can be viewed as an adaptive linear classifier and is hence method-agnostic; moreover, it is manually designed and therefore parameter-free. Extensive experiments on various datasets show that TSMA outperforms state-of-the-art (SOTA) methods by 1.5–4.4%, and an ablation study shows that TSMA adaptively adjusts the scale of the similarity terms and scaling terms, allowing the models to be optimized more easily.
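The abstract alone does not give the exact formulation, so the following is a minimal NumPy sketch of one plausible reading of it: build a correlation matrix from the episode's own L2-normalized prototypes, project both prototypes and queries through it into a task-specific sub-space, and score queries with an inner product there. The function name tsma_scores and every detail below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def tsma_scores(prototypes, queries, eps=1e-8):
    """Hedged sketch of a TSMA-style, task-specific metric.

    prototypes: (n_way, d) class prototypes for the current episode
    queries:    (n_query, d) query embeddings
    Returns:    (n_query, n_way) similarity scores (higher = more similar)

    NOTE: one plausible reading of the abstract, not the paper's formula.
    """
    # L2-normalize so the correlation matrix below is scale-free.
    P = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + eps)
    Q = queries / (np.linalg.norm(queries, axis=1, keepdims=True) + eps)

    # Task-specific correlation matrix built from the episode's own
    # prototypes; nothing is learned here, so the metric is parameter-free.
    C = P.T @ P                      # (d, d)

    # Project prototypes and queries into the task-specific sub-space.
    P_sub = P @ C                    # (n_way, d)
    Q_sub = Q @ C                    # (n_query, d)

    # Inner product in the sub-space acts like an adaptive linear
    # classifier over the episode's classes (a task-wise relative metric).
    return Q_sub @ P_sub.T           # (n_query, n_way)
```

Because the scoring function only consumes prototypes and query embeddings, a construction like this can be bolted onto any embedding backbone, which is what makes such a metric method-agnostic.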


Notes

  1. For example, if the original test dataset contains 100 classes, each 5-way-1-shot task still poses a 5-class classification problem, and the sampled classes are relabeled to \(\{1, \ldots , 5\}\) (see the sketch after these notes).

  2. This dataset can be downloaded from http://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html.
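To make note 1 concrete, here is a hypothetical helper (not from the paper) that samples one N-way K-shot episode from a larger labelled pool and relabels the sampled classes to a fresh index set, 0 to n_way-1 here (the note's \(\{1, \ldots, 5\}\) up to an index offset). It assumes precomputed embeddings and integer class ids, and that every class has at least k_shot + n_query examples.

```python
import numpy as np

def sample_episode(features, labels, n_way=5, k_shot=1, n_query=15, rng=None):
    """Sample one N-way K-shot episode with relabeled classes.

    features: (n_samples, d) precomputed embeddings
    labels:   (n_samples,) integer class ids
    Returns:  (support_x, support_y, query_x, query_y) with labels in 0..n_way-1
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Pick n_way distinct classes for this episode.
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for new_label, c in enumerate(classes):
        # Shuffle this class's examples, then split into support and query.
        idx = rng.permutation(np.where(labels == c)[0])[: k_shot + n_query]
        support_x.append(features[idx[:k_shot]])
        support_y += [new_label] * k_shot
        query_x.append(features[idx[k_shot:]])
        query_y += [new_label] * n_query
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))
```

However many classes the original test set has, each episode reduces to a fresh n_way-classification problem, which is why episodic accuracy is reported as an average over many sampled tasks.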


Funding

This work was supported by the National Natural Science Foundation of China (No. 62071060) and the Beijing Key Laboratory of Work Safety and Intelligent Monitoring Foundation.

Author information


Corresponding author

Correspondence to Yong Li.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest. The research did not involve human participants or animals.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, H., Li, Y. Task-specific method-agnostic metric for few-shot learning. Neural Comput & Applic 35, 3115–3124 (2023). https://doi.org/10.1007/s00521-022-07858-2

