Enhancing Robustness of Prototype with Attentive Information Guided Alignment in Few-Shot Classification

  • Conference paper in Advances in Knowledge Discovery and Data Mining (PAKDD 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13935)

Abstract

In this paper, we revisit two issues in conventional few-shot learning: i) the gap between the highlighted features of objects in support and query samples, and ii) the loss of explicit local properties caused by globally pooled features. Motivated by these issues, we propose a novel method that enhances robustness in few-shot learning by aligning prototypes with information-rich counterparts. To provide this additional information, we smoothly augment each support image by manipulating the discriminative region with the highest attention score, so that the object is represented consistently without distorting the original information. In addition, we leverage word embeddings of each class label as a further source of rich feature information, which serves as the basis for closing the gap between prototypes of different branches. Two parallel branches of explicit attention modules independently refine the support prototypes and the information-rich prototypes. The support prototypes are then aligned with the superior prototypes so that they mimic the rich knowledge of attention-based smooth augmentation and word embeddings. We transfer this imitated knowledge to queries in a task-adaptive manner and cross-adapt the queries and prototypes to generate features crucial for metric-based few-shot learning. Extensive experiments demonstrate that our method consistently outperforms existing methods on four benchmark datasets.
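
For intuition, the alignment idea in the abstract can be sketched in a few lines of PyTorch-style code: support prototypes are mean-pooled class embeddings, an auxiliary loss pulls them toward the richer prototypes built from attention-augmented views and class-label word embeddings, and queries are scored by distance to prototypes. All names, shapes, and the choice of a cosine penalty below are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def class_prototypes(features, labels, n_way):
        # Mean-pool embedded support samples into one prototype per class.
        # features: (n_support, d) support embeddings
        # labels:   (n_support,) class indices in [0, n_way)
        protos = features.new_zeros(n_way, features.size(1))
        for c in range(n_way):
            protos[c] = features[labels == c].mean(dim=0)
        return protos

    def alignment_loss(support_protos, rich_protos):
        # Pull each support prototype toward its information-rich
        # counterpart; the rich branch is detached so it acts as a
        # one-way teacher signal (a hypothetical choice of penalty).
        sim = F.cosine_similarity(support_protos, rich_protos.detach(), dim=-1)
        return (1.0 - sim).mean()

    def query_logits(query_feats, protos):
        # Metric-based classification: negative Euclidean distance
        # from each query embedding to every prototype.
        return -torch.cdist(query_feats, protos)  # (n_query, n_way)

In the paper's terms, rich_protos would come from the attention-augmented support images and word embeddings, and the cross-adaptation of queries and prototypes would sit between class_prototypes and query_logits.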


Acknowledgement

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program (Korea University); No. 2022-0-00984, Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation; and No. 2019-0-01371, Development of brain-inspired AI with human-like intelligence).

Author information

Corresponding author

Correspondence to Seong-Whan Lee.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kim, TH., Nam, WJ., Lee, SW. (2023). Enhancing Robustness of Prototype with Attentive Information Guided Alignment in Few-Shot Classification. In: Kashima, H., Ide, T., Peng, WC. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2023. Lecture Notes in Computer Science (LNAI), vol 13935. Springer, Cham. https://doi.org/10.1007/978-3-031-33374-3_15

  • DOI: https://doi.org/10.1007/978-3-031-33374-3_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33373-6

  • Online ISBN: 978-3-031-33374-3

  • eBook Packages: Computer Science, Computer Science (R0)
