
PGADA: Perturbation-Guided Adversarial Alignment for Few-Shot Learning Under the Support-Query Shift

  • Conference paper
Advances in Knowledge Discovery and Data Mining (PAKDD 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13280)

Abstract

Few-shot learning methods embed the data into a low-dimensional embedding space and then classify unseen query data against the seen support set. While these works assume that the support set and the query set lie in the same embedding space, a distribution shift between the two, i.e., the Support-Query Shift, usually occurs in the real world. Although optimal transportation has shown convincing results in aligning different distributions, we find that small perturbations in the images can significantly misguide the optimal transportation plan and thus degrade model performance. To relieve this misalignment, we first propose a novel adversarial data augmentation method, namely Perturbation-Guided Adversarial Alignment (PGADA), which generates hard examples in a self-supervised manner. In addition, we introduce Regularized Optimal Transportation to derive a smooth transportation plan. Extensive experiments on three benchmark datasets manifest that our framework significantly outperforms eleven state-of-the-art methods. Our code is available at https://github.com/772922440/PGADA.
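The Regularized Optimal Transportation mentioned in the abstract builds on entropy-regularized optimal transport, which can be computed with Sinkhorn iterations [9]. Below is a minimal NumPy sketch of that computation (the function name and toy data are illustrative, not taken from the released code); a larger regularization weight `eps` yields a smoother, denser transport plan:

```python
import numpy as np

def sinkhorn_plan(a, b, C, eps=0.1, n_iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    a: source weights (n,), b: target weights (m,), C: cost matrix (n, m).
    Returns the transport plan P = diag(u) K diag(v), where K = exp(-C/eps).
    """
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # scale columns toward marginal b
        u = a / (K @ v)                  # scale rows toward marginal a
    return u[:, None] * K * v[None, :]

# Toy example: align two small point clouds under squared Euclidean cost.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(4, 2)), rng.normal(size=(5, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
P = sinkhorn_plan(np.full(4, 1 / 4), np.full(5, 1 / 5), C, eps=0.5)
```

The resulting plan `P` has row sums matching the source marginal and column sums matching the target marginal, and each entry `P[i, j]` gives the mass transported from source point `i` to target point `j`.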


Notes

  1. We employ a 3-layer convolutional neural network as our G.

  2. Note that it is valid to access images from the testing set in few-shot learning; this setting is known as transductive few-shot learning [22].
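Note 1 fixes only the depth of the perturbation generator G. The following is a shape-preserving forward-pass sketch of one plausible 3-layer design in NumPy; the kernel sizes, channel widths, and ReLU activations are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 'same'-padded 2-D convolution.
    x: (C_in, H, W), w: (C_out, C_in, k, k) with k odd."""
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros((c_out,) + x.shape[1:])
    for i in range(x.shape[1]):
        for j in range(x.shape[2]):
            out[:, i, j] = (w * xp[:, i:i + k, j:j + k]).sum(axis=(1, 2, 3))
    return out

def generator_forward(x, weights):
    """3-layer CNN: conv-ReLU, conv-ReLU, conv; output keeps the input shape."""
    for w in weights[:-1]:
        x = np.maximum(conv2d_same(x, w), 0.0)
    return conv2d_same(x, weights[-1])

# Toy usage: a 3-channel 8x8 image through 3 -> 16 -> 16 -> 3 channels.
rng = np.random.default_rng(0)
ws = [rng.normal(scale=0.1, size=s)
      for s in [(16, 3, 3, 3), (16, 16, 3, 3), (3, 16, 3, 3)]]
out = generator_forward(rng.normal(size=(3, 8, 8)), ws)
```

Because the last layer maps back to the input's channel count and padding preserves spatial size, the generator's output can be added to (or substituted for) the input image as a perturbation.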

References

  1. Antoniou, A., Storkey, A., Edwards, H.: Data augmentation generative adversarial networks. In: ICLR (2017)

  2. Bennequin, E., Bouvier, V., Tami, M., Toubhans, A., Hudelot, C.: Bridging few-shot learning and adaptation: new challenges of support-query shift. In: ECML-PKDD (2021)

  3. Bottou, L.: Stochastic gradient descent tricks. In: Montavon, G., Orr, G.B., Müller, K.R. (eds.) Neural Networks: Tricks of the Trade. LNCS, vol. 7700. Springer, Berlin (2012). https://doi.org/10.1007/978-3-642-35289-8_25

  4. Boudiaf, M., Masud, Z.I., Rony, J., Dolz, J., Piantanida, P., Ayed, I.B.: Transductive information maximization for few-shot learning. arXiv preprint arXiv:2008.11297 (2020)

  5. Caldas, S., et al.: LEAF: a benchmark for federated settings. In: NeurIPS (2019)

  6. Chen, T., Kornblith, S., Norouzi, M., Hinton, G.: A simple framework for contrastive learning of visual representations. In: ICML, pp. 1597–1607 (2020)

  7. Chen, W.Y., Liu, Y.C., Kira, Z., Wang, Y.C.F., Huang, J.B.: A closer look at few-shot classification. arXiv preprint arXiv:1904.04232 (2019)

  8. Courty, N., Flamary, R., Tuia, D., Rakotomamonjy, A.: Optimal transport for domain adaptation. IEEE TPAMI (2016)

  9. Cuturi, M.: Sinkhorn distances: lightspeed computation of optimal transport. In: NeurIPS, pp. 2292–2300 (2013)

  10. Dhillon, G.S., Chaudhari, P., Ravichandran, A., Soatto, S.: A baseline for few-shot image classification. In: ICLR (2019)

  11. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: ICML (2017)

  12. Garcia, V., Bruna, J.: Few-shot learning with graph neural networks. In: ICLR (2017)

  13. Gong, C., Ren, T., Ye, M., Liu, Q.: MaxUp: lightweight adversarial training with data augmentation improves neural network training. In: CVPR, pp. 2474–2483 (2021)

  14. Goodfellow, I., et al.: Generative adversarial nets. In: NeurIPS, pp. 2672–2680 (2014)

  15. Hariharan, B., Girshick, R.: Low-shot visual recognition by shrinking and hallucinating features. In: CVPR, pp. 3018–3027 (2017)

  16. Huang, S.W., Lin, C.T., Chen, S.P., Wu, Y.Y., Hsu, P.H., Lai, S.H.: AugGAN: cross domain adaptation with GAN-based data augmentation. In: ECCV, pp. 718–731 (2018)

  17. Jiang, S., Chen, H.W., Chen, M.S.: Dataflow systolic array implementations of exploring dual-triangular structure in QR decomposition using high-level synthesis. In: ICFPT (2021)

  18. Jiang, S., Yao, X., Long, Q., Chen, J., Jiang, H.: Fund investment decision in support vector classification based on information entropy. Rev. Econ. Finance 15, 57–66 (2019)

  19. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: ICLR (2014)

  20. Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images (2009)

  21. Liu, Y., et al.: Learning to propagate labels: transductive propagation network for few-shot learning. In: ICLR (2018)

  22. Phoo, C.P., Hariharan, B.: Self-training for few-shot transfer across extreme task differences. In: ICLR (2020)

  23. Samangouei, P., Kabkab, M., Chellappa, R.: Defense-GAN: protecting classifiers against adversarial attacks using generative models. In: ICLR (2018)

  24. Schonfeld, E., Ebrahimi, S., Sinha, S., Darrell, T., Akata, Z.: Generalized zero- and few-shot learning via aligned variational autoencoders. In: CVPR, pp. 8247–8255 (2019)

  25. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: ICLR (2015)

  26. Snell, J., Swersky, K., Zemel, R.S.: Prototypical networks for few-shot learning. In: NeurIPS (2017)

  27. Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., Hospedales, T.M.: Learning to compare: relation network for few-shot learning. In: CVPR, pp. 1199–1208 (2018)

  28. Theagarajan, R., Chen, M., Bhanu, B., Zhang, J.: ShieldNets: defending against adversarial attacks using probabilistic adversarial robustness. In: CVPR, pp. 6988–6996 (2019)

  29. Triantafillou, E., et al.: Meta-dataset: a dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096 (2019)

  30. Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. In: NeurIPS (2016)

  31. Wang, Y.X., Girshick, R., Hebert, M., Hariharan, B.: Low-shot learning from imaginary data. In: CVPR, pp. 7278–7286 (2018)

  32. Xie, Q., Luong, M.T., Hovy, E., Le, Q.V.: Self-training with noisy student improves ImageNet classification. In: CVPR, pp. 10687–10698 (2020)

  33. Yun, S., Han, D., Oh, S.J., Chun, S., Choe, J., Yoo, Y.: CutMix: regularization strategy to train strong classifiers with localizable features. In: ICCV, pp. 6023–6032 (2019)

  34. Zhang, H., Cisse, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: beyond empirical risk minimization. In: ICLR (2017)

  35. Zhao, A., et al.: Domain-adaptive few-shot learning. In: WACV, pp. 1390–1399 (2021)

  36. Zhao, L., Liu, T., Peng, X., Metaxas, D.: Maximum-entropy adversarial data augmentation for improved generalization and robustness. arXiv preprint arXiv:2010.08001 (2020)


Acknowledgement

S. Jiang is supported by the Science and Technology Plan Project in Huizhou (No. 2020SD0402030).

Author information


Corresponding author

Correspondence to Ming-Syan Chen.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Jiang, S., Ding, W., Chen, H.W., Chen, M.S. (2022). PGADA: Perturbation-Guided Adversarial Alignment for Few-Shot Learning Under the Support-Query Shift. In: Gama, J., Li, T., Yu, Y., Chen, E., Zheng, Y., Teng, F. (eds.) Advances in Knowledge Discovery and Data Mining. PAKDD 2022. Lecture Notes in Computer Science, vol. 13280. Springer, Cham. https://doi.org/10.1007/978-3-031-05933-9_1


  • DOI: https://doi.org/10.1007/978-3-031-05933-9_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05932-2

  • Online ISBN: 978-3-031-05933-9

  • eBook Packages: Computer Science, Computer Science (R0)
