Abstract
Few-shot learning for visual recognition aims to classify images from unseen classes with only a few labeled samples. Many previous works address this challenge by using a base set of massive labeled samples to learn a feature extractor, which is then transferred to categorize unseen classes in a novel set. However, a key difficulty is making the learned feature extractor transferable, because the categories in the base set differ from those in the novel set. To address this issue, this paper proposes a novel Random Erasing Network (RENet) that encourages the network to exploit the full context of the input image, yielding a more transferable network than previous ones that rely only on the most discriminative features. Further, we present a Task-Relevant Feature Transforming (TRFT) framework based on CrossTransformers to generate embeddings that better exploit the information within the current task. We then combine RENet and TRFT into a cooperatively trained model, RE-TRFT, for episodic training. Extensive experiments on two benchmarks show that our approach outperforms recent state-of-the-art methods.
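The abstract describes random erasing only at a high level. Below is a minimal sketch of how such an erasing augmentation is commonly implemented in PyTorch; it is not the authors' RENet code, and the hyperparameters (erase probability p, area and aspect-ratio ranges) are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of random erasing as a data augmentation (PyTorch).
# NOT the authors' RENet implementation; hyperparameters are assumptions.
import math
import random
import torch

def random_erase(img: torch.Tensor, p: float = 0.5,
                 area_range=(0.02, 0.2), aspect_range=(0.3, 3.3)) -> torch.Tensor:
    """Erase a random rectangle of a CxHxW image tensor, filling it with noise."""
    if random.random() > p:
        return img
    _, h, w = img.shape
    for _ in range(10):  # retry a few times until a valid rectangle fits
        area = random.uniform(*area_range) * h * w
        aspect = random.uniform(*aspect_range)
        eh = int(round(math.sqrt(area * aspect)))
        ew = int(round(math.sqrt(area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            top = random.randint(0, h - eh)
            left = random.randint(0, w - ew)
            out = img.clone()
            out[:, top:top + eh, left:left + ew] = torch.randn(img.shape[0], eh, ew)
            return out
    return img  # no valid rectangle found; return the image unchanged

# Example: augment one episode image (3x84x84, as in common few-shot benchmarks)
x = torch.rand(3, 84, 84)
x_aug = random_erase(x)
```

In an episodic training setup, such erasing would be applied to the sampled support and query images before feature extraction, forcing the extractor to rely on context beyond the most discriminative region.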
Acknowledgments
This work is supported by the National Natural Science Foundation of China (grant no. 62072419).
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Wang, X., Wan, S., Jin, P. (2021). Few-Shot Learning with Random Erasing and Task-Relevant Feature Transforming. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2021. Lecture Notes in Computer Science, vol. 12892. Springer, Cham. https://doi.org/10.1007/978-3-030-86340-1_41
Print ISBN: 978-3-030-86339-5
Online ISBN: 978-3-030-86340-1