DOI: 10.1145/3348445.3348481

Attention Guided Relation Network for Few-Shot Image Classification

Published: 27 July 2019

Abstract

Few-shot learning is an object-categorization problem in which a classifier must distinguish new classes from only a few labeled examples. There has been significant progress in this field, much of it built on complex network architectures, and most prior work has relied on small datasets and long training schedules. In this paper, experiments were carried out with a limited episodic-training architecture consisting of a Relation Network as the classification network, a ResNet embedding as the embedding module, and self-attention as the attention mechanism. Experiments and comparisons with state-of-the-art models show that attention combined with metric-based meta-learning generalizes more quickly under short training and yields good results. The architecture was tested on the challenging miniImageNet dataset, where it achieved 62.9% accuracy, close to the state-of-the-art metric-based meta-learning architectures.
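
As a rough illustration of the pipeline the abstract describes, the sketch below wires a ResNet embedding, a self-attention block, and a Relation Network-style comparison module into a single forward pass for one episode. It is a minimal PyTorch sketch assuming a truncated ResNet-18 backbone, SAGAN-style self-attention, and a small relation head; the class names (AttentionGuidedRelationNet, SelfAttention, RelationModule), layer sizes, and episode shapes are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch only: backbone choice, layer sizes, and episode shapes
# are assumptions for illustration, not the paper's reported setup.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class SelfAttention(nn.Module):
    """Self-attention over the spatial positions of a feature map."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)     # (b, hw, c/8)
        k = self.key(x).flatten(2)                        # (b, c/8, hw)
        attn = torch.softmax(q @ k, dim=-1)               # (b, hw, hw)
        v = self.value(x).flatten(2)                      # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                       # attended features


class RelationModule(nn.Module):
    """Scores a concatenated (support, query) feature pair in [0, 1]."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2 * channels, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Sequential(
            nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, pair):
        return self.fc(self.conv(pair).flatten(1))


class AttentionGuidedRelationNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep the convolutional stages only; drop the avgpool and fc head.
        self.embed = nn.Sequential(*list(backbone.children())[:-2])
        self.attend = SelfAttention(512)
        self.relate = RelationModule(512)

    def forward(self, support, query):
        # support: (n_way, C, H, W), one image per class (1-shot episode)
        # query:   (n_query, C, H, W)
        s = self.attend(self.embed(support))              # (n_way, 512, h, w)
        q = self.attend(self.embed(query))                # (n_query, 512, h, w)
        n_way, n_query = s.size(0), q.size(0)
        s_ext = s.unsqueeze(0).expand(n_query, -1, -1, -1, -1)
        q_ext = q.unsqueeze(1).expand(-1, n_way, -1, -1, -1)
        pairs = torch.cat([s_ext, q_ext], dim=2).flatten(0, 1)
        return self.relate(pairs).view(n_query, n_way)    # relation scores
```

In an episodic setup, each query's relation scores would typically be trained against one-hot targets (for example with a mean-squared-error loss, as in the original Relation Network), with 5-way episodes sampled from miniImageNet; those training details are omitted from this sketch.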



    Published In

    ICCCM '19: Proceedings of the 7th International Conference on Computer and Communications Management
    July 2019
    260 pages
    ISBN: 9781450371957
    DOI: 10.1145/3348445

    In-Cooperation

    • Chongqing University of Posts and Telecommunications

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 27 July 2019


    Author Tags

    1. Five-Shot Image Classification
    2. Relation Network
    3. ResNet Embedding
    4. Self Attention

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICCCM 2019



    Cited By

    • (2024) Palm vein recognition in few-shot learning via modified Siamese Network. 2024 4th International Conference on Neural Networks, Information and Communication (NNICE), pp. 598-603. DOI: 10.1109/NNICE61279.2024.10498780. Online publication date: 19-Jan-2024.
    • (2024) FSTL-SA: few-shot transfer learning for sentiment analysis from facial expressions. Multimedia Tools and Applications. DOI: 10.1007/s11042-024-20518-y. Online publication date: 19-Dec-2024.
    • (2023) A Survey of Few-Shot Learning for Image Classification of Aerial Objects. Proceedings of the 10th Chinese Society of Aeronautics and Astronautics Youth Forum, pp. 570-582. DOI: 10.1007/978-981-19-7652-0_50. Online publication date: 1-Jan-2023.
    • (2022) Few-Shot Image Classification: Current Status and Research Trends. Electronics, 11(11):1752. DOI: 10.3390/electronics11111752. Online publication date: 31-May-2022.
