
Few-Shot Learning Remote Scene Classification Based on DC-2DEC

  • Conference paper
Spatial Data and Intelligence (SpatialDI 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14619)


Abstract

Few-shot learning image classification (FSLIC) has gained increasing attention in recent years, because collecting and annotating large numbers of samples is expensive in many specialised domains. Few-shot remote sensing scene classification (FRSSC) is of great utility in scenarios where samples are scarce and labelling is extremely costly; the core problem of this task is how to recognise new classes from only a few labelled examples. However, existing work tends to pursue ever more complicated feature extraction, and the resulting gains are not satisfactory. This paper aims to improve the effectiveness of FSLIC not only through feature extraction but also by exploring complementary avenues. Training on scarce data in a few-shot learning (FSL) task often results in a biased feature distribution. We propose to address this issue by calibrating the support-set feature distribution using statistics from the abundant base-class data. Our distribution calibration (DC) module sits on top of the feature extractor and requires no additional parameters. The feature extractor is further optimised with a self-supervised pretext task that exploits the spatial context structure of the image, namely rotation prediction. We refer to the proposed method as DC-2DEC and apply it to few-shot classification of remote sensing (RS) scene images. Through experiments on traditional few-shot datasets and RS image datasets, we validate the algorithm and present the corresponding results. These results demonstrate the competitiveness of DC-2DEC and its efficacy in few-shot classification of RS images.
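The distribution-calibration idea sketched in the abstract (borrowing statistics from base classes to de-bias the support-set feature distribution) can be illustrated as follows. This is a minimal sketch of the general technique, not the paper's implementation; all names and hyper-parameter values (`lam`, `k`, `alpha`) are illustrative assumptions.

```python
import numpy as np

def tukey_transform(x, lam=0.5):
    # Tukey's ladder-of-powers transform; assumes non-negative features.
    # Makes the feature distribution more Gaussian-like before calibration.
    return np.power(x, lam) if lam != 0 else np.log(x)

def calibrate(support_feat, base_means, base_covs, k=2, alpha=0.2):
    # Pick the k base classes whose means lie closest to the support feature,
    # then blend their statistics with it to estimate a calibrated Gaussian.
    idx = np.argsort(np.linalg.norm(base_means - support_feat, axis=1))[:k]
    mean = np.mean(np.vstack([base_means[idx], support_feat[None]]), axis=0)
    cov = np.mean(base_covs[idx], axis=0) + alpha  # alpha spreads the estimate
    return mean, cov

rng = np.random.default_rng(0)
d, n_base = 64, 10
base_means = rng.normal(size=(n_base, d))          # per-base-class means
base_covs = np.stack([np.eye(d)] * n_base)         # per-base-class covariances
support = np.abs(rng.normal(size=d))               # one labelled support feature
mean, cov = calibrate(tukey_transform(support), base_means, base_covs)
extra = rng.multivariate_normal(mean, cov, size=5) # synthesised extra features
```

The synthesised features can then augment the support set when fitting a simple classifier, which is what makes the calibration step parameter-free with respect to the network.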

This work is supported by the National Key R&D Program of China under Grant 2022YFF0503900.
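The rotation-prediction pretext task mentioned in the abstract can be sketched in a few lines: each image is rotated by 0, 90, 180 and 270 degrees, and the rotation index becomes a free self-supervised label for an auxiliary 4-way head on the shared feature extractor. The helper below is a hypothetical illustration, not the paper's code.

```python
import numpy as np

def rotate_batch(x):
    """x: (N, H, W, C) images -> (4N, H, W, C) rotated copies plus labels 0-3.

    Label k means the image was rotated by k * 90 degrees; predicting k is
    the self-supervised pretext target."""
    rots = [np.rot90(x, k, axes=(1, 2)) for k in range(4)]
    labels = np.repeat(np.arange(4), x.shape[0])
    return np.concatenate(rots, axis=0), labels
```

In a typical setup, a small linear head on top of the feature extractor is trained to predict `labels` jointly with the main classification loss, encouraging the features to encode the spatial context structure of the scene.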



Author information

Correspondence to Zhiming Ding.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wang, Z., Ding, Z., Wang, Y. (2024). Few-Shot Learning Remote Scene Classification Based on DC-2DEC. In: Meng, X., Zhang, X., Guo, D., Hu, D., Zheng, B., Zhang, C. (eds) Spatial Data and Intelligence. SpatialDI 2024. Lecture Notes in Computer Science, vol 14619. Springer, Singapore. https://doi.org/10.1007/978-981-97-2966-1_21

Download citation

  • DOI: https://doi.org/10.1007/978-981-97-2966-1_21


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-2965-4

  • Online ISBN: 978-981-97-2966-1

  • eBook Packages: Computer Science (R0)
