An active unseen sample selection framework for generalized zero-shot classification

  • Original Article
  • International Journal of Machine Learning and Cybernetics

Abstract

The purpose of generalized zero-shot classification (GZSC) is to classify test samples regardless of whether they come from training classes (seen classes) or new classes (unseen classes). However, no labeled samples from the new classes are available during training, so selecting some unseen samples and labeling them is important for GZSC. We present two novel ideas for GZSC: (1) splitting target samples into seen and unseen ones; (2) improving GZSC performance with fewer manual annotations of the unseen samples. We propose an active unseen sample selection framework for GZSC tasks (AUSS). Specifically, a two-stage coarse-to-fine-grained selection method first splits target samples into seen and unseen ones. The selected unseen samples are then divided into high-confidence and informative ones. Unlike traditional active learning methods, which focus only on informative samples, we especially exploit the large number of high-confidence unseen samples: they are assigned pseudo labels and therefore need no manual annotation. We select as few informative unseen samples as possible for manual labeling. Thanks to these high-confidence and informative unseen samples, we do not need to train generative models to generate virtual unseen samples. Experiments on widely adopted GZSC benchmarks demonstrate the advantages of AUSS over existing methods.
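Since the full text is paywalled here, the following is only a minimal, hypothetical Python sketch of the coarse-to-fine selection described in the abstract, assuming a softmax classifier trained on the seen classes and row-normalized similarity scores against unseen-class prototypes (e.g., attribute embeddings) are available. The function name select_unseen, the thresholds tau_seen and tau_conf, and the toy scoring are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of the AUSS-style selection idea (names invented here):
    # 1) coarse stage: samples the seen-class model is unsure about are candidate unseen
    # 2) fine stage: confident unseen predictions get pseudo labels for free,
    #    ambiguous ("informative") ones are queued for manual annotation.
    import numpy as np

    def select_unseen(seen_probs, unseen_scores, tau_seen=0.5, tau_conf=0.9):
        """seen_probs: (n, n_seen) softmax outputs of a classifier trained on seen classes.
        unseen_scores: (n, n_unseen) similarity of each sample to unseen-class
        prototypes, row-normalized to sum to 1.
        Returns pseudo-labeled indices and labels, plus indices to annotate manually."""
        # Coarse stage: low max seen-class probability suggests the sample is unseen.
        candidate = seen_probs.max(axis=1) < tau_seen

        # Fine stage: rank candidates by the confidence of the unseen-class scorer.
        top = unseen_scores.max(axis=1)
        pseudo_mask = candidate & (top >= tau_conf)   # high-confidence: pseudo-label
        query_mask = candidate & (top < tau_conf)     # informative: ask an annotator

        pseudo_labels = unseen_scores[pseudo_mask].argmax(axis=1)
        return np.where(pseudo_mask)[0], pseudo_labels, np.where(query_mask)[0]

    # Toy usage with random scores standing in for real model outputs.
    rng = np.random.default_rng(0)
    seen_probs = rng.dirichlet(np.ones(10), size=100)    # 10 seen classes
    unseen_scores = rng.dirichlet(np.ones(5), size=100)  # 5 unseen classes
    pseudo_idx, pseudo_labels, query_idx = select_unseen(seen_probs, unseen_scores)
    print(len(pseudo_idx), "pseudo-labeled,", len(query_idx), "queued for annotation")

In this sketch, only the samples in query_idx would be shown to a human annotator, while the pseudo-labeled ones expand the training set at no labeling cost, which is what removes the need for a generative model producing virtual unseen samples.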

Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grant No. 61806155, the Natural Science Foundation of Shaanxi Province under Grant Nos. 2020GY-062 and 2020JQ-323, the China Postdoctoral Science Foundation under Grant No. 2018M631125, the Fundamental Research Funds for the Central Universities under Grant No. XJS200303, and the Natural Science Foundation of Anhui Province under Grant No. 1908085MF186.

Author information

Corresponding author

Correspondence to Min Fang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Li, X., Fang, M. & Chen, B. An active unseen sample selection framework for generalized zero-shot classification. Int. J. Mach. Learn. & Cyber. 13, 2119–2134 (2022). https://doi.org/10.1007/s13042-022-01509-7
