Abstract
The purpose of generalized zero-shot classification (GZSC) is to classify test samples regardless of whether they come from training classes (seen classes) or new classes (unseen classes). However, no labeled samples from the new classes are available during training. Selecting some unseen samples and labeling them is therefore important for GZSC. We present two novel ideas for GZSC: (1) splitting target samples into seen and unseen ones; (2) improving GZSC performance with fewer manual annotations of the unseen samples. We propose an active unseen sample selection framework for GZSC tasks (AUSS). Specifically, a two-stage coarse-to-fine-grained selection method first splits target samples into seen and unseen ones. The selected unseen samples are then divided into high-confidence and informative ones. Unlike traditional active learning methods, which focus only on the informative samples, we especially exploit the large number of high-confidence unseen samples. The high-confidence unseen samples are assigned pseudo labels and therefore need no manual labeling; only as few informative unseen samples as possible are selected for manual labeling. Thanks to these high-confidence and informative unseen samples, we do not need to train generative models to synthesize virtual unseen samples. Experiments on widely adopted GZSC benchmarks demonstrate the advantages of AUSS over existing methods.
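The two-stage idea in the abstract can be illustrated with a minimal sketch: a coarse seen/unseen split based on the seen-class classifier's confidence, followed by a fine-grained split of the predicted-unseen samples into high-confidence ones (pseudo-labeled for free) and informative ones (sent for manual annotation). The thresholds, score shapes, and function name here are hypothetical illustrations, not the paper's exact selection rule.

```python
import numpy as np

def select_unseen_samples(seen_scores, unseen_scores,
                          seen_thresh=0.5, conf_thresh=0.9):
    """Toy coarse-to-fine selection (illustrative, assumed thresholds).

    seen_scores:   (n, S) softmax scores over seen classes
    unseen_scores: (n, U) softmax scores over unseen classes
    Returns: predicted-seen indices, high-confidence unseen indices,
             their pseudo labels, and informative indices to annotate.
    """
    # Stage 1 (coarse): samples the seen-class model is unsure about
    # are treated as candidate unseen samples.
    is_seen = seen_scores.max(axis=1) >= seen_thresh
    unseen_idx = np.where(~is_seen)[0]

    # Stage 2 (fine): split candidates by unseen-class confidence.
    conf = unseen_scores[unseen_idx].max(axis=1)
    high = unseen_idx[conf >= conf_thresh]        # pseudo-label, no annotation
    informative = unseen_idx[conf < conf_thresh]  # few samples to label manually

    pseudo_labels = unseen_scores[high].argmax(axis=1)
    return np.where(is_seen)[0], high, pseudo_labels, informative
```

With this division, the annotation budget is spent only on the informative subset, while the (typically much larger) high-confidence subset contributes pseudo-labeled training data at no labeling cost.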







Acknowledgements
This work is supported by the National Natural Science Foundation of China under Grant No. 61806155, the Natural Science Foundation of Shaanxi Province (Grant Nos. 2020GY-062, 2020JQ-323), the China Postdoctoral Science Foundation funded project under Grant No. 2018M631125, the Fundamental Research Funds for the Central Universities under Grant No. XJS200303, and the Natural Science Foundation of Anhui Province under Grant No. 1908085MF186.
Cite this article
Li, X., Fang, M. & Chen, B. An active unseen sample selection framework for generalized zero-shot classification. Int. J. Mach. Learn. & Cyber. 13, 2119–2134 (2022). https://doi.org/10.1007/s13042-022-01509-7