Abstract
The superior performance of traditional Semi-Supervised Learning (SSL) methods is generally achieved only in strictly data-constrained scenarios, e.g., when the class distributions of the labeled and unlabeled data match. In realistic scenarios, however, unlabeled data is gathered from a variety of sources, and it is difficult to guarantee that its class distribution is consistent with that of the labeled data. This paper therefore considers a more realistic and widespread paradigm in which the labeled and unlabeled data come from mismatched distributions, dubbed Open-Set Semi-Supervised Learning (OS-SSL). Specifically, the unlabeled data contains out-of-distribution (OOD) samples, i.e., samples that do not belong to any of the labeled categories. Existing research demonstrates that OOD samples can degrade classification performance, so OS-SSL methods usually filter them out during model training. In this work, we propose a simple but effective method, LaRW, which accounts for the overconfident predictions of classifiers and the learning difficulty of each category while attempting to exploit the OOD samples. First, we apply a label propagation algorithm at the feature level to assist in producing pseudo-labels, which improves pseudo-label quality. Second, we design a novel OOD detection score to better filter OOD samples. Finally, we evaluate our method against existing SSL and OS-SSL methods under several settings. Extensive empirical results demonstrate the effectiveness and extensibility of the proposed method.
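The abstract leaves the propagation step implicit, so the following is a minimal sketch of feature-level label propagation in the spirit of the classic algorithm of Zhou et al. (2003); the cosine-kNN graph construction, the function name propagate_labels, and the hyper-parameters k and alpha are illustrative assumptions, not the exact LaRW implementation.

```python
# A minimal sketch of feature-level label propagation for pseudo-labelling,
# in the spirit of Zhou et al. (2003). Graph construction and hyper-parameters
# are illustrative assumptions, not the exact LaRW implementation.
import numpy as np

def propagate_labels(feats, labels, num_classes, k=50, alpha=0.99):
    """feats: (n, d) feature matrix; labels: (n,) ints with -1 for unlabeled."""
    n = feats.shape[0]
    k = min(k, n - 1)
    feats = feats / np.maximum(np.linalg.norm(feats, axis=1, keepdims=True), 1e-12)
    sim = feats @ feats.T                      # cosine similarity in feature space
    np.fill_diagonal(sim, 0.0)

    # Sparse affinity graph: keep the k strongest edges per node, then symmetrize.
    W = np.zeros_like(sim)
    nn = np.argsort(-sim, axis=1)[:, :k]
    rows = np.arange(n)[:, None]
    W[rows, nn] = np.clip(sim[rows, nn], 0.0, None)
    W = (W + W.T) / 2.0

    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Seed matrix Y: one-hot rows for labeled samples, zero rows for unlabeled.
    Y = np.zeros((n, num_classes))
    mask = labels >= 0
    Y[mask, labels[mask]] = 1.0

    # Closed-form diffusion: F = (I - alpha * S)^{-1} Y.
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)

    pseudo = F.argmax(axis=1)                  # hard pseudo-labels
    conf = F.max(axis=1) / np.maximum(F.sum(axis=1), 1e-12)  # normalized confidence
    return pseudo, conf
```

In an OS-SSL pipeline, the propagated labels would be read off for the unlabeled rows only, and a per-sample confidence such as conf could feed a subsequent filtering or re-weighting step; the paper's actual OOD detection score is defined in the main text, not here.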
Data Availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Acknowledgements
This work was partially supported by the National Natural Science Foundation of China (NSFC) [Nos. 62006094, 61876071], the Scientific and Technological Developing Scheme of Jilin Province [Nos. 20180201003SF, 20190701031GH], and the Energy Administration of Jilin Province [No. 3D516L921421].
Author information
Contributions
All authors contributed to the study conception and design. The experiments were conceived and designed by Qingyi Meng and Dong Mao. The first draft of the manuscript was written by Dong Mao, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors have no competing interests to declare that are relevant to the content of this article.