Abstract
Positive and Unlabeled learning (PU learning) learns a binary classifier from training data that contains only positive and unlabeled instances. Recent cost-sensitive methods tackle this problem by designing unbiased loss functions and achieve state-of-the-art performance. However, we observe that such models suffer from overfitting in the late training stage because unlabeled positive samples are treated as negative ones. This motivates us to propose PUSP, a novel framework that leverages sample selection and prototype refinement to tackle the PU learning problem. We first carefully select reliable samples based on the temporal consistency of the model output and the spatial consistency of different augmented views of the image contents, as sketched below. We then assign labels to the unselected samples based on their feature similarities with prototypes computed from the selected ones. Extensive experiments demonstrate the effectiveness of our method across different datasets and modalities.
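To make the two-stage idea concrete, below is a minimal PyTorch sketch of what such a pipeline could look like. The function names (`select_reliable`, `prototype_refine`), the confidence threshold `tau`, the epoch-averaging window, and the use of cosine similarity are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def select_reliable(probs_history, probs_view1, probs_view2, tau=0.9):
    """Select unlabeled samples whose predictions are consistent
    (i) over recent epochs (temporal consistency) and (ii) across two
    augmented views of the same image (spatial consistency).
    probs_history: (T, N, 2) softmax outputs from the last T epochs.
    probs_view1/2: (N, 2) softmax outputs for two augmented views."""
    mean_probs = probs_history.mean(dim=0)                        # (N, 2)
    temporal_ok = mean_probs.max(dim=1).values > tau              # stable over time
    spatial_ok = probs_view1.argmax(1) == probs_view2.argmax(1)   # views agree
    labels = mean_probs.argmax(dim=1)                             # tentative labels
    return temporal_ok & spatial_ok, labels

def prototype_refine(feats_selected, labels_selected, feats_rest):
    """Build one prototype per class from the selected samples, then
    label each remaining sample by its nearest prototype (cosine)."""
    protos = torch.stack([
        feats_selected[labels_selected == c].mean(dim=0) for c in (0, 1)
    ])                                                            # (2, D)
    sims = F.cosine_similarity(
        feats_rest.unsqueeze(1), protos.unsqueeze(0), dim=2)      # (M, 2)
    return sims.argmax(dim=1)                                     # pseudo-labels
```

In a full training loop, the pseudo-labels returned by `prototype_refine` would presumably feed back into the classification loss on the unselected samples, but the exact loss and training schedule are beyond what the abstract specifies.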
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Wang, Z., Long, G. (2022). Positive Unlabeled Learning by Sample Selection and Prototype Refinement. In: Chen, W., Yao, L., Cai, T., Pan, S., Shen, T., Li, X. (eds) Advanced Data Mining and Applications. ADMA 2022. Lecture Notes in Computer Science, vol. 13725. Springer, Cham. https://doi.org/10.1007/978-3-031-22064-7_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-22063-0
Online ISBN: 978-3-031-22064-7
eBook Packages: Computer Science, Computer Science (R0)