Abstract
Contrastive clustering is an effective deep clustering approach that learns both instance-level and cluster-level consistency in a contrastive learning fashion. However, the data augmentation strategies it relies on constitute important prior knowledge: inappropriate strategies can cause severe performance degradation. By casting the different data augmentation strategies as a multi-view problem, we propose a safe contrastive clustering method that is guaranteed to alleviate the reliance on this prior knowledge. The proposed method maximizes the complementary information among the different views while minimizing the noise introduced by inferior views. It achieves safeness in the sense that contrastive clustering with multiple data augmentation strategies performs no worse than contrastive clustering with any single one of those strategies. Moreover, we provide a theoretical guarantee that the proposed method achieves empirical safeness. Extensive experiments demonstrate that our method attains safe contrastive clustering on popular benchmark datasets.
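The instance-level consistency mentioned above is typically learned with an NT-Xent-style contrastive loss, where each sample's embedding under one augmentation strategy must match its embedding under another. The following is a minimal NumPy sketch of such a standard two-view instance-level loss, not the paper's exact objective; the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """Instance-level contrastive (NT-Xent) loss over two augmented views.

    z_a, z_b: (n, d) row-wise L2-normalised embeddings of the same n samples
    produced by two different data augmentation strategies ("views").
    """
    n = z_a.shape[0]
    z = np.vstack([z_a, z_b])                      # stack both views: (2n, d)
    sim = z @ z.T / temperature                    # temperature-scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                 # exclude self-similarity terms
    # the positive for row i is the same sample's embedding in the other view
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimising this loss pulls the two views of each sample together and pushes all other pairs apart; a cluster-level counterpart applies the same idea to columns of the soft cluster-assignment matrix.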
Acknowledgments
This work is supported by the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China (2021030199).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Tang, P., Tang, H., Wang, W., Liu, Y. (2023). Safe Contrastive Clustering. In: Dang-Nguyen, DT., et al. MultiMedia Modeling. MMM 2023. Lecture Notes in Computer Science, vol 13833. Springer, Cham. https://doi.org/10.1007/978-3-031-27077-2_23
Print ISBN: 978-3-031-27076-5
Online ISBN: 978-3-031-27077-2