Abstract
Siamese-structure-based contrastive learning has shown excellent performance in learning visual representations due to its ability to minimize the distance between positive pairs and maximize the distance between negative pairs. Existing works mostly employ RandomCrop or ContrastiveCrop to obtain the positive pairs of an image. However, RandomCrop often yields views dominated by irrelevant background, while ContrastiveCrop produces positive pairs that are too similar to each other. In this paper, we propose SemanticCrop, a novel cropping strategy that yields views containing as much semantic information as possible. Specifically, SemanticCrop first computes a heatmap of an image. Then, an empirically tuned threshold is used to box out a semantic region whose heatmap values exceed this threshold. Finally, we design a center-suppressed probabilistic sampling that avoids excessive similarity between positive pairs, so that each cropped view covers more parts of the object. As a plug-and-play module, SemanticCrop improves the accuracy of MoCo, SimCLR, SimSiam, and BYOL by 0.5% to 2.34% on the CIFAR10, CIFAR100, IN-200, and IN-1K datasets. The code is available at https://github.com/GZHU-DVL/SemanticCrop.
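To make the two cropping steps concrete, below is a minimal NumPy sketch of the idea described above: threshold the heatmap into a semantic bounding box, then sample crop centers from a distribution that suppresses the box center. The function names, the default threshold, and the use of a U-shaped Beta(alpha, alpha) distribution (alpha < 1) for "center-suppressed" sampling are illustrative assumptions, not the authors' released implementation (see the GitHub link above).

```python
import numpy as np

def semantic_box(heatmap, threshold=0.1):
    """Box (x0, y0, x1, y1) enclosing all heatmap values >= threshold.

    `heatmap` is assumed to be an (H, W) array normalized to [0, 1]; the
    default threshold is a placeholder, not the value tuned in the paper.
    """
    ys, xs = np.nonzero(heatmap >= threshold)
    if xs.size == 0:                       # nothing exceeds the threshold:
        h, w = heatmap.shape               # fall back to the full image
        return 0, 0, w, h
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

def center_suppressed_crop(box, crop_w, crop_h, img_w, img_h, alpha=0.6, rng=None):
    """Sample one crop window whose center lies inside `box`.

    Beta(alpha, alpha) with alpha < 1 is U-shaped on [0, 1], so sampled crop
    centers are pushed away from the middle of the semantic box; this is only
    one illustrative choice of center-suppressed distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    x0, y0, x1, y1 = box
    u, v = rng.beta(alpha, alpha, size=2)
    cx = x0 + u * (x1 - x0)                # crop center, biased toward box borders
    cy = y0 + v * (y1 - y0)
    left = int(np.clip(cx - crop_w / 2, 0, max(img_w - crop_w, 0)))
    top = int(np.clip(cy - crop_h / 2, 0, max(img_h - crop_h, 0)))
    return left, top, left + crop_w, top + crop_h

# Usage: two positive views of a 224x224 image from a toy heatmap.
heatmap = np.zeros((224, 224))
heatmap[60:180, 50:170] = 1.0
box = semantic_box(heatmap, threshold=0.5)
view1 = center_suppressed_crop(box, 96, 96, 224, 224)
view2 = center_suppressed_crop(box, 96, 96, 224, 224)
```

In a contrastive pipeline, the two sampled windows would replace the boxes produced by RandomCrop before the usual augmentations (flip, color jitter, etc.) are applied to form the positive pair.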
Acknowledgement
This work was supported in part by the National Natural Science Foundation of China under Grants 62272116 and 62002075, in part by the Basic and Applied Basic Research Foundation of Guangdong Province under Grant 2023A1515011428, and in part by the Science and Technology Foundation of Guangzhou under Grant 2023A04J1723. The authors acknowledge the Network Center of Guangzhou University for providing HPC computing resources.
Cite this paper
Fang, Y., Chen, Z., Tang, W., Wang, Y.G.: SemanticCrop: boosting contrastive learning via semantic-cropped views. In: Liu, Q., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2023. LNCS, vol. 14430. Springer, Singapore (2024). https://doi.org/10.1007/978-981-99-8537-1_27