DOI: 10.1145/3589334.3645327

Towards Expansive and Adaptive Hard Negative Mining: Graph Contrastive Learning via Subspace Preserving

Published: 13 May 2024

Abstract

Graph Neural Networks (GNNs) have emerged as the predominant approach for analyzing graph data on the web and beyond. Contrastive learning (CL), a self-supervised paradigm, not only reduces reliance on annotations but also shows promise for improving performance. However, the hard negative sampling strategies that benefit CL in other domains prove ineffective in Graph Contrastive Learning (GCL) due to the message passing mechanism. Embracing the subspace hypothesis from clustering, we propose a method for expansive and adaptive hard negative mining, referred to as Graph contRastive leArning via subsPace prEserving (GRAPE). Beyond homophily, we argue that false negatives are prevalent over an expansive range and that exploring them benefits GCL. Diverging from existing neighbor-based methods, our method mines long-range hard negatives throughout the subspace, where message passing is conceived as interactions between subspaces. Additionally, our method adaptively scales the hard negative set through subspace preservation during training. In practice, we develop two schemes to enhance GCL that are pluggable into existing GCL frameworks. We analyze the underlying mechanisms and investigate connections to related methods. Comprehensive experiments demonstrate that our method outperforms existing approaches across diverse graph datasets and remains competitive across varied application scenarios. Our code is available at https://github.com/zz-haooo/WWW24-GRAPE.
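The abstract does not spell out the mining procedure. As a rough illustration of how a subspace-preserving criterion can be plugged into a contrastive objective, the sketch below fits a self-expressive coefficient matrix over node embeddings (a standard subspace-clustering device, solved here with ridge-regularized least squares) and masks the pairs with the largest coefficients out of the InfoNCE denominator as likely false negatives. All function names and hyperparameters (self_expressive_coefficients, masked_info_nce, lam, mask_ratio, tau) are illustrative assumptions, not the authors' actual GRAPE schemes.

```python
# Minimal sketch (PyTorch): subspace self-expression used to flag likely
# false negatives in a cross-view InfoNCE loss. Hyperparameters are
# placeholders, not the paper's settings.
import torch
import torch.nn.functional as F


def self_expressive_coefficients(z: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """Closed-form ridge solution of min_C ||Z - C Z||_F^2 + lam ||C||_F^2,
    where rows of Z are node embeddings."""
    n = z.size(0)
    gram = z @ z.t()                                            # (n, n) inner products
    c = torch.linalg.solve(gram + lam * torch.eye(n, device=z.device), gram)
    c.fill_diagonal_(0.0)   # crude removal of self-representation (a proper
                            # formulation would constrain diag(C) = 0)
    return c.abs()


def masked_info_nce(z1, z2, coeff, mask_ratio=0.05, tau=0.5):
    """Cross-view InfoNCE where, per anchor, the negatives with the largest
    self-expressive coefficients are treated as likely false negatives and
    dropped from the denominator."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                                     # (n, n) similarities
    n = sim.size(0)
    k = max(1, int(mask_ratio * n))
    topk = coeff.topk(k, dim=1).indices                         # suspected same-subspace nodes
    neg_mask = torch.ones_like(sim, dtype=torch.bool)
    neg_mask.scatter_(1, topk, False)                           # drop suspected false negatives
    neg_mask.fill_diagonal_(True)                               # always keep the positive pair
    logits = sim.masked_fill(~neg_mask, float('-inf'))
    return F.cross_entropy(logits, torch.arange(n, device=z1.device))


if __name__ == "__main__":
    torch.manual_seed(0)
    z1, z2 = torch.randn(128, 64), torch.randn(128, 64)         # two augmented views
    coeff = self_expressive_coefficients(z1.detach(), lam=10.0)
    loss = masked_info_nce(z1, z2, coeff)
    print(f"masked InfoNCE loss: {loss.item():.4f}")
```

This only shows the false-negative-masking side of the idea; GRAPE's actual hard negative mining and its adaptive scaling of the hard negative set during training are not reproduced by the fixed mask_ratio above.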

Supplemental Material

MP4 File: Presentation video
MP4 File: Supplemental video


Cited By

  • Topology reorganized graph contrastive learning with mitigating semantic drift. Pattern Recognition, Vol. 159, 111160 (Mar 2025). DOI: 10.1016/j.patcog.2024.111160
  • Deep Ensemble Remote Sensing Scene Classification via Category Distribution Association. Remote Sensing, Vol. 16, 21 (Nov 2024), 4084. DOI: 10.3390/rs16214084
  • Spatial–Spectral Graph Contrastive Clustering With Hard Sample Mining for Hyperspectral Images. IEEE Transactions on Geoscience and Remote Sensing, Vol. 62 (2024), 1--16. DOI: 10.1109/TGRS.2024.3464648

Published In

WWW '24: Proceedings of the ACM Web Conference 2024
May 2024
4826 pages
ISBN: 9798400701719
DOI: 10.1145/3589334

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 13 May 2024

Author Tags

  1. graph contrastive learning
  2. graph neural networks
  3. hard negative mining
  4. web data mining

Qualifiers

  • Research-article

Conference

WWW '24: The ACM Web Conference 2024
May 13--17, 2024
Singapore, Singapore

Acceptance Rates

Overall Acceptance Rate 1,899 of 8,196 submissions, 23%
