DOI: 10.1145/3459637.3482151

Multimodal Graph Meta Contrastive Learning

Published: 30 October 2021

ABSTRACT

In recent years, graph contrastive learning has achieved promising node classification accuracy with graph neural networks (GNNs), which can learn representations in an unsupervised manner. However, although such representations perform well on seen classes, they do not generalize to unseen novel classes for which only a few labeled samples are available. To endow graph contrastive learning with this generalization capability, we propose Multimodal Graph Meta Contrastive Learning (MGMC), which integrates multimodal meta-learning into graph contrastive learning. On the one hand, MGMC achieves effective fast adaptation to unseen novel classes via bilevel meta-optimization, addressing few-shot problems. On the other hand, MGMC generalizes quickly to generic datasets with multimodal distributions by introducing a FiLM-based modulation module. In addition, MGMC incorporates a recent graph contrastive learning method that does not rely on the construction of augmentations and negative examples. To the best of our knowledge, this is the first work to investigate graph contrastive learning for few-shot problems. Extensive experimental results on three graph-structured datasets demonstrate the effectiveness of the proposed MGMC on few-shot node classification tasks.
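The FiLM-based modulation mentioned above conditions the encoder on a task representation through feature-wise scaling and shifting. The snippet below is a minimal, illustrative PyTorch sketch of such a modulation layer applied to node embeddings; the module name, tensor shapes, and dimensions are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of FiLM-style modulation (assumed names and shapes,
# not the paper's code): predict a per-dimension scale (gamma) and shift
# (beta) from a task embedding and apply them to node embeddings.
import torch
import torch.nn as nn


class FiLMModulation(nn.Module):
    def __init__(self, task_dim: int, hidden_dim: int):
        super().__init__()
        self.to_gamma = nn.Linear(task_dim, hidden_dim)  # produces the scale
        self.to_beta = nn.Linear(task_dim, hidden_dim)   # produces the shift

    def forward(self, node_embeddings: torch.Tensor,
                task_embedding: torch.Tensor) -> torch.Tensor:
        gamma = self.to_gamma(task_embedding)   # shape: (hidden_dim,)
        beta = self.to_beta(task_embedding)     # shape: (hidden_dim,)
        # Broadcast over the node dimension: (num_nodes, hidden_dim)
        return gamma * node_embeddings + beta


# Toy usage: 5 nodes with 16-dim embeddings, one 8-dim task embedding.
film = FiLMModulation(task_dim=8, hidden_dim=16)
h = torch.randn(5, 16)
z = torch.randn(8)
print(film(h, z).shape)  # torch.Size([5, 16])
```

In a meta-learning setup of this kind, the modulated embeddings would then be adapted per task in the inner loop of the bilevel optimization, while the encoder and modulation parameters are updated in the outer loop.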


Supplemental Material

CIKM21-rgsp2612.mp4 (mp4, 15.6 MB)


Published in

CIKM '21: Proceedings of the 30th ACM International Conference on Information & Knowledge Management
October 2021, 4966 pages
ISBN: 9781450384469
DOI: 10.1145/3459637

        Copyright © 2021 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


        Qualifiers

        • short-paper

        Acceptance Rates

Overall Acceptance Rate: 1,861 of 8,427 submissions, 22%
