research-article

Hi-PART: Going Beyond Graph Pooling with Hierarchical Partition Tree for Graph-Level Representation Learning

Published: 12 February 2024

Abstract

Graph pooling refers to the operation that maps a set of node representations into a compact form for graph-level representation learning. However, existing graph pooling methods are bounded by the discriminative power of the Weisfeiler–Lehman (WL) test, and they often suffer from hyper-parameter sensitivity and training instability. To address these issues, we propose Hi-PART, a simple yet effective graph neural network (GNN) framework built on a Hierarchical Partition Tree (HPT). In an HPT, each layer is a partition of the graph at a different level of granularity, growing finer from top to bottom. This structure allows us to quantify the graph structure information contained in the HPT with the aid of structural information theory. Algorithmically, by employing GNNs to summarize node features into a graph feature following the HPT’s hierarchical structure, Hi-PART adequately leverages graph structure information and provably exceeds the power of the WL test. Because HPT optimization is separated from graph representation learning, Hi-PART introduces only one extra hyper-parameter, the height of the HPT, and enjoys higher training stability. Empirical results on graph classification benchmarks validate the superior expressive power and generalization ability of Hi-PART compared with state-of-the-art graph pooling approaches.
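The structural information theory invoked above assigns an entropy to a hierarchy over a graph, which is how the information in an HPT can be quantified. As a minimal, illustrative sketch (not the paper's implementation), the height-2 special case of Li and Pan's structural entropy, i.e., the entropy of a single partition of the node set, can be computed as follows; the function and variable names are our own:

```python
import math
from collections import defaultdict

def structural_entropy(edges, partition):
    """Two-level structural entropy (Li & Pan, 2016) of an undirected graph
    under a single partition of its nodes (an encoding tree of height 2).

    edges:     list of (u, v) pairs
    partition: dict mapping each node to its block label
    """
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    two_m = 2 * len(edges)  # total volume of the graph

    # Volume V_C (sum of member degrees) and cut size g_C of each block C.
    volume = defaultdict(int)
    cut = defaultdict(int)
    for v, c in partition.items():
        volume[c] += degree[v]
    for u, v in edges:
        if partition[u] != partition[v]:
            cut[partition[u]] += 1
            cut[partition[v]] += 1

    h = 0.0
    # Leaf terms: one per node v, whose parent in the tree is its block C(v).
    for v, c in partition.items():
        if degree[v] > 0:
            h -= degree[v] / two_m * math.log2(degree[v] / volume[c])
    # Internal terms: one per block C, whose parent is the root (volume 2m).
    for c, vol in volume.items():
        if cut[c] > 0:
            h -= cut[c] / two_m * math.log2(vol / two_m)
    return h
```

For two triangles joined by a single edge, the two-triangle partition gives about 1.70 bits, lower than the roughly 2.56 bits of the all-singletons partition, consistent with the principle that hierarchies capturing real community structure yield lower structural entropy.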



• Published in
  ACM Transactions on Knowledge Discovery from Data, Volume 18, Issue 4 (May 2024), 707 pages
  ISSN: 1556-4681, EISSN: 1556-472X
  DOI: 10.1145/3613622


Publisher

Association for Computing Machinery, New York, NY, United States

        Publication History

        • Published: 12 February 2024
        • Online AM: 14 December 2023
        • Accepted: 29 November 2023
        • Revised: 22 May 2023
        • Received: 1 July 2022

