Hierarchical Graph Representation Learning with Structural Attention for Graph Classification

  • Conference paper: Artificial Intelligence (CICAI 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13605)

Abstract

Graph neural networks (GNNs) have recently exhibited strong expressive power in modeling graph-structured data and have been shown to work effectively for graph classification tasks. However, existing GNN models for predicting graph categories frequently neglect graph hierarchy information or fail to accurately capture graph substructures, resulting in significant performance degradation. In this paper, we propose Hierarchical Graph Representation Learning (HGRL), a multi-level framework that captures hierarchical local and global topological structures to enrich graph representations. Specifically, we employ a structural coarsening module that generates a series of coarsened graphs for an input graph instance, followed by a graph encoder that preserves local graph structure information. Furthermore, graph convolutional networks are stacked to capture high-dimensional proximity in graphs, and we incorporate an attention mechanism for whole-graph embedding, which enables our framework to focus on critical nodes that contribute most to the learned low-dimensional graph representations for subsequent graph classification tasks. Experimental results on multiple benchmark datasets demonstrate that the proposed HGRL substantially improves classification accuracy and outperforms existing state-of-the-art graph classification approaches.
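The abstract does not give the paper's exact formulation, but the two core operations it describes, structural coarsening and attention-weighted graph readout, can be sketched roughly in NumPy. The function names, the matching-based merge scheme, and the single-vector attention scoring below are illustrative assumptions for exposition, not the authors' implementation:

```python
import numpy as np

def attention_readout(X, w):
    """Attention-weighted graph readout (a sketch of the idea, not the
    authors' exact mechanism): each node embedding gets a scalar score,
    scores are softmax-normalized, and the graph embedding is the
    score-weighted sum of node embeddings, so high-scoring nodes
    dominate the graph-level representation.

    X : (n, d) node embedding matrix
    w : (d,)   attention parameter vector (assumed learned)
    """
    scores = X @ w                        # (n,) raw attention logits
    scores = scores - scores.max()        # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax weights
    return alpha @ X                      # (d,) graph-level embedding

def coarsen(A, assignment):
    """One structural-coarsening step: nodes mapped to the same
    super-node id are merged via a hard assignment matrix S, and the
    coarsened adjacency is A' = S^T A S (edge weights accumulate)."""
    n = A.shape[0]
    k = max(assignment) + 1
    S = np.zeros((n, k))
    S[np.arange(n), assignment] = 1.0     # one-hot node-to-cluster map
    return S.T @ A @ S                    # (k, k) coarsened adjacency
```

Applying `coarsen` repeatedly yields the series of progressively smaller graphs the abstract mentions, and `attention_readout` illustrates how the framework could weight critical nodes when pooling a final graph embedding.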



Author information

Corresponding author: Chen Zhang.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yu, B., Xu, X., Wen, C., Xie, Y., Zhang, C. (2022). Hierarchical Graph Representation Learning with Structural Attention for Graph Classification. In: Fang, L., Povey, D., Zhai, G., Mei, T., Wang, R. (eds) Artificial Intelligence. CICAI 2022. Lecture Notes in Computer Science(), vol 13605. Springer, Cham. https://doi.org/10.1007/978-3-031-20500-2_39

  • DOI: https://doi.org/10.1007/978-3-031-20500-2_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20499-9

  • Online ISBN: 978-3-031-20500-2

  • eBook Packages: Computer Science (R0)
