
Boosting Adaptive Graph Augmented MLPs via Customized Knowledge Distillation

  • Conference paper
  • In: Machine Learning and Knowledge Discovery in Databases: Research Track (ECML PKDD 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14171)


Abstract

While Graph Neural Networks (GNNs) have shown convincing performance in handling non-Euclidean network data, the high inference latency caused by the message-passing mechanism hinders their deployment in real-time scenarios. One emerging inference-acceleration approach is to distill knowledge from teacher GNNs into message-passing-free student multi-layer perceptrons (MLPs). Nevertheless, because graph heterophily degrades the performance of teacher GNNs, and because student MLPs generalize poorly on graph data, GNN-MLP designs often achieve inferior performance. To tackle this challenge, we propose boosting adaptive GRaph Augmented MLPs via Customized knowlEdge Distillation (GRACED), a novel approach to learning graph knowledge effectively and efficiently. Specifically, we first design a novel customized knowledge distillation strategy that modifies the guided knowledge to mitigate the adverse influence of heterophily on student MLPs. Then, we introduce an adaptive graph propagation approach that precomputes aggregated features for each node, accounting for both homophily and heterophily, to help the student MLPs learn graph information. Furthermore, we design an aggregation-feature approximation technique for inductive scenarios. Extensive experiments on node classification and theoretical analyses demonstrate the superiority of GRACED over state-of-the-art methods, under both transductive and inductive settings, across homophilic and heterophilic datasets.
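To make the overall pipeline concrete, below is a minimal sketch (in PyTorch) of the generic GNN-to-MLP distillation recipe the abstract describes: precompute propagated node features so the student MLP needs no message passing at inference, then train the MLP against both the ground-truth labels and the teacher GNN's temperature-softened predictions (standard knowledge distillation in the style of Hinton et al., 2015). The geometric per-hop weights, the random toy graph, and all variable names here are illustrative assumptions; GRACED's customized distillation and adaptive propagation rules differ and are not reproduced here.

```python
# A hedged sketch of generic GNN-to-MLP distillation, NOT GRACED itself.
# Assumed inputs: dense feature matrix X, dense adjacency A, and logits
# from an already-trained teacher GNN (all synthesized below for the demo).
import torch
import torch.nn.functional as F

def precompute_propagation(X, A, hops=3, alpha=0.5):
    """Precompute multi-hop aggregated features offline, so the student
    never touches the graph at inference time. A simple geometric decay
    over hops stands in for GRACED's adaptive propagation weights."""
    deg = A.sum(dim=1).clamp(min=1.0)
    A_norm = A / deg.unsqueeze(1)          # row-normalized adjacency
    out, H = X, X
    for k in range(1, hops + 1):
        H = A_norm @ H                     # k-hop neighborhood aggregation
        out = out + (alpha ** k) * H
    return out

class StudentMLP(torch.nn.Module):
    """Message-passing-free student: a plain two-layer MLP."""
    def __init__(self, d_in, d_hidden, n_classes):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(d_in, d_hidden), torch.nn.ReLU(),
            torch.nn.Linear(d_hidden, n_classes))
    def forward(self, x):
        return self.net(x)

def distillation_step(student, x, y, teacher_logits, opt, T=2.0, lam=0.5):
    """One step: cross-entropy on labels plus KL divergence to the
    temperature-softened teacher distribution."""
    opt.zero_grad()
    logits = student(x)
    ce = F.cross_entropy(logits, y)
    kd = F.kl_div(F.log_softmax(logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    loss = lam * ce + (1.0 - lam) * kd
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random data in place of a real graph and teacher GNN.
N, d, C = 8, 16, 3
X = torch.randn(N, d)
A = (torch.rand(N, N) > 0.7).float()       # random toy adjacency
y = torch.randint(0, C, (N,))
teacher_logits = torch.randn(N, C)         # stands in for a trained GNN
feats = precompute_propagation(X, A)
student = StudentMLP(d, 32, C)
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
distillation_step(student, feats, y, teacher_logits, opt)
```

Because the aggregation is precomputed once, inference reduces to a single MLP forward pass per node, which is the latency advantage the abstract emphasizes.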


Notes

  1. http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-11/www/wwkb.

  2. https://github.com/benedekrozemberczki/datasets.


Author information

Corresponding author

Correspondence to Jun Zhou.


Ethics declarations

Ethical Statement

As machine learning and data mining researchers, we recognize the importance of ethical considerations in our work. The ethical implications of our research can have a significant impact on individuals, communities, and society as a whole, and we therefore believe it is our responsibility to carefully consider and address any ethical concerns that may arise from our work.

We acknowledge that the collection and processing of personal data can have significant ethical implications. As such, we have taken steps to ensure that our research adheres to ethical guidelines and regulations. We have obtained all necessary permissions and have applied encryption measures to protect the privacy and confidentiality of any personal data used in our research. Additionally, we have implemented measures to ensure that any inferences drawn from data are transparent and are not used to perpetuate any form of bias or discrimination.

Our research aims to provide insights that benefit society while avoiding potential negative impacts on individuals or communities. It does not relate to, nor involve collaboration with, the police or military. We believe that by addressing ethical concerns in our work, we can promote the responsible and beneficial use of machine learning and data mining technologies.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 2 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wei, S., Wu, Z., Zhang, Z., Zhou, J. (2023). Boosting Adaptive Graph Augmented MLPs via Customized Knowledge Distillation. In: Koutra, D., Plant, C., Gomez Rodriguez, M., Baralis, E., Bonchi, F. (eds) Machine Learning and Knowledge Discovery in Databases: Research Track. ECML PKDD 2023. Lecture Notes in Computer Science, vol 14171. Springer, Cham. https://doi.org/10.1007/978-3-031-43418-1_6

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-43418-1_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43417-4

  • Online ISBN: 978-3-031-43418-1

  • eBook Packages: Computer Science; Computer Science (R0)
