EGAT: Edge-Featured Graph Attention Network

  • Conference paper

Artificial Neural Networks and Machine Learning – ICANN 2021 (ICANN 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12891)

Abstract

Most state-of-the-art graph neural networks focus on node features during learning and ignore edge features. However, edge features also carry essential information in real-world settings such as financial graphs, and node-centric approaches are therefore suboptimal on edge-sensitive graphs, where edge features go underused. To address this problem, we present the Edge-Featured Graph Attention Network (EGAT), which leverages edge features in the graph feature representation. Our model is built on an edge-integrated attention mechanism in which both node and edge features enter the computation of messages and attention weights. Moreover, since edge information is important, edge features should themselves be updated to learn high-level representations; we therefore update each edge feature by integrating the features of its connected nodes. In contrast to edge-node switching, our model gathers adjacent edge features with a node-transit strategy, avoiding a significant increase in computational complexity. We then employ a multi-scale merge strategy that concatenates the features of every layer to construct a hierarchical representation. Our model can also be adapted to domain-specific graph neural networks, which further extends its application scenarios. Experiments show that our model achieves or matches state-of-the-art performance on both node-sensitive and edge-sensitive datasets.
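
The abstract describes the mechanism only at a high level. Below is a minimal single-head sketch in PyTorch of how such a layer could look, assuming the attention logit is computed over the concatenation of the two endpoint node features and the edge feature, and the edge update concatenates those same three vectors. Every name here (EdgeFeaturedAttentionLayer, w_node, edge_update) and the exact message form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeFeaturedAttentionLayer(nn.Module):
    """Single-head sketch of edge-integrated attention: attention scores and
    messages depend on node AND edge features, and edge features are updated
    from their endpoint nodes. Hypothetical illustration, not the paper's code."""

    def __init__(self, node_dim: int, edge_dim: int, out_dim: int):
        super().__init__()
        self.w_node = nn.Linear(node_dim, out_dim, bias=False)  # node projection
        self.w_edge = nn.Linear(edge_dim, out_dim, bias=False)  # edge projection
        self.attn = nn.Linear(3 * out_dim, 1, bias=False)       # a([h_i || h_j || e_ij])
        self.edge_update = nn.Linear(3 * out_dim, out_dim, bias=False)

    def forward(self, h, e, edge_index):
        # h: [N, node_dim] node features; e: [E, edge_dim] edge features
        # edge_index: [2, E] holding (source, target) node indices per edge
        src, dst = edge_index[0], edge_index[1]
        h = self.w_node(h)                                      # [N, out_dim]
        e = self.w_edge(e)                                      # [E, out_dim]

        # Attention logit per edge, conditioned on both endpoints and the edge.
        logits = F.leaky_relu(
            self.attn(torch.cat([h[src], h[dst], e], dim=-1)), negative_slope=0.2
        )                                                       # [E, 1]

        # Softmax over the incoming edges of each target node.
        alpha = torch.exp(logits - logits.max())
        denom = torch.zeros(h.size(0), 1, device=h.device)
        denom.index_add_(0, dst, alpha)
        alpha = alpha / (denom[dst] + 1e-16)

        # Messages carry the edge feature alongside the source node feature.
        messages = alpha * (h[src] + e)                         # [E, out_dim]
        h_out = torch.zeros_like(h)
        h_out.index_add_(0, dst, messages)

        # "Node-transit"-style edge update: each edge absorbs its endpoints.
        e_out = self.edge_update(torch.cat([h[src], e, h[dst]], dim=-1))
        return F.elu(h_out), F.elu(e_out)


# Toy usage: 4 nodes, 3 directed edges.
layer = EdgeFeaturedAttentionLayer(node_dim=8, edge_dim=4, out_dim=16)
h = torch.randn(4, 8)
e = torch.randn(3, 4)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
h_next, e_next = layer(h, e, edge_index)
print(h_next.shape, e_next.shape)  # torch.Size([4, 16]) torch.Size([3, 16])
```

Under the same assumptions, the multi-scale merge mentioned in the abstract would amount to concatenating the node outputs of every stacked layer before the final prediction head.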

Author information

Correspondence to Haopeng Chen.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Wang, Z., Chen, J., Chen, H. (2021). EGAT: Edge-Featured Graph Attention Network. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2021. Lecture Notes in Computer Science, vol. 12891. Springer, Cham. https://doi.org/10.1007/978-3-030-86362-3_21

  • DOI: https://doi.org/10.1007/978-3-030-86362-3_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86361-6

  • Online ISBN: 978-3-030-86362-3

  • eBook Packages: Computer Science; Computer Science (R0)
