
Transformer-Based Representation Learning on Temporal Heterogeneous Graphs

  • Conference paper
  • Web and Big Data (APWeb-WAIM 2022)

Abstract

Temporal heterogeneous graphs can model many real-world complex systems, such as social networks and e-commerce applications, which are naturally time-varying and heterogeneous. Since most existing graph representation learning methods cannot handle both characteristics efficiently, we propose a Transformer-like representation learning model, named THAN, that learns low-dimensional node embeddings preserving the topological structure, heterogeneous semantics, and dynamic evolutionary patterns of temporal heterogeneous graphs simultaneously. Specifically, THAN first samples heterogeneous neighbors under temporal constraints and projects node features into a common vector space; it then encodes time information and aggregates neighborhood influence, weighting neighbors differently via type-aware self-attention. Experiments on three real-world datasets demonstrate that THAN outperforms state-of-the-art methods on the temporal link prediction task.

This work was supported in part by the National Key Research and Development Program of China (2018YFB0704301-1), the National Natural Science Foundation of China (61972268), the Med-X Center for Informatics Funding Project (YGJC001).
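To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of its two core steps: a functional time encoding (in the style of Time2Vec / TGAT) and a single-head type-aware self-attention that projects each node type into a shared space before aggregating temporally sampled neighbors. Every name and design detail here (TimeEncoder, TypeAwareAttention, the single head, the residual connection) is an illustrative assumption, not the authors' implementation; the official code is linked in note 4 below.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeEncoder(nn.Module):
    # Map time gaps dt to d-dimensional vectors with learnable frequencies;
    # a sketch of functional time encoding (cf. Time2Vec / TGAT), not THAN's
    # exact encoder.
    def __init__(self, dim):
        super().__init__()
        self.freq = nn.Linear(1, dim)  # learnable frequencies and phases

    def forward(self, dt):
        # dt: [B, N] time gaps -> [B, N, dim]
        return torch.cos(self.freq(dt.unsqueeze(-1)))

class TypeAwareAttention(nn.Module):
    # One projection per node type maps heterogeneous features into a shared
    # vector space; attention then assigns each neighbor its own weight.
    def __init__(self, in_dims, dim, time_dim):
        super().__init__()
        self.proj = nn.ModuleDict({t: nn.Linear(d, dim) for t, d in in_dims.items()})
        self.time_enc = TimeEncoder(time_dim)
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim + time_dim, dim)
        self.v = nn.Linear(dim + time_dim, dim)

    def forward(self, target, target_type, neighbors, neighbor_types, dt):
        # neighbors: one [B, d_type] tensor per sampled neighbor, assumed to be
        # drawn only from interactions earlier than the target time (the
        # temporal constraint mentioned in the abstract).
        h_t = self.proj[target_type](target)                           # [B, dim]
        h_n = torch.stack([self.proj[t](x)
                           for t, x in zip(neighbor_types, neighbors)], dim=1)
        h_n = torch.cat([h_n, self.time_enc(dt)], dim=-1)              # append time
        q = self.q(h_t).unsqueeze(1)                                   # [B, 1, dim]
        k, v = self.k(h_n), self.v(h_n)
        attn = F.softmax((q * k).sum(-1) / k.size(-1) ** 0.5, dim=-1)  # [B, N]
        return h_t + (attn.unsqueeze(-1) * v).sum(dim=1)               # residual

# Hypothetical usage: a 'user' node attending to two earlier 'item' neighbors.
enc = TypeAwareAttention({'user': 32, 'item': 64}, dim=128, time_dim=16)
out = enc(torch.randn(8, 32), 'user',
          [torch.randn(8, 64), torch.randn(8, 64)], ['item', 'item'],
          dt=torch.rand(8, 2))
print(out.shape)  # torch.Size([8, 128])

Because each node type owns its own projection matrix, features of different dimensionalities (e.g., users and items) can attend to one another in one space, which is the property the abstract refers to as projecting node features into the same vector space.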


Notes

  1. https://grouplens.org/datasets/movielens/100k.
  2. http://snap.stanford.edu/data/higgs-twitter.html.
  3. http://snap.stanford.edu/data/sx-mathoverflow.html.
  4. https://github.com/moli-L/THAN.


Author information


Corresponding author

Correspondence to Lei Duan.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Li, L. et al. (2023). Transformer-Based Representation Learning on Temporal Heterogeneous Graphs. In: Li, B., Yue, L., Tao, C., Han, X., Calvanese, D., Amagasa, T. (eds) Web and Big Data. APWeb-WAIM 2022. Lecture Notes in Computer Science, vol 13422. Springer, Cham. https://doi.org/10.1007/978-3-031-25198-6_29


  • DOI: https://doi.org/10.1007/978-3-031-25198-6_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25197-9

  • Online ISBN: 978-3-031-25198-6

  • eBook Packages: Computer Science, Computer Science (R0)
