Abstract
Temporal knowledge graph reasoning (TKGR) has attracted great interest for its role in enriching naturally incomplete temporal knowledge graphs (TKGs) by inferring new events from existing ones and their temporal information. Although most existing TKGR methods attain commendable performance, they still struggle to capture intricate long-term dependencies within the context of relevant historical events, and to handle events that have insufficient historical information or are influenced by other events. To alleviate these issues, we propose a novel TKGR method named TKGR-RHETNE, which jointly models the context of relevant historical events and temporal neighborhood events. From the historical event view, we introduce an encoder based on the transformer Hawkes process and the self-attention mechanism to effectively capture long-term event dependencies, thus modeling the event evolution process continuously. From the neighborhood event view, we propose a neighborhood aggregator to model the potential influence between events with insufficient historical information and other events, implemented by integrating a random walk strategy with the TKG topological structure. Comprehensive experiments on five benchmark datasets demonstrate the superior performance of our proposed model (code is publicly available at https://github.com/wanwano/TKGR-RHETNE).
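To make the historical-event view more concrete, the sketch below illustrates one way a sequence of relevant historical events can be encoded with self-attention and mapped to a Hawkes-style conditional intensity, in the spirit of the transformer Hawkes process cited above. It is an illustrative sketch only, not the authors' implementation (see the linked repository for the official code); all module names, dimensions, and the PyTorch-based design are assumptions, and the neighborhood-aggregation view is omitted.

```python
# Illustrative sketch (hypothetical names and dimensions): self-attention over
# historical event embeddings plus a softplus intensity head, loosely following
# the transformer Hawkes process idea.
import math
import torch
import torch.nn as nn

class HistoricalEventEncoder(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.intensity_head = nn.Linear(d_model, 1)
        self.softplus = nn.Softplus()  # keeps the conditional intensity non-negative
        self.d_model = d_model

    def temporal_encoding(self, timestamps):
        # Sinusoidal encoding of continuous event timestamps, analogous to
        # positional encoding in the standard transformer.
        i = torch.arange(self.d_model // 2, device=timestamps.device)
        freq = torch.exp(-math.log(10000.0) * (2 * i / self.d_model))
        angles = timestamps.unsqueeze(-1) * freq                    # (B, L, d_model/2)
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

    def forward(self, event_emb, timestamps):
        # event_emb: (B, L, d_model) embeddings of relevant historical events
        # timestamps: (B, L) event occurrence times
        h = self.encoder(event_emb + self.temporal_encoding(timestamps))
        intensity = self.softplus(self.intensity_head(h)).squeeze(-1)  # (B, L)
        return h, intensity

# Toy usage with random inputs.
enc = HistoricalEventEncoder()
events = torch.randn(2, 10, 64)             # two sequences of 10 historical events
times = torch.cumsum(torch.rand(2, 10), 1)  # strictly increasing timestamps
hidden, lam = enc(events, times)
print(hidden.shape, lam.shape)              # torch.Size([2, 10, 64]) torch.Size([2, 10])
```

In such a design, the self-attention layers let every historical event attend to arbitrarily distant predecessors, which is what allows long-term dependencies to be captured without the vanishing memory of recurrent encoders.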
Notes
1. The code for TransE, DistMult, and ComplEx is from https://github.com/thunlp/OpenKE.
2. The code for TTransE is from https://github.com/INK-USC/RE-Net/tree/master/baselines.
3. The code for TANGO is from https://github.com/TemporalKGTeam/TANGO.
4. The code for RE-NET is from https://github.com/INK-USC/RE-Net.
5. The code for RE-GCN is from https://github.com/Lee-zix/RE-GCN.
6. The code for GHNN is from https://github.com/Jeff20100601/GHNN_clean.
7. The code for GHT is from https://github.com/JHL-HUST/GHT.
References
Barbosa, D., Wang, H., Yu, C.: Shallow information extraction for the knowledge web. In: ICDE, pp. 1264–1267 (2013)
Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: NIPS, pp. 2787–2795 (2013)
Cho, K., et al.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)
Deng, S., Rangwala, H., Ning, Y.: Dynamic knowledge graph based multi-event forecasting. In: SIGKDD, pp. 1585–1595 (2020)
Feng, F., He, X., Wang, X., Luo, C., Liu, Y., Chua, T.S.: Temporal relational ranking for stock prediction. ACM Trans. Inform. Syst. (TOIS) 37(2), 1–30 (2019)
García-Durán, A., Dumančić, S., Niepert, M.: Learning sequence encoders for temporal knowledge graph completion. arXiv preprint arXiv:1809.03202 (2018)
Goel, R., Kazemi, S.M., Brubaker, M., Poupart, P.: Diachronic embedding for temporal knowledge graph completion. In: AAAI, vol. 34, pp. 3988–3995 (2020)
Han, Z., Ding, Z., Ma, Y., Gu, Y., Tresp, V.: Learning neural ordinary equations for forecasting future links on temporal knowledge graphs. In: EMNLP, pp. 8352–8364 (2021)
Han, Z., Ma, Y., Wang, Y., Günnemann, S., Tresp, V.: Graph Hawkes neural network for forecasting on temporal knowledge graphs. arXiv preprint arXiv:2003.13432 (2020)
Jin, D., et al.: RAW-GNN: random walk aggregation based graph neural network. arXiv preprint arXiv:2206.13953 (2022)
Jin, W., Qu, M., Jin, X., Ren, X.: Recurrent event network: Autoregressive structure inference over temporal knowledge graphs. arXiv preprint arXiv:1904.05530 (2019)
Jung, J., Jung, J., Kang, U.: Learning to walk across time for interpretable temporal knowledge graph completion. In: SIGKDD, pp. 786–795 (2021)
Lacroix, T., Obozinski, G., Usunier, N.: Tensor decompositions for temporal knowledge base completion. arXiv preprint arXiv:2004.04926 (2020)
Leblay, J., Chekol, M.W.: Deriving validity time in knowledge graph. In: The Web Conference, pp. 1771–1776 (2018)
Li, Z., et al.: Temporal knowledge graph reasoning based on evolutional representation learning. In: SIGIR, pp. 408–417 (2021)
Liu, M., Liu, Y.: Inductive representation learning in temporal networks via mining neighborhood and community influences. In: SIGIR, pp. 2202–2206 (2021)
Mahdisoltani, F., Biega, J., Suchanek, F.: YAGO3: a knowledge base from multilingual Wikipedias. In: 7th Biennial Conference on Innovative Data Systems Research (CIDR) (2014)
Mei, H., Eisner, J.M.: The neural Hawkes process: a neurally self-modulating multivariate point process. In: NeurIPS, vol. 30 (2017)
Nguyen, G.H., Lee, J.B., Rossi, R.A., Ahmed, N.K., Koh, E., Kim, S.: Continuous-time dynamic network embeddings. In: The Web Conference, pp. 969–976 (2018)
Shchur, O., Türkmen, A.C., Januschowski, T., Günnemann, S.: Neural temporal point processes: a review. arXiv preprint arXiv:2104.03528 (2021)
Sun, H., Geng, S., Zhong, J., Hu, H., He, K.: Graph Hawkes transformer for extrapolated reasoning on temporal knowledge graphs. In: EMNLP, pp. 7481–7493 (2022)
Trivedi, R., Dai, H., Wang, Y., Song, L.: Know-evolve: deep temporal reasoning for dynamic knowledge graphs. In: ICML, pp. 3462–3471. PMLR (2017)
Trivedi, R., Farajtabar, M., Biswal, P., Zha, H.: Dyrep: learning representations over dynamic graphs. In: ICLR (2019)
Trouillon, T., Welbl, J., Riedel, S., Gaussier, É., Bouchard, G.: Complex embeddings for simple link prediction. In: ICML, pp. 2071–2080 (2016)
Vaswani, A., et al.: Attention is all you need. In: NeurIPS, vol. 30 (2017)
Wang, S., Cai, X., Zhang, Y., Yuan, X.: CRNet: modeling concurrent events over temporal knowledge graph. In: ISWC, pp. 516–533. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-19433-7_30
Yang, B., Yih, W.t., He, X., Gao, J., Deng, L.: Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575 (2014)
Zhang, Q., Lipani, A., Kirnap, O., Yilmaz, E.: Self-attentive Hawkes process. In: ICML, pp. 11183–11193. PMLR (2020)
Zhu, C., Chen, M., Fan, C., Cheng, G., Zhang, Y.: Learning from history: modeling temporal knowledge graphs with sequential copy-generation networks. In: AAAI, vol. 35, pp. 4732–4740 (2021)
Zuo, S., Jiang, H., Li, Z., Zhao, T., Zha, H.: Transformer Hawkes process. In: ICML, pp. 11692–11702. PMLR (2020)
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant No. 62202075, No. 62171111, No. 62376058, No. 62376043, No. 62002052), the Natural Science Foundation of Chongqing, China (Grant No. 2022NSCQ-MSX3749), the Sichuan Science and Technology Program (Grant No. 2022YFG0189), the China Postdoctoral Science Foundation (Grant No. 2022M710614), the Key Laboratory of Data Science and Smart Education, Hainan Normal University, Ministry of Education (Grant No. 2022NSCQ-MSX3749), and the Anhui Provincial Engineering Laboratory for Beidou Precision Agriculture Information (Grant No. BDSY2023004).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Sun, J., Sheng, Y., Zhan, L., He, L. (2024). TKGR-RHETNE: A New Temporal Knowledge Graph Reasoning Model via Jointly Modeling Relevant Historical Event and Temporal Neighborhood Event Context. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Lecture Notes in Computer Science, vol 14451. Springer, Singapore. https://doi.org/10.1007/978-981-99-8073-4_26
DOI: https://doi.org/10.1007/978-981-99-8073-4_26
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8072-7
Online ISBN: 978-981-99-8073-4
eBook Packages: Computer Science, Computer Science (R0)