Abstract
Neural tensor networks are knowledge graph embedding models that infer relationships between two given entities. Although demonstrated to be effective, any inference a neural tensor network makes about an individual relation is isolated from what the model has learned about the other relations in the problem domain. We introduce cross-relational reasoning, a novel inference mechanism for neural tensor networks that coordinates all of the model's relation-specific outputs to augment a prediction corresponding to a single relation. We frame the coordination of the relation-specific outputs as a meta-learning problem, not unlike stacked ensemble learning, and show that cross-relational reasoning consistently outperforms the original inference mechanism on the WN18RR knowledge graph. We also explore modifications to the neural tensor network's internal activation function, showing that ReLU or ELU substantially accelerates the network's convergence at the cost of long-term improvement during training, and that sigmoid improves the model's performance across all settings considered in this paper.
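For readers unfamiliar with the model, a neural tensor network scores a candidate triple (h, r, t) by combining a bilinear tensor product of the two entity embeddings with a standard linear layer, passed through an internal activation (tanh in Socher et al., 2013; this paper experiments with ReLU, ELU, and sigmoid). The sketch below is a minimal, illustrative NumPy implementation of that scoring function; the parameter names, shapes, and the `ntn_score` helper are our own notation, not code from the paper.

```python
import numpy as np

def ntn_score(h, t, W, V, b, u, f=np.tanh):
    """Score a triple (h, r, t) with one relation's NTN parameters.

    h, t : entity embeddings, shape (d,)
    W    : bilinear tensor for relation r, shape (k, d, d)
    V    : linear-layer weights, shape (k, 2*d)
    b    : bias, shape (k,)
    u    : output weights, shape (k,)
    f    : internal activation (tanh here; ReLU, ELU, or sigmoid
           are the variants explored in the paper)
    """
    # h^T W[s] t for each of the k tensor slices s
    bilinear = np.einsum('i,kij,j->k', h, W, t)
    # Standard linear term over the concatenated entity pair
    linear = V @ np.concatenate([h, t])
    # Combine, activate, and project to a scalar score
    return float(u @ f(bilinear + linear + b))
```

Under the original inference mechanism, this scalar score for a single relation is thresholded in isolation; cross-relational reasoning instead feeds the scores from all relations into a meta-learner before the final prediction.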
Notes
- 1.
Note that the terminology used in this paper differs slightly from that of the original work by Socher et al. (2013); for instance, we denote the entities as h and t instead of \(e_1\) and \(e_2\), respectively, as the former notation is more common in modern knowledge graph embeddings research [4, 14].
- 2.
As an example of the trade-off between specificity and variety in the threshold calculation, we found empirical evidence that significantly larger bounds actually under-performed relative to the interval \([-6, 6]\).
References
Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: Advances in Neural Information Processing Systems, vol. 26 (2013)
Buitinck, L., et al.: API design for machine learning software: experiences from the scikit-learn project. In: ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pp. 108–122 (2013)
Falck, T., Coulter, D.: Training neural tensor networks with corrupted relations. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J.M. (eds.) ICAISC 2023. LNCS, vol. 14125, pp. 56–67. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-42505-9_6
Ji, S., Pan, S., Cambria, E., Marttinen, P., Philip, S.Y.: A survey on knowledge graphs: representation, acquisition, and applications. IEEE Trans. Neural Netw. Learn. Syst. 33(2), 494–514 (2021)
Li, W., Zhu, L., Cambria, E.: Taylor’s theorem: a new perspective for neural tensor networks. Knowl.-Based Syst. 228, 107258 (2021)
Miller, G.A.: WordNet: a lexical database for English. Commun. ACM 38(11), 39–41 (1995)
Nickel, M., Tresp, V., Kriegel, H.P., et al.: A three-way model for collective learning on multi-relational data. In: ICML, vol. 11, pp. 3104482–3104584 (2011)
Nielsen, M.A.: Neural Networks and Deep Learning, vol. 25. Determination Press, San Francisco (2015)
Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems, vol. 32, pp. 8024–8035. Curran Associates, Inc. (2019). http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf
Pennington, J., Socher, R., Manning, C.D.: GloVe: global vectors for word representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543 (2014). http://www.aclweb.org/anthology/D14-1162
Qiu, X., Huang, X.: Convolutional neural tensor network architecture for community-based question answering. In: Twenty-Fourth International Joint Conference on Artificial Intelligence (2015)
Russell, S.J., Norvig, P.: Artificial Intelligence a Modern Approach, 4th edn. Pearson Education, Inc., London (2020)
Socher, R., Chen, D., Manning, C.D., Ng, A.: Reasoning with neural tensor networks for knowledge base completion. In: Advances in Neural Information Processing Systems, vol. 26 (2013)
Wang, Q., Mao, Z., Wang, B., Guo, L.: Knowledge graph embedding: a survey of approaches and applications. IEEE Trans. Knowl. Data Eng. 29(12), 2724–2743 (2017)
Wilson, H.: WN18RR (2020). https://doi.org/10.6084/m9.figshare.11911272.v1, https://figshare.com/articles/dataset/WN18RR/11911272
Copyright information
© 2024 IFIP International Federation for Information Processing
Cite this paper
Falck, T., Coulter, D. (2024). Cross-Relational Reasoning for Neural Tensor Networks. In: Maglogiannis, I., Iliadis, L., Macintyre, J., Avlonitis, M., Papaleonidas, A. (eds) Artificial Intelligence Applications and Innovations. AIAI 2024. IFIP Advances in Information and Communication Technology, vol 713. Springer, Cham. https://doi.org/10.1007/978-3-031-63219-8_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-63218-1
Online ISBN: 978-3-031-63219-8