Abstract
Predicting entities in knowledge graphs is a crucial research area, and convolutional neural networks (CNNs) have achieved strong performance thanks to their ability to learn expressive feature embeddings. However, many existing methods in this field reshape, and thereby disrupt, entity and relation embeddings, disregarding the original translation property of triples and leading to incomplete feature extraction. To address this issue and preserve the translation property of triples, the present study introduces a novel representation method, termed MultiGNN. The proposed approach uses a graph convolutional network as the encoder together with a parameter-sharing mechanism, and employs both a convolutional neural network and a translation model as decoders. The model's parameter space is expanded so that the translation property can be integrated into the convolutional network, allowing it to capture these characteristics and improving overall performance. Compared with baseline methods, the proposed model demonstrates significant improvements on several metrics on public benchmark datasets.
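The encoder-decoder design described above can be illustrated with a minimal sketch: a shared graph-convolutional encoder produces entity embeddings, and two decoders score each triple, a translation (TransE-style) head and a convolutional (ConvE-style) head whose scores are combined. All names, sizes, and the weighting scheme here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 5, 2, 8

E = rng.normal(size=(num_entities, dim))   # initial entity features
R = rng.normal(size=(num_relations, dim))  # relation embeddings (shared by both decoders)
A = np.eye(num_entities)                   # toy adjacency (self-loops only)
W = rng.normal(size=(dim, dim))            # GCN weight matrix

def gcn_encode(E, A, W):
    """Shared encoder: one mean-aggregating GCN layer with ReLU."""
    deg = A.sum(axis=1, keepdims=True)
    return np.maximum((A / deg) @ E @ W, 0.0)

H = gcn_encode(E, A, W)  # encoded entity embeddings used by both decoders

def translation_score(h, r, t):
    """Decoder 1: TransE-style translation score (higher is better)."""
    return -np.linalg.norm(H[h] + R[r] - H[t])

def conv2d(x, k):
    """Naive valid 2D cross-correlation."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

kernel = rng.normal(size=(3, 3))
proj = rng.normal(size=(4, dim))  # maps flattened conv features back to dim

def conv_score(h, r, t):
    """Decoder 2: reshape [h; r] into a 2D map, convolve, match the tail."""
    x = np.concatenate([H[h], R[r]]).reshape(4, 4)
    feat = np.maximum(conv2d(x, kernel), 0.0).reshape(-1)
    return float(feat @ proj @ H[t])

def score(h, r, t, alpha=0.5):
    """Combined multitask score; the fixed weighting is an assumption."""
    return alpha * translation_score(h, r, t) + (1 - alpha) * conv_score(h, r, t)
```

Because both decoders read the same encoder output `H`, gradients from the translation objective and the convolutional objective both shape the shared embeddings, which is the intuition behind injecting the translation property into the CNN-based scorer.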
Acknowledgements
This work was supported by the National Key R&D Program of China under Grant No. 2020YFB1710200.
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Wang, Y., Yang, J., Li, L., Yao, J. (2023). Multitask Graph Neural Network for Knowledge Graph Link Prediction. In: Yu, Z., et al. Data Science. ICPCSEE 2023. Communications in Computer and Information Science, vol 1880. Springer, Singapore. https://doi.org/10.1007/978-981-99-5971-6_23
Print ISBN: 978-981-99-5970-9
Online ISBN: 978-981-99-5971-6