Abstract
Automated knowledge graph completion is a popular topic in the Semantic Web community, aiming to automatically and continuously integrate newly appearing knowledge into knowledge graphs using artificial intelligence. Recently, approaches that leverage the implicit knowledge in language models for this task have shown promising results. However, when a language model is fine-tuned directly to the domain of knowledge graphs, it forgets its original language representation and the knowledge associated with it. An existing solution to this issue is a trainable adapter that is integrated into a frozen language model, extracting the relevant knowledge without altering the model itself. However, this constrains generalization to the specific extraction task and, by design, requires a new and independent adapter to be trained for every new knowledge extraction task. This effectively prevents the model from benefiting from knowledge incorporated in previously trained adapters.
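To make the adapter mechanism concrete, here is a minimal PyTorch sketch of a bottleneck adapter wrapped around a frozen transformer layer, so that only the adapter parameters receive gradients. All names and sizes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Trainable bottleneck adapter: down-project, non-linearity, up-project,
    plus a residual connection that preserves the frozen layer's output."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

class FrozenLayerWithAdapter(nn.Module):
    """Wraps a pre-trained layer (assumed tensor-in, tensor-out), freezes it,
    and routes its output through a trainable adapter."""
    def __init__(self, pretrained_layer: nn.Module, hidden_size: int):
        super().__init__()
        self.layer = pretrained_layer
        for p in self.layer.parameters():
            p.requires_grad = False  # the language model itself stays frozen
        self.adapter = BottleneckAdapter(hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.layer(hidden_states))
```

In this setup, a separate adapter would be trained per extraction task, which is exactly the parameter growth and knowledge isolation that the capsule-based approach below addresses.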
In this paper, we propose to combine the benefits of adapters for knowledge graph completion with the idea of capsules introduced in the field of continual learning. This allows knowledge to be integrated continuously into a joint model by sharing and reusing previously trained capsules. We find that our approach outperforms solutions using traditional adapters while requiring notably fewer parameters for continuous knowledge integration. Moreover, we show that this architecture benefits significantly from knowledge sharing in low-resource situations, outperforming adapter-based models on the task of link prediction.
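As a rough illustration of how capsule sharing can work, the sketch below keeps a growing pool of small capsule modules with one soft gate per task: earlier capsules are frozen but remain available for reuse, so each new task trains only one fresh capsule and its gate. The gating scheme is a simplifying assumption in the spirit of capsule-based continual learning, not the paper's exact routing.

```python
import torch
import torch.nn as nn

class CapsulePool(nn.Module):
    """Growing pool of capsule modules shared across tasks. Each new task
    adds one trainable capsule and a soft gate over all capsules available
    at that point, so previously learned knowledge can be reused."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.hidden_size = hidden_size
        self.bottleneck = bottleneck
        self.capsules = nn.ModuleList()
        self.task_gates = nn.ParameterList()  # one gate vector per task

    def add_task(self) -> int:
        # Freeze all previously trained capsules to prevent forgetting.
        for cap in self.capsules:
            for p in cap.parameters():
                p.requires_grad = False
        # Allocate one fresh capsule for the new task.
        self.capsules.append(nn.Sequential(
            nn.Linear(self.hidden_size, self.bottleneck),
            nn.GELU(),
            nn.Linear(self.bottleneck, self.hidden_size),
        ))
        # The new gate spans every capsule that exists so far.
        self.task_gates.append(nn.Parameter(torch.zeros(len(self.capsules))))
        return len(self.task_gates) - 1  # task id

    def forward(self, h: torch.Tensor, task_id: int) -> torch.Tensor:
        # A task mixes only the capsules that existed when it was added
        # (zip truncates to the gate's length).
        gate = torch.softmax(self.task_gates[task_id], dim=0)
        mixed = sum(w * cap(h) for w, cap in zip(gate, self.capsules))
        return h + mixed

# Hypothetical usage: one shared pool, two sequential tasks.
pool = CapsulePool(hidden_size=768)
t0 = pool.add_task()
t1 = pool.add_task()  # task 1 can reuse the frozen capsule of task 0
out = pool(torch.randn(2, 768), task_id=t1)
```

After training, a large gate weight on an earlier capsule indicates that the new task reuses previously integrated knowledge rather than relearning it, which is where the parameter savings over independent adapters come from.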
Notes
1. Our code is available at https://professor-x.de/code-capskg.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Omeliyanenko, J., Zehe, A., Hotho, A., Schlör, D. (2023). CapsKG: Enabling Continual Knowledge Integration in Language Models for Automatic Knowledge Graph Completion. In: Payne, T.R., et al. The Semantic Web – ISWC 2023. ISWC 2023. Lecture Notes in Computer Science, vol 14265. Springer, Cham. https://doi.org/10.1007/978-3-031-47240-4_33
DOI: https://doi.org/10.1007/978-3-031-47240-4_33
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-47239-8
Online ISBN: 978-3-031-47240-4
eBook Packages: Computer Science, Computer Science (R0)