
Analogy-Triple Enhanced Fine-Grained Transformer for Sparse Knowledge Graph Completion

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13944))

Abstract

Sparsity is a major challenge in knowledge graph completion. Existing knowledge graph completion methods use the entity as the basic unit of granularity and therefore suffer from semantic under-transfer. In this paper, we propose an analogy-triple enhanced fine-grained sequence-to-sequence model for sparse knowledge graph completion. Specifically, entities are first split into different levels of granularity, such as sub-entity, word, and sub-word. We then extract a set of analogy-triples for each entity-relation pair. Our model encodes and integrates the analogy-triples and entity-relation pairs, and finally predicts the sequence of tokens of the missing entity. Experimental results on multiple knowledge graphs show that the proposed model outperforms existing methods, especially on sparse entities.
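The multi-granularity splitting the abstract describes can be sketched as follows. This is a minimal illustration under our own assumptions (toy entity labels, a hand-picked sub-word vocabulary, and hypothetical function names); the paper's actual tokenization pipeline may differ.

```python
# Sketch of multi-granularity entity splitting (word and sub-word levels).
# The merge vocabulary and function names are illustrative assumptions,
# not the paper's implementation.

def word_granularity(entity: str) -> list[str]:
    """Split an entity label into word-level tokens."""
    return entity.lower().split("_")

def subword_granularity(word: str, vocab: set[str]) -> list[str]:
    """Greedy longest-match sub-word segmentation, in the spirit of
    BPE-style subword units [15]."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            # Fall back to single characters when no vocab entry matches.
            if word[i:j] in vocab or j == i + 1:
                pieces.append(word[i:j])
                i = j
                break
    return pieces

# A tail-entity query can then be framed as sequence generation:
# encode (head, relation) and decode the missing entity token by token.
query = word_granularity("Barack_Obama") + ["|", "born_in"]
target = [p for w in word_granularity("Honolulu_Hawaii")
          for p in subword_granularity(w, {"hono", "lulu", "hawa", "ii"})]
```

In a sequence-to-sequence setup such as this, `query` would feed the encoder and `target` would be the decoder's training objective; sparse entities benefit because their sub-word pieces are shared with frequent entities.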


Notes

  1.

    In this paper, the bold characters represent the embeddings in the model, and d is the dimension of embeddings.

  2.

    Similarly, for the sub-entity granularity, entities are first split into words, and then sub-entities can be obtained through the combination of words.

  3.

    More details can be found in [3].

  4.

    These results are quoted from [1, 14].

  5.

    The results of these two models are obtained through our implementation using the open-source code. Note that the results on Wikidata5M are empty because these two rule-based methods are difficult to extend to large-scale KGs [4].

  6.

    For SimKGC, the results on FB15k-237 and Wikidata5M are obtained from its original paper [23]. For KGT5, the results on FB15k-237 and Wikidata5M are quoted from [14].
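The "combination of words" in note 2 can be read as enumerating contiguous word spans of an entity label to obtain candidate sub-entities. The sketch below encodes that reading; the contiguous-span interpretation and the function name are our assumptions for illustration, not the paper's exact procedure.

```python
def sub_entities(entity: str) -> list[str]:
    """Enumerate candidate sub-entities as contiguous word spans.
    The contiguous-span reading of 'combination of words' (note 2)
    is an assumption made for illustration."""
    words = entity.lower().split("_")
    spans = []
    for i in range(len(words)):
        for j in range(i + 1, len(words) + 1):
            spans.append("_".join(words[i:j]))
    return spans
```

For example, `sub_entities("New_York_City")` would yield spans such as `new_york` and `york_city` alongside the single words, giving the sub-entity granularity level between whole entities and words.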

References

  1. Akrami, F., Saeef, M.S., Zhang, Q., Hu, W., Li, C.: Realistic re-evaluation of knowledge graph completion methods: an experimental study. In: SIGMOD, pp. 1995–2010 (2020)
  2. Bollacker, K., Evans, C., Paritosh, P., Sturge, T., Taylor, J.: Freebase: a collaboratively created graph database for structuring human knowledge. In: SIGMOD, pp. 1247–1250 (2008)
  3. Bordes, A., Usunier, N., Garcia-Durán, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: NeurIPS 26 (2013)
  4. Chen, X., Jia, S., Xiang, Y.: A review: knowledge reasoning over knowledge graph. Expert Syst. Appl. 141, 112948 (2020)
  5. Das, R., Godbole, A., Monath, N., Zaheer, M., McCallum, A.: Probabilistic case-based reasoning for open-world knowledge graph completion. In: EMNLP, pp. 4752–4765 (2020)
  6. Dong, X., et al.: Knowledge Vault: a web-scale approach to probabilistic knowledge fusion. In: SIGKDD, pp. 601–610 (2014)
  7. Ji, G., He, S., Xu, L., Liu, K., Zhao, J.: Knowledge graph embedding via dynamic mapping matrix. In: ACL, pp. 687–696 (2015)
  8. Ji, G., Liu, K., He, S., Zhao, J.: Knowledge graph completion with adaptive sparse transfer matrix. In: AAAI (2016)
  9. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: NAACL-HLT, pp. 4171–4186 (2019)
  10. Kong, F., Zhang, R., Guo, H., Mensah, S., Hu, Z., Mao, Y.: A neural bag-of-words modelling framework for link prediction in knowledge bases with sparse connectivity. In: WWW, pp. 2929–2935 (2019)
  11. Lajus, J., Galárraga, L., Suchanek, F.: Fast and exact rule mining with AMIE 3. In: Harth, A., et al. (eds.) ESWC 2020. LNCS, vol. 12123, pp. 36–52. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-49461-2_3
  12. Ortona, S., Meduri, V.V., Papotti, P.: RuDiK: rule discovery in knowledge bases. Proc. VLDB Endow. 11(12), 1946–1949 (2018)
  13. Raffel, C., et al.: Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21(140), 1–67 (2020)
  14. Saxena, A., Kochsiek, A., Gemulla, R.: Sequence-to-sequence knowledge graph completion and question answering. In: ACL, pp. 2814–2828 (2022)
  15. Sennrich, R., Haddow, B., Birch, A.: Neural machine translation of rare words with subword units. In: ACL, pp. 1715–1725 (2016)
  16. Shi, B., Weninger, T.: Open-world knowledge graph completion. In: AAAI, vol. 32 (2018)
  17. Sun, Z., Deng, Z.H., Nie, J.Y., Tang, J.: RotatE: knowledge graph embedding by relational rotation in complex space. In: ICLR (2019)
  18. Tan, Z., et al.: THUMT: an open-source toolkit for neural machine translation. In: AMTA, pp. 116–122 (2020)
  19. Toutanova, K., Chen, D.: Observed versus latent features for knowledge base and text inference. In: CVSC, pp. 57–66 (2015)
  20. Trouillon, T., Welbl, J., Riedel, S., Gaussier, É., Bouchard, G.: Complex embeddings for simple link prediction. In: ICML, pp. 2071–2080. PMLR (2016)
  21. Vaswani, A., et al.: Attention is all you need. In: NeurIPS 30 (2017)
  22. Wang, B., Shen, T., Long, G., Zhou, T., Wang, Y., Chang, Y.: Structure-augmented text representation learning for efficient knowledge graph completion. In: WWW, pp. 1737–1748 (2021)
  23. Wang, L., Zhao, W., Wei, Z., Liu, J.: SimKGC: simple contrastive knowledge graph completion with pre-trained language models. In: ACL, pp. 4281–4294 (2022)
  24. Wang, S., Dang, D.: A generative answer aggregation model for sentence-level crowdsourcing task. IEEE Trans. Knowl. Data Eng. (2022)
  25. Wang, X., et al.: KEPLER: a unified model for knowledge embedding and pre-trained language representation. TACL 9, 176–194 (2021)
  26. Wang, Z., Zhang, J., Feng, J., Chen, Z.: Knowledge graph embedding by translating on hyperplanes. In: AAAI, vol. 28 (2014)
  27. Xue, B., Zou, L.: Knowledge graph quality management: a comprehensive survey. IEEE Trans. Knowl. Data Eng. (2022)
  28. Yang, B., Yih, W.t., He, X., Gao, J., Deng, L.: Embedding entities and relations for learning and inference in knowledge bases. In: ICLR (2015)
  29. Yang, F., Yang, Z., Cohen, W.W.: Differentiable learning of logical rules for knowledge base reasoning. In: NeurIPS 30 (2017)
  30. Yao, L., Mao, C., Luo, Y.: KG-BERT: BERT for knowledge graph completion. arXiv preprint arXiv:1909.03193 (2019)
  31. Zhang, J., et al.: Improving the transformer translation model with document-level context. In: EMNLP, pp. 533–542 (2018)
  32. Zhao, Y., Zhang, J., Zhou, Y., Zong, C.: Knowledge graphs enhanced neural machine translation. In: IJCAI, pp. 4039–4045 (2021)

Acknowledgements

This work was supported by NSFC under grant 61932001 and U20A20174.

Author information
Corresponding author

Correspondence to Lei Zou.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, S., Li, S., Zou, L. (2023). Analogy-Triple Enhanced Fine-Grained Transformer for Sparse Knowledge Graph Completion. In: Wang, X., et al. Database Systems for Advanced Applications. DASFAA 2023. Lecture Notes in Computer Science, vol 13944. Springer, Cham. https://doi.org/10.1007/978-3-031-30672-3_50

  • DOI: https://doi.org/10.1007/978-3-031-30672-3_50
  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-30671-6

  • Online ISBN: 978-3-031-30672-3

  • eBook Packages: Computer Science, Computer Science (R0)
