
Converting hyperparameter gamma in distance-based loss functions to normal parameter for knowledge graph completion

Published in: Applied Intelligence

Abstract

The margin parameter gamma, used in distance-based knowledge graph embedding to separate positive from negative samples, plays an important role in model performance. Gamma is usually treated as a hyperparameter and selected from a discrete candidate set. However, as the boundary between positive and negative samples, gamma is continuous in nature and complementary to the other model parameters. This close correspondence between the attributes of gamma and those of the learnable parameters suggests that fixing gamma as a hyperparameter is probably not the best choice. To better explore the characteristics of gamma, we propose a multi-iterated parameterized scheme that converts gamma from a hyperparameter into an ordinary learnable parameter. Two concrete implementations, the macro parameterized method and the micro parameterized method, are provided to achieve this conversion. Converting gamma reduces the dependence on expert knowledge when setting it and allows newcomers to adopt recent frameworks quickly. Experimental results on TransE, RotatE and LineaRE show that the proposed scheme and methods achieve results comparable to or better than those of the original method. A series of experimental analyses further explores the characteristics of gamma and demonstrates the practicality of the proposed parameterized scheme and methods.
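The core idea of the abstract can be illustrated with a minimal sketch. This is a toy example of the general principle, not the paper's macro or micro parameterized method: the margin gamma in a RotatE-style logistic loss is treated as a learnable scalar and updated by plain gradient descent, instead of being picked from a discrete candidate set. The names `margin_loss`, `grad_gamma`, `d_pos` and `d_neg`, and the fixed toy distances, are assumptions for illustration only.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def margin_loss(gamma, d_pos, d_neg):
    # RotatE-style logistic loss: positive triples should score
    # below the margin gamma, negative triples above it.
    return -math.log(sigmoid(gamma - d_pos)) - math.log(sigmoid(d_neg - gamma))

def grad_gamma(gamma, d_pos, d_neg):
    # Analytic derivative of margin_loss with respect to gamma.
    return -(1.0 - sigmoid(gamma - d_pos)) + (1.0 - sigmoid(d_neg - gamma))

# Toy distances for one positive and one negative triple
# (in a real model these come from the embedding score function).
d_pos, d_neg = 2.0, 8.0

gamma = 0.0            # start from an uninformed margin
initial = margin_loss(gamma, d_pos, d_neg)
lr = 0.5
for _ in range(500):   # plain gradient descent on gamma alone
    gamma -= lr * grad_gamma(gamma, d_pos, d_neg)
final = margin_loss(gamma, d_pos, d_neg)
# gamma settles between the positive and negative distances
# (here near 5.0), with no grid search over a discrete set.
```

Under this toy loss, the learned gamma converges to the midpoint of the two distances, which is exactly the kind of continuous, data-driven boundary the abstract argues a discrete hyperparameter search cannot express.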


Data Availability Statement

The datasets generated and/or analysed during the current study will be made available on reasonable request.


Acknowledgements

This work was funded by the National Key Research and Development Program of China (Grant No. 2021YFC3300200).

Author information

Corresponding author

Correspondence to Bo Shen.

Ethics declarations

Competing Interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhang, J., Shen, B., Wang, T. et al. Converting hyperparameter gamma in distance-based loss functions to normal parameter for knowledge graph completion. Appl Intell 53, 23369–23382 (2023). https://doi.org/10.1007/s10489-023-04790-3

