
Aggregating neighborhood information for negative sampling for knowledge graph embedding

  • Original Article
  • Neural Computing and Applications

A Correction to this article was published on 25 May 2020


Abstract

Knowledge graphs, as linked data, can be extracted from texts in the form of triples that follow the structure "entity–relation–entity." Knowledge graph embedding (KGE) models map entities and relations into a continuous vector space under semantic constraints, so that a knowledge graph can be learned from its fact triples. Training a KGE model requires both positive and negative triples, so negative sampling methods, which generate negative samples from the representations of entities and relations, play an essential role. This paper proposes a novel neighborhood knowledge selective adversarial network (NKSGAN), which leverages aggregated neighborhood information to generate high-quality negative samples and thereby strengthen the discriminator. Experiments on the widely used standard datasets FB15k, FB15k-237, WN18 and WN18RR evaluate the model on the link prediction task. The results show that NKSGAN outperforms the baseline methods, indicating that its negative sampling process is effective at generating high-quality negative samples for boosting KGE models.
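For context, the baseline that adversarial samplers such as NKSGAN aim to improve upon is uniform negative sampling: corrupting the head or tail of a positive triple with a random entity. The sketch below is illustrative only, with a hypothetical toy graph (the entity and relation names are invented for this example), and is not the paper's method:

```python
import random

# Hypothetical toy knowledge graph of (head, relation, tail) fact triples.
TRIPLES = [
    ("paris", "capital_of", "france"),
    ("berlin", "capital_of", "germany"),
    ("france", "located_in", "europe"),
]
ENTITIES = sorted({h for h, _, _ in TRIPLES} | {t for _, _, t in TRIPLES})
POSITIVES = set(TRIPLES)


def corrupt(triple, rng=random):
    """Uniform negative sampling: replace the head or the tail with a
    randomly drawn entity, resampling until the corrupted triple is not
    a known fact (to avoid false negatives)."""
    h, r, t = triple
    while True:
        if rng.random() < 0.5:
            cand = (rng.choice(ENTITIES), r, t)  # corrupt the head
        else:
            cand = (h, r, rng.choice(ENTITIES))  # corrupt the tail
        if cand not in POSITIVES:
            return cand


neg = corrupt(("paris", "capital_of", "france"))
```

Because the replacement entity is drawn uniformly, most such negatives are trivially easy for the model; GAN-based samplers instead learn to propose harder negatives, which is the gap NKSGAN addresses.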




Acknowledgements

This work is supported by National Natural Science Foundation of China (No. 61772146), Guangdong Natural Science Foundation (Nos. 2016A030313441, 2018A030310051), and Guangzhou Science Technology and Innovation Commission (No. 201803010063).

Author information


Corresponding author

Correspondence to Tianyong Hao.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Liu, H., Hu, K., Wang, FL. et al. Aggregating neighborhood information for negative sampling for knowledge graph embedding. Neural Comput & Applic 32, 17637–17653 (2020). https://doi.org/10.1007/s00521-020-04940-5

