Document-level relation extraction via commonsense knowledge enhanced graph representation learning

Published in Applied Intelligence.

Abstract

Document-level relation extraction (DocRE) aims to reason about complex relational facts among entities by reading, inferring, and aggregating information across multiple sentences in a document. Existing studies construct document-level graphs to enrich interactions between entities. However, these methods focus on the entity nodes and their connections while overlooking the rich knowledge entailed in the original corpus. In this paper, we propose a commonsense knowledge enhanced document-level graph representation method, called CGDRE, which delves into the semantic knowledge of the original corpus and improves DocRE performance. First, we use coreference contrastive learning to capture potential commonsense knowledge. Second, we construct a heterogeneous graph that enriches the graph structure with information from both the original document and the captured commonsense knowledge. Finally, CGDRE infers relations on the aggregated graph and is trained with focal loss. Notably, CGDRE can effectively alleviate the long-tailed distribution problem in DocRE. Experiments on the public datasets DocRED, DialogRE, and MPDD show that CGDRE significantly outperforms other baselines. Extensive analyses demonstrate that this performance gain stems from the commonsense knowledge enhanced graph relation representation.
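The abstract states that CGDRE is trained with focal loss to alleviate the long-tailed relation distribution. The paper's exact formulation is not shown on this page, but a minimal sketch of standard binary focal loss (Lin et al.) illustrates the mechanism it relies on: well-classified, frequent examples are down-weighted so that hard, rare-relation examples dominate training. The `gamma` and `alpha` values below are conventional defaults, not hyperparameters taken from the paper.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single prediction.

    p: predicted probability of the positive class (0 < p < 1)
    y: gold label, 0 or 1
    gamma: focusing parameter; larger values down-weight easy examples more
    alpha: class-balance weight for the positive class
    """
    pt = p if y == 1 else 1.0 - p          # probability assigned to the true class
    w = alpha if y == 1 else 1.0 - alpha   # class-balance weight for the true class
    # The (1 - pt)^gamma factor shrinks the loss of confident, correct
    # predictions toward zero, leaving rare/hard examples to drive the gradient.
    return -w * (1.0 - pt) ** gamma * math.log(pt)

# An easy positive (p = 0.95) contributes far less loss than a hard one (p = 0.30),
# which is what counteracts the long-tailed label distribution.
easy = focal_loss(0.95, 1)
hard = focal_loss(0.30, 1)
```

With `gamma = 0` and `alpha = 1`, the expression reduces to ordinary cross-entropy, so focal loss can be read as cross-entropy with an adaptive example weight.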


Figures 1–5 (available in the full article).


Data availability

https://github.com/lrongzheni/CGDRE


Acknowledgements

The authors would like to thank the Associate Editor and anonymous reviewers for their valuable comments and suggestions. This work is funded in part by the National Natural Science Foundation of China under Grant No. 62176029, in part by the Natural Science Foundation of Chongqing, China under Grant cstc2021jcyj-bsh0123, and in part by the National Key Research and Development Program of China under Grant 2017YFB1402400. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsors.

Author information

Authors and Affiliations

Authors

Contributions

Qizhu Dai: Conceptualization, Methodology, Software, Investigation, Writing - original draft. Rongzhen Li: Validation, Formal analysis, Visualization, Writing - review & editing. Zhongxuan Xue: Validation. Xue Li: Validation. Jiang Zhong: Validation, Writing - review & editing.

Corresponding authors

Correspondence to Rongzhen Li or Jiang Zhong.

Ethics declarations

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Dai, Q., Li, R., Xue, Z. et al. Document-level relation extraction via commonsense knowledge enhanced graph representation learning. Appl Intell 55, 165 (2025). https://doi.org/10.1007/s10489-024-05985-y
