DOI: 10.1145/3502223.3502245 · Research article

Pre-classification Supporting Reasoning for Document-level Relation Extraction

Published: 24 January 2022

ABSTRACT

The document-level relation extraction task aims to extract relational triples from a document consisting of multiple sentences. Most previous models focus on modeling the dependencies between entities and neglect the reasoning mechanism. Some other models implicitly construct paths between co-sentence entities to find semantic relations. However, they ignore the interactions between different triples; in particular, some triples play an important role in predicting others. In this short research paper, we propose a new two-stage framework, PCSR (Pre-classification Supporting Reasoning), which captures the interactions between triples and uses this information for reasoning. Specifically, in the first stage we make a pre-classification for each entity pair. We then aggregate the embeddings of the predicted triples to enhance the entity representations and classify again. Since the second classification can find triples missed in the first stage, we take its result as a supplement to the first. Experiments on DocRED show that our method achieves an F1 score of 62.11 on the test set, an improvement of 0.81 over the previous state-of-the-art model, which demonstrates the effectiveness of our reasoning mechanism.
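The two-stage pipeline the abstract describes can be sketched in a few lines. This is a minimal illustrative toy, not the paper's model: the dot-product scorer, the 0.5 mixing weight, and all function names (`classify`, `enhance`, `pcsr`) are assumptions standing in for the learned classifier and the triple-embedding aggregation.

```python
def classify(pairs, reps, threshold=0.5):
    """Classifier stub: score each (head, tail) entity pair by the dot
    product of their representations; keep pairs above the threshold
    as predicted 'triples' (relation labels omitted for brevity)."""
    triples = []
    for h, t in pairs:
        score = sum(a * b for a, b in zip(reps[h], reps[t]))
        if score > threshold:
            triples.append((h, t))
    return triples

def enhance(reps, triples):
    """Aggregate embeddings of predicted triples back into the entity
    representations: each entity in a predicted triple absorbs a scaled
    copy of its partner's original embedding (0.5 is an arbitrary weight)."""
    new_reps = {e: list(v) for e, v in reps.items()}
    for h, t in triples:
        new_reps[h] = [a + 0.5 * b for a, b in zip(new_reps[h], reps[t])]
        new_reps[t] = [a + 0.5 * b for a, b in zip(new_reps[t], reps[h])]
    return new_reps

def pcsr(pairs, reps):
    first = classify(pairs, reps)                    # stage 1: pre-classification
    second = classify(pairs, enhance(reps, first))   # stage 2: re-classify with enhanced reps
    return sorted(set(first) | set(second))          # stage 2 supplements stage 1
```

With suitable toy embeddings, a pair missed in stage 1 can clear the threshold in stage 2 once its entities have absorbed information from a stage-1 triple, which is the "pre-classification supporting reasoning" idea in miniature.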


Published in
IJCKG '21: Proceedings of the 10th International Joint Conference on Knowledge Graphs, December 2021, 204 pages.
Copyright © 2021 ACM.
Publisher: Association for Computing Machinery, New York, NY, United States.
