ABSTRACT
The document-level relation extraction task aims to extract relational triples from a document consisting of multiple sentences. Most previous models focus on modeling the dependencies between entities and neglect the reasoning mechanism. Some other models implicitly construct paths between co-sentence entities to find semantic relations. However, they ignore that different triples interact with one another; in particular, some triples play an important role in predicting others. In this short research paper, we propose a new two-stage framework, PCSR (Pre-classification Supporting Reasoning), which captures the interactions between triples and exploits this information for reasoning. Specifically, in the first stage we make a pre-classification for each entity pair. We then aggregate the embeddings of the predicted triples to enhance the entity representations and perform a second classification. Since the second classification can recover triples missed in the first stage, we take its result as a supplement to the first. Experiments on DocRED show that our method achieves an F1 score of 62.11 on the test set, an improvement of 0.81 over the previous state-of-the-art model, which demonstrates the effectiveness of our reasoning mechanism.
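The two-stage pipeline described above can be sketched in miniature. This is only an illustrative toy, not the paper's implementation: the bilinear scorer, the mean-pooled triple embedding, and the `0.1` mixing weight are all stand-in assumptions for the learned components PCSR would use.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_ENTITIES, DIM, NUM_RELATIONS = 4, 8, 3

# Stand-ins for learned parameters: entity embeddings from an encoder,
# and one bilinear scoring matrix per relation type.
entity_emb = rng.normal(size=(NUM_ENTITIES, DIM))
rel_weights = rng.normal(size=(NUM_RELATIONS, DIM, DIM))

def classify(emb, threshold=0.0):
    """Score every ordered (head, tail) pair against every relation;
    keep triples whose score clears the threshold."""
    triples = set()
    for h in range(NUM_ENTITIES):
        for t in range(NUM_ENTITIES):
            if h == t:
                continue
            scores = np.einsum("d,rde,e->r", emb[h], rel_weights, emb[t])
            for r in np.nonzero(scores > threshold)[0]:
                triples.add((h, int(r), t))
    return triples

# Stage 1: pre-classification on the base entity representations.
stage1 = classify(entity_emb)

# Aggregate each predicted triple's embedding back into the entities it
# mentions (here: mean-pooled head/tail embedding, weighted by 0.1).
enhanced = entity_emb.copy()
for h, r, t in stage1:
    triple_emb = (entity_emb[h] + entity_emb[t]) / 2
    enhanced[h] = enhanced[h] + 0.1 * triple_emb
    enhanced[t] = enhanced[t] + 0.1 * triple_emb

# Stage 2: re-classify with enhanced representations; the union treats
# stage 2 as a supplement to the stage 1 predictions.
stage2 = classify(enhanced)
final = stage1 | stage2
```

The key design point is that stage 2 never discards stage-1 predictions; it can only add triples that become detectable once entity representations are enriched with stage-1 evidence.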