Abstract:
Named entity recognition and relation extraction are two important tasks in information extraction. Many recent works model the two tasks jointly and achieve great success. However, these methods still suffer from relation semantic insufficiency, head entity dependency, and the nested entity detection problem. To address these challenges, we propose a relation-aware span-level transformer network (RSTN), which contains a span-level encoder for entity recognition and a non-autoregressive decoder for relation extraction. Specifically, we generate explicit representations for possible spans in our span-level encoder to extract overlapping entities. In addition, we encode relation semantics in our non-autoregressive decoder and exploit a copy mechanism to extract head entities and tail entities simultaneously by modifying the causal attention mask. Through a span-level multi-head attention mechanism, we enhance the interaction between entity recognition and relation extraction in our model. We evaluate our model on three public datasets: ACE05, ADE, and SciERC. Experimental results show that the proposed model outperforms previous strong baseline methods.
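The span-level encoding idea mentioned above can be illustrated with a minimal sketch. This is not the paper's exact construction (the abstract does not specify it); it shows one common scheme, assumed here for illustration, in which every candidate span up to a maximum width gets an explicit representation built from its boundary token embeddings. Because all spans are enumerated, overlapping and nested candidates are represented simultaneously rather than forced into a single token-level tag sequence.

```python
import numpy as np

def enumerate_span_representations(token_embs, max_width=4):
    """Build explicit representations for all candidate spans.

    Hypothetical scheme (the paper's actual span encoder may differ):
    each span (i, j) is represented by concatenating the embeddings
    of its boundary tokens. Nested and overlapping spans are all
    enumerated, which is what enables nested entity detection.
    """
    n, d = token_embs.shape
    spans, reps = [], []
    for i in range(n):
        # only consider spans of length <= max_width to keep the
        # candidate set linear in sentence length
        for j in range(i, min(i + max_width, n)):
            spans.append((i, j))
            reps.append(np.concatenate([token_embs[i], token_embs[j]]))
    return spans, np.stack(reps)

# toy usage: 5 tokens with embedding dimension 8
spans, reps = enumerate_span_representations(np.random.rand(5, 8), max_width=3)
print(len(spans), reps.shape)  # 12 candidate spans, each of dimension 16
```

A classifier over `reps` would then score each span as an entity (or null), and the decoder's span-level attention can attend over these same representations, which is one way the encoder and decoder interaction described above could be realized.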
Date of Conference: 18-23 July 2022
Date Added to IEEE Xplore: 30 September 2022