Abstract:
This article introduces an approach that learns segment-level context for sequence labeling in natural language processing (NLP). Previous approaches limit their basic unit to a word for feature extraction because sequence labeling is a token-level task in which labels are annotated word by word. However, the text segment is the ultimate unit for labeling, and segment information can easily be obtained from labels annotated in an IOB/IOBES format. Most neural sequence labeling models expand their learning capacity by employing additional layers, such as a character-level layer, or by jointly training NLP tasks with common knowledge. The architecture of our model is based on the charLSTM-BiLSTM-CRF model, which we extend with an additional segment-level layer called segLSTM. We therefore propose a sequence labeling algorithm called charLSTM-BiLSTM-CRF-segLSTMsLM, which employs an additional segment-level long short-term memory (LSTM) that trains features by learning adjacent context within a segment. We demonstrate the performance of our model on four sequence labeling datasets, namely Penn Treebank, CoNLL 2000, CoNLL 2003, and OntoNotes 5.0. Experimental results show that our model performs better than state-of-the-art variants of BiLSTM-CRF. In particular, the proposed model improves performance on tasks that require finding appropriate labels for multi-token segments.
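As an illustration of the segment information the abstract refers to, the following is a minimal sketch (not the authors' code) of how multi-token segments can be read off an IOB-style tag sequence; the function name and example tags are hypothetical.

```python
# Illustrative only: recover (start, end, type) spans from IOB tags,
# showing that segment boundaries are implicit in the annotation itself.
def extract_segments(tags):
    """Return (start_index, end_index, segment_type) spans from IOB tags."""
    segments = []
    start, seg_type = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and start is None):
            if start is not None:              # close the previous segment
                segments.append((start, i - 1, seg_type))
            start, seg_type = i, tag[2:]       # open a new segment
        elif tag == "O" or (tag.startswith("I-") and tag[2:] != seg_type):
            if start is not None:              # close the current segment
                segments.append((start, i - 1, seg_type))
                start, seg_type = None, None
            if tag.startswith("I-"):           # tolerate a dangling I- tag
                start, seg_type = i, tag[2:]
    if start is not None:                      # close a segment at the end
        segments.append((start, len(tags) - 1, seg_type))
    return segments

# Example: a three-token location segment followed by two O tokens.
print(extract_segments(["B-LOC", "I-LOC", "I-LOC", "O", "O"]))
# -> [(0, 2, 'LOC')]
```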
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing (Volume 28)