Abstract:
Text classification models based on recurrent neural networks have poor parallel processing ability because the text must be processed word by word. Text classification models based on convolutional neural networks, in turn, struggle to capture contextual information because they are limited by the size of the convolutional kernel. In this paper, we propose an attention network based on gating and a transposed structure, which enables the network to understand context and extract important features from text more efficiently; compared with recurrent models, it also offers greater parallel computing capability. Experiments are carried out on six commonly used text classification datasets. The results show that the proposed model achieves state-of-the-art performance on four datasets and is highly competitive on the other two. In addition, we analyze the effects of gating and transposition on the overall model.
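To make the contrast with recurrent and convolutional models concrete, the sketch below shows a gated self-attention classifier: every token attends to the full sequence in one parallel step (no word-by-word recurrence, no kernel-width limit), and a sigmoid gate decides how much attended context to mix into each token representation. The abstract does not specify the paper's architecture, so the gating formula, the mean-pooled classifier head, and all names here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class GatedAttentionClassifier(nn.Module):
    """Hypothetical gated self-attention text classifier (not the paper's model)."""

    def __init__(self, vocab_size, embed_dim, num_heads, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Self-attention lets every position attend to the whole sequence at
        # once, unlike an RNN (sequential) or a CNN (kernel-limited context).
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Assumed gating: a learned sigmoid gate blends each token's own
        # embedding with its attended context.
        self.gate = nn.Linear(2 * embed_dim, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)                       # (batch, seq, dim)
        ctx, _ = self.attn(x, x, x)                     # contextualized features
        g = torch.sigmoid(self.gate(torch.cat([x, ctx], dim=-1)))
        h = g * ctx + (1 - g) * x                       # gated combination
        return self.classifier(h.mean(dim=1))           # pooled class logits

# Toy usage: a batch of 2 sequences of 20 token ids, 5 output classes.
model = GatedAttentionClassifier(vocab_size=10000, embed_dim=64,
                                 num_heads=4, num_classes=5)
logits = model(torch.randint(0, 10000, (2, 20)))        # shape (2, 5)

Because the attention step is a single batched matrix operation over the whole sequence, the forward pass parallelizes across positions, which is the parallelism advantage over recurrent models that the abstract claims.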
Date of Conference: 14-19 July 2019
Date Added to IEEE Xplore: 30 September 2019