Abstract
Fine-tuning a pre-trained language model is the current mainstream approach to text classification. Taking fine-tuned BERT as an example, this approach has three main problems: first, training the massive number of parameters incurs high training costs; second, the model easily over-fits the training samples, resulting in low transferability; third, the fine-tuned model performs poorly on long text classification. In this paper, we take sentiment classification as the example task and use classification accuracy as the metric. For the first problem, we compare two methods: using a compressed language model (accuracy decreased by 0.1%) and entirely freezing the pre-trained weights (decreased by 4%). For the second problem, with the weights frozen (decreased by 4%), Convolutional Neural Networks (CNN) and Capsule Networks (CAP) are used to extract n-gram features and clustering features, which recovers classification accuracy (increased by 0.5% and decreased by 0.2%, respectively). A corpus transfer test shows that CAP's accuracy exceeds that of the fine-tuned model (by 0.6%). In addition, this paper proposes a method for the long text classification task that expands the position embeddings to support longer inputs (accuracy increased by 1.1%). Finally, the paper compares the training speed and parameter scale of the different models under combinations of these strategies, and evaluates each model with F1 score, precision, recall, and accuracy.
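A minimal sketch of the frozen-weight strategy with a CNN head follows. It is not the authors' released code: the checkpoint name (bert-base-chinese), kernel sizes, and filter counts are illustrative assumptions; only the CNN head and classifier receive gradients, so the trainable parameter count stays small.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class FrozenBertCNN(nn.Module):
    """Frozen BERT encoder feeding a small CNN head that extracts n-gram features."""

    def __init__(self, checkpoint="bert-base-chinese", num_classes=2,
                 kernel_sizes=(2, 3, 4), num_filters=128):
        super().__init__()
        self.bert = BertModel.from_pretrained(checkpoint)
        # Freeze all pre-trained weights; only the head below is trained.
        for p in self.bert.parameters():
            p.requires_grad = False
        hidden = self.bert.config.hidden_size
        # One convolution per n-gram width.
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes]
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():  # encoder is fixed, no gradients needed
            out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # (batch, seq_len, hidden) -> (batch, hidden, seq_len) for Conv1d.
        x = out.last_hidden_state.transpose(1, 2)
        # Max-pool each n-gram feature map over the sequence dimension.
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(feats, dim=1))
```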
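The long-text strategy enlarges BERT's position embedding table beyond the pre-trained 512 positions so that longer inputs are accepted. The sketch below shows one way to do this with the Hugging Face transformers BERT implementation; repeating the pre-trained table and the target length of 1024 are assumptions for illustration, and the paper's exact initialization scheme may differ.

```python
import torch
from transformers import BertModel


def extend_position_embeddings(model: BertModel, new_max_len: int = 1024) -> BertModel:
    """Expand the position embedding table so inputs longer than 512 tokens are supported."""
    old_emb = model.embeddings.position_embeddings.weight.data  # (512, hidden)
    old_len, hidden = old_emb.shape
    repeats = -(-new_max_len // old_len)  # ceiling division
    # Initialize the new table by tiling the pre-trained one (an assumed choice).
    new_emb = old_emb.repeat(repeats, 1)[:new_max_len].clone()

    new_layer = torch.nn.Embedding(new_max_len, hidden)
    new_layer.weight.data.copy_(new_emb)
    model.embeddings.position_embeddings = new_layer
    # Refresh the cached position_ids buffer (registered by the standard BERT
    # implementation) and the config so forward passes accept the new length.
    model.embeddings.register_buffer(
        "position_ids", torch.arange(new_max_len).expand((1, -1)), persistent=False
    )
    model.config.max_position_embeddings = new_max_len
    return model
```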
Notes
1. https://pan.baidu.com/s/1GrgqQXk5vg6aiaJaZhUAew password: lel4.