Abstract
[Objectives] This study compares the classification performance of a range of machine learning models, covering both traditional machine learning and deep learning approaches. It addresses the absence of chapter-structure category information in academic literature, supports retrieval of content from specified chapter structures, and enables the automatic extraction and customized delivery of specific text services. [Methodology] 31,888 academic articles from the journal PLOS ONE were selected. After data cleaning and segmentation, a text classification corpus containing 313,952 instances of chapter-structure category information was constructed. A total of 17 machine learning models were used in the chapter-structure classification experiments: the traditional models NB, SVM, and CRF, and the deep learning RNN, Bi-LSTM, IDCNN, and BERT model groups. [Results] Among all models, BERT-Bi-LSTM-CRF achieved the best classification performance, with an average F-score of 71.18%, which is 0.51% and 3.31% higher than the second-ranked CRF and third-ranked Bi-LSTM-CRF, respectively. For the deep learning models, using BERT for text representation outperformed word2vec, and both adding an Attention mechanism and replacing the Softmax layer with a CRF layer improved classification results. In addition, an online Chapter Structure Recognition Presentation and Application Platform was developed. It visually presents the overall findings and the model-training process, and supports online chapter-structure recognition with machine learning and deep learning models such as NB, SVM, CRF, Bi-LSTM, and IDCNN.
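The abstract's finding that replacing the Softmax layer with a CRF layer improves classification can be illustrated with a minimal sketch. This is not the paper's implementation; it is a toy linear-chain Viterbi decoder with made-up scores, showing why a CRF, which scores whole label sequences using transition scores between adjacent labels, can correct a per-token Softmax argmax.

```python
# Toy sketch (hypothetical scores, not from the paper): per-token Softmax
# labels each position independently, while a linear-chain CRF adds
# transition scores and decodes the globally best sequence with Viterbi.

def viterbi_decode(emissions, transitions):
    """Return the highest-scoring label sequence.

    emissions:   list of per-token score dicts {label: score}
    transitions: dict {(prev_label, label): score}
    """
    labels = list(emissions[0])
    # scores[l] = best score of any path ending in label l; paths[l] = that path
    scores = {l: emissions[0][l] for l in labels}
    paths = {l: [l] for l in labels}
    for emit in emissions[1:]:
        new_scores, new_paths = {}, {}
        for l in labels:
            prev = max(labels, key=lambda p: scores[p] + transitions[(p, l)])
            new_scores[l] = scores[prev] + transitions[(prev, l)] + emit[l]
            new_paths[l] = paths[prev] + [l]
        scores, paths = new_scores, new_paths
    best = max(labels, key=lambda l: scores[l])
    return paths[best]

# Toy 2-label example ("B" = begins a section, "I" = inside a section).
emissions = [{"B": 2.0, "I": 0.5},
             {"B": 1.1, "I": 1.0},   # per-token argmax picks "B" here
             {"B": 0.2, "I": 1.5}]
# Transition scores discourage one section start immediately after another.
transitions = {("B", "B"): -2.0, ("B", "I"): 1.0,
               ("I", "B"): 0.0, ("I", "I"): 0.5}

softmax_path = [max(e, key=e.get) for e in emissions]  # ['B', 'B', 'I']
crf_path = viterbi_decode(emissions, transitions)      # ['B', 'I', 'I']
print(softmax_path)
print(crf_path)
```

In this toy example the independent argmax predicts an implausible second section start, while the CRF's transition scores steer decoding to a coherent sequence, mirroring the benefit the study reports for the CRF layer.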
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Hu, H., Deng, S., Lu, H., Wang, D. (2020). A Comparative Study on the Classification Performance of Machine Learning Models for Academic Full Texts. In: Sundqvist, A., Berget, G., Nolin, J., Skjerdingstad, K. (eds) Sustainable Digital Communities. iConference 2020. Lecture Notes in Computer Science(), vol 12051. Springer, Cham. https://doi.org/10.1007/978-3-030-43687-2_61
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-43686-5
Online ISBN: 978-3-030-43687-2