Abstract
Machine reading comprehension (MRC) is a challenging task in the natural language processing domain. Mainstream methods such as BERT and XLNet build language models (LMs) with rich contextual information, but lack explicit semantic and syntactic information for MRC. We present TLE-BERT, a novel model that enhances text encoding with token-level and sentence-level features. At the token level, we add semantic representations to the BERT-based embedding; the desired features, such as POS tags, entity types, noun phrases, and synonyms of tokens, are selected by a sentinel mechanism. At the sentence level, we introduce grammatical structure into the text encoding through the dependency relations of the sentence, which are converted into m-hop matrices and merged into the embedding by a multi-head attention transformer layer. Each feature improves the performance of the model to a varying degree, and experiments show that TLE-BERT, which combines all the features organically, strengthens the expressiveness of the text representation, achieving significant improvements on popular MRC datasets.
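To make the two fusion steps in the abstract concrete, the PyTorch sketch below shows one plausible reading of each: a sentinel-style gate that decides, per token, how much of each auxiliary feature embedding (POS, entity type, noun phrase, synonym) to add to the contextual embedding, and a multi-head attention layer biased toward tokens reachable within m hops in the dependency graph. The module names, the gating form, and the mask construction are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only -- not the TLE-BERT code. Gating form, dimensions,
# and mask construction are assumptions for exposition.
import torch
import torch.nn as nn

class SentinelFeatureFusion(nn.Module):
    """Token level: gate each auxiliary feature (POS, entity type, noun
    phrase, synonym id) before adding it to the contextual embedding."""
    def __init__(self, hidden, n_features, feat_vocab):
        super().__init__()
        self.feat_emb = nn.ModuleList(
            nn.Embedding(feat_vocab, hidden) for _ in range(n_features))
        # One sentinel gate per feature, conditioned on token + feature.
        self.gates = nn.ModuleList(
            nn.Linear(2 * hidden, 1) for _ in range(n_features))

    def forward(self, tok_emb, feat_ids):
        # tok_emb: (B, T, H); feat_ids: (B, T, n_features) integer ids.
        out = tok_emb
        for i, (emb, gate) in enumerate(zip(self.feat_emb, self.gates)):
            f = emb(feat_ids[..., i])                                 # (B, T, H)
            g = torch.sigmoid(gate(torch.cat([tok_emb, f], dim=-1)))  # (B, T, 1)
            out = out + g * f        # sentinel decides how much to keep
        return out

def m_hop_reachability(adj, m):
    """Sentence level: 0/1 matrix of tokens reachable within m hops of the
    dependency graph. adj: (B, T, T) arc matrix."""
    eye = torch.eye(adj.size(-1), device=adj.device)
    adj = (adj + eye).clamp(max=1.0)      # add self-loops, so (A+I)^m = <=m hops
    reach = adj.clone()
    for _ in range(m - 1):
        reach = (reach @ adj).clamp(max=1.0)
    return reach

class DependencyAttention(nn.Module):
    """Multi-head attention restricted to m-hop dependency neighbours by
    adding a large negative bias to all other positions."""
    def __init__(self, hidden, heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)

    def forward(self, x, reach):
        bias = (1.0 - reach) * -1e9                                 # block non-neighbours
        mask = bias.repeat_interleave(self.attn.num_heads, dim=0)   # (B*heads, T, T)
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return x + out                                              # residual connection

# Toy usage: batch 2, sequence 8, hidden 32, three feature types.
B, T, H = 2, 8, 32
tok = torch.randn(B, T, H)                  # stand-in for BERT output
feats = torch.randint(0, 50, (B, T, 3))     # fake POS / entity / NP ids
adj = (torch.rand(B, T, T) > 0.8).float()   # fake dependency arcs
x = SentinelFeatureFusion(H, 3, 50)(tok, feats)
x = DependencyAttention(H, heads=4)(x, m_hop_reachability(adj, m=2))
print(x.shape)  # torch.Size([2, 8, 32])

Gating with a sigmoid (rather than a hard feature selection) keeps the fusion differentiable, and adding self-loops before taking matrix powers makes the m-hop matrix cumulative, so every token can still attend to itself even in sparse dependency trees.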
References
Peters, M.E., et al.: Deep contextualized word representations. In: North American Chapter of the Association for Computational Linguistics (2018)
Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.: Improving language understanding by generative pre-training. Technical Report, OpenAI (2018)
Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems, vol. 30, pp. 5998–6008 (2017)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 (2018)
Lan, Z., Chen, M., Goodman, S., et al.: ALBERT: a lite BERT for self-supervised learning of language representations. In: Eighth International Conference on Learning Representations, ICLR 2020, pp. 1–17 (2020)
Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems, vol. 26, pp. 3111–3119 (2013)
Zhang, Z., Wu, Y., Zhao, H., et al.: Semantics-aware BERT for language understanding. In: Thirty-Fourth AAAI Conference on Artificial Intelligence, pp. 1–8 (2020)
Liu, W., Zhou, P., Zhao, Z., et al.: K-BERT: enabling language representation with knowledge graph. In: Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pp. 1–8 (2020)
Yang, A., Wang, Q., Liu, T., et al.: Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019, pp. 2346–2357 (2019)
Joshi, M., Chen, D., Liu, Y., et al.: SpanBERT: improving pre-training by representing and predicting spans. arXiv:1907.10529 (2019)
Nguyen, T., Rosenberg, M., Song, X., Gao, J.F., Tiwary, S., Majumder, R., Deng, L.: MS MARCO: a human-generated machine reading comprehension dataset. arXiv:1611.09268 (2016)
Zhang, Z., Han, X., Liu, Z., et al.: ERNIE: enhanced language representation with informative entities. In: 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019, pp. 1–11 (2019)
Sun, Y., Wang, S., Li, Y., et al.: ERNIE: enhanced representation through knowledge integration. arXiv:1904.09223 (2019)
Sun, Y., Wang, S., Li, Y., et al.: ERNIE 2.0: a continual pre-training framework for language understanding. arXiv:1907.12412 (2019)
Lauscher, A., Vulić, I., Ponti, E.M., Korhonen, A., Glavaš, G.: Informing unsupervised pretraining with external linguistic knowledge. arXiv:1909.02339 (2019)
Xiong, W., Du, J., Wang, W.Y., et al.: Pretrained encyclopedia: weakly supervised knowledge-pretrained language model. In: Eighth International Conference on Learning Representations, ICLR 2020, pp. 1–13 (2020)
Pennington, J., Socher, R., Manning, C.: GloVe: global vectors for word representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, pp. 1532–1543 (2014)
Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., Le, Q.V.: XLNet: generalized autoregressive pretraining for language understanding. arXiv:1906.08237 (2019)
Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.: Language models are unsupervised multitask learners. Technical Report, OpenAI (2019)
Lehnert, W.G.: The Process of Question Answering. PhD thesis, Yale University (1977)
Liu, S., Zhang, X., Zhang, S., Wang, H., Zhang, W.: Neural machine reading comprehension: methods and trends. arXiv:1907.01118 (2019)
Dai, L., Xu, B., Song, H.: Feature-level attention based sentence encoding for neural relation extraction. In: Tang, J., Kan, M.-Y., Zhao, D., Li, S., Zan, H. (eds.) NLPCC 2019. LNCS (LNAI), vol. 11838, pp. 184–196. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32233-5_15
Yang, B., Mitchell, T.: Leveraging knowledge bases in LSTMs for improving machine reading. In: 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, vol. 1, pp. 1436–1446 (2017)
Hermann, K.M., et al.: Teaching machines to read and comprehend. In: Advances in Neural Information Processing Systems, pp. 1693–1701 (2015)
Rajpurkar, P., Zhang, J., Lopyrev, K., Liang, P.: SQuAD: 100,000+ questions for machine comprehension of text. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, pp. 2383–2392 (2016)
Yu, A.W., Dohan, D., Luong, M.-T., Zhao, R., Chen, K., Norouzi, M., Le, Q.V.: QANet: combining local convolution with global self-attention for reading comprehension. arXiv:1804.09541 (2018)
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Wang, S., Song, H., Xu, B., Zhang, H. (2021). Enhancing Embedding via Two-Level Features for Machine Reading Comprehension. In: Chen, H., Liu, K., Sun, Y., Wang, S., Hou, L. (eds.) Knowledge Graph and Semantic Computing: Knowledge Graph and Cognitive Intelligence. CCKS 2020. Communications in Computer and Information Science, vol 1356. Springer, Singapore. https://doi.org/10.1007/978-981-16-1964-9_21
DOI: https://doi.org/10.1007/978-981-16-1964-9_21
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-1963-2
Online ISBN: 978-981-16-1964-9
eBook Packages: Computer Science (R0)