Abstract
Recent advances in Pre-trained Language Models (PrLMs) have driven general-domain multi-choice Machine Reading Comprehension (MRC) to a new level. However, PrLMs perform much worse on domain-specific MRC such as biomedicine, due to the lack of effective matching networks to capture the relationships among the documents, the question, and the candidate options. In this paper, we propose a Sketchy-Intensive (SkeIn) reading comprehension model, which simulates the cognitive thinking process humans use to solve Chinese multi-choice biomedical MRC questions: (1) obtaining a general impression of the content through a sketchy reading pass; (2) capturing fine-grained information and the relationships among the documents, question, and candidate options through an intensive reading pass, and making the final prediction. Experimental results show that our SkeIn model achieves substantial improvements over competing baseline PrLMs, with average accuracy gains of +4.03% dev/+3.43% test, +2.69% dev/+3.22% test, and +5.31% dev/+5.25% test over directly fine-tuned BERT-Base, BERT-wwm-ext, and RoBERTa-wwm-ext-large, respectively, demonstrating the effectiveness of SkeIn in enhancing the general performance of PrLMs on biomedical MRC tasks.
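The two-pass flow described above can be sketched as a minimal pipeline. This is a hypothetical illustration of the control flow only: the lexical-overlap scorers below stand in for the paper's fast sketchy reader and PrLM-based intensive matching network, and all function names and data are invented for the example.

```python
# Hypothetical sketch of the two-pass Sketchy-Intensive (SkeIn) reading flow.
# The scoring functions are cheap token-overlap stand-ins for the paper's
# PrLM-based components; only the sketch-then-intensive structure is illustrated.

def sketch_read(documents, question, top_k=2):
    """Sketchy pass: form a quick impression by ranking documents
    with a cheap token-overlap score and keeping the top-k."""
    q_tokens = set(question.split())
    ranked = sorted(documents,
                    key=lambda d: len(q_tokens & set(d.split())),
                    reverse=True)
    return ranked[:top_k]

def intensive_read(selected_docs, question, options):
    """Intensive pass: match each candidate option against the selected
    evidence (a stand-in for the PrLM matching network) and predict."""
    ctx_tokens = set(" ".join(selected_docs).split()) | set(question.split())
    scores = [len(ctx_tokens & set(opt.split())) for opt in options]
    return scores.index(max(scores))  # index of the predicted option

docs = ["aspirin inhibits platelet aggregation",
        "penicillin targets bacterial cell walls",
        "insulin regulates blood glucose"]
question = "which drug inhibits platelet aggregation"
options = ["insulin", "aspirin", "penicillin"]

evidence = sketch_read(docs, question)
prediction = intensive_read(evidence, question, options)
print(options[prediction])  # -> aspirin
```

In the actual model, both passes operate over PrLM representations rather than token overlap; the sketch here only conveys the coarse-to-fine reading order.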
We thank the anonymous reviewers for their insightful comments and suggestions. This work is supported by the National Natural Science Foundation of China (NSFC No. 61972187).
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Li, J., Zhong, S., Chen, K., Li, T. (2021). SkeIn: Sketchy-Intensive Reading Comprehension Model for Multi-choice Biomedical Questions. In: Wei, Y., Li, M., Skums, P., Cai, Z. (eds) Bioinformatics Research and Applications. ISBRA 2021. Lecture Notes in Computer Science, vol 13064. Springer, Cham. https://doi.org/10.1007/978-3-030-91415-8_47
DOI: https://doi.org/10.1007/978-3-030-91415-8_47
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-91414-1
Online ISBN: 978-3-030-91415-8
eBook Packages: Computer Science, Computer Science (R0)