
SkeIn: Sketchy-Intensive Reading Comprehension Model for Multi-choice Biomedical Questions

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNBI, volume 13064)

Abstract

Recent advances in Pre-trained Language Models (PrLMs) have driven general-domain multi-choice Machine Reading Comprehension (MRC) to a new level. However, they perform much worse on domain-specific MRC such as biomedicine, largely because they lack effective matching networks to capture the relationships among the documents, the question, and the candidate options. In this paper, we propose a Sketchy-Intensive (SkeIn) reading comprehension model that simulates the human cognitive reading process to solve Chinese multi-choice biomedical MRC questions: (1) obtaining a general impression of the content with a sketchy reading process; (2) capturing fine-grained information and the relationships among the documents, the question, and the candidate options with an intensive reading process, and making the final prediction. Experimental results show that SkeIn achieves substantial improvements over competing baseline PrLMs, with average accuracy gains of +4.03% dev/+3.43% test, +2.69% dev/+3.22% test, and +5.31% dev/+5.25% test over directly fine-tuned BERT-Base, BERT-wwm-ext, and RoBERTa-wwm-ext-large, respectively, indicating the effectiveness of SkeIn in enhancing the overall performance of PrLMs on biomedical MRC tasks.
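To make the two-pass idea in the abstract concrete, the sketch below shows a minimal multi-choice MRC scorer built on a Chinese PrLM. It is an illustrative assumption, not the authors' SkeIn implementation: the class name SketchyIntensiveScorer, the checkpoint hfl/chinese-bert-wwm-ext, and the specific pooling and attention choices are hypothetical. Each (document, question, option) sequence is encoded once, a coarse "sketchy" summary is pooled, and an "intensive" attention pass re-reads the tokens conditioned on that summary to score the option.

```python
# Hypothetical sketch of a sketchy-then-intensive multi-choice MRC scorer.
# The two-pass pooling below approximates the idea in the abstract; it is
# NOT the authors' exact SkeIn architecture.
import torch
import torch.nn as nn
from transformers import BertModel


class SketchyIntensiveScorer(nn.Module):
    def __init__(self, model_name="hfl/chinese-bert-wwm-ext"):  # assumed checkpoint
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.attn = nn.Linear(hidden * 2, 1)    # intensive pass: token scoring
        self.classifier = nn.Linear(hidden, 1)  # one logit per candidate option

    def forward(self, input_ids, attention_mask, token_type_ids):
        # Inputs are shaped (batch, num_options, seq_len): one
        # "[CLS] document [SEP] question + option [SEP]" sequence per candidate.
        b, n, l = input_ids.shape
        flat = lambda t: t.reshape(b * n, l)
        out = self.encoder(input_ids=flat(input_ids),
                           attention_mask=flat(attention_mask),
                           token_type_ids=flat(token_type_ids))
        tokens = out.last_hidden_state                      # (b*n, l, hidden)
        mask = flat(attention_mask).unsqueeze(-1).float()   # (b*n, l, 1)
        # Sketchy pass: coarse summary via masked mean pooling over all tokens.
        sketch = (tokens * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        # Intensive pass: re-read the tokens, attending with the sketch as query.
        query = sketch.unsqueeze(1).expand_as(tokens)
        weights = self.attn(torch.cat([tokens, query], dim=-1))
        weights = weights.masked_fill(mask == 0, -1e4).softmax(dim=1)
        intensive = (weights * tokens).sum(1)               # (b*n, hidden)
        return self.classifier(intensive).reshape(b, n)     # option logits
```

For a typical exam item, the candidate answers share the same retrieved documents and question and are stacked along the num_options axis; training then applies a cross-entropy loss over the returned logits against the index of the correct option.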

We thank the anonymous reviewers for their insightful comments and suggestions. This work is supported by the National Natural Science Foundation of China (NSFC No. 61972187).


Notes

  1. http://www.nmec.org.cn/Pages/ArticleList-12-0-0-1.html.

  2. https://dumps.wikimedia.org/.

  3. https://www.elastic.co/.


Author information

Corresponding author

Correspondence to Jing Li.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Li, J., Zhong, S., Chen, K., Li, T. (2021). SkeIn: Sketchy-Intensive Reading Comprehension Model for Multi-choice Biomedical Questions. In: Wei, Y., Li, M., Skums, P., Cai, Z. (eds) Bioinformatics Research and Applications. ISBRA 2021. Lecture Notes in Computer Science (LNBI), vol 13064. Springer, Cham. https://doi.org/10.1007/978-3-030-91415-8_47


  • DOI: https://doi.org/10.1007/978-3-030-91415-8_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-91414-1

  • Online ISBN: 978-3-030-91415-8

  • eBook Packages: Computer Science, Computer Science (R0)
