Abstract
Multiple-choice reading comprehension is a challenging task that requires a machine to select the correct answer from a set of candidate answers. In this paper, we propose a model that follows a matching-integration-verification-prediction framework: it explicitly employs a verification module, inspired by how human readers double-check their answers, and judges each option simultaneously according to both the evidence information and the verified information. The verification module, which is responsible for rechecking the information produced by matching, selectively combines the matched information from the passage and the options instead of passing it to the prediction module with equal weight. Experimental results show that our model achieves significant improvements on several multiple-choice reading comprehension benchmark datasets.
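To illustrate the selective combination performed by the verification module, the following is a minimal PyTorch sketch of a gating-style verifier followed by an option scorer. The class names, the sigmoid gate, and all dimensions are illustrative assumptions for exposition only, not the authors' implementation.

```python
import torch
import torch.nn as nn


class VerificationGate(nn.Module):
    """Hypothetical gate that selectively combines passage-matched and
    option-matched representations instead of weighting them equally."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, passage_match: torch.Tensor, option_match: torch.Tensor) -> torch.Tensor:
        # Both inputs: (batch, num_options, hidden_size)
        g = torch.sigmoid(self.gate(torch.cat([passage_match, option_match], dim=-1)))
        return g * passage_match + (1.0 - g) * option_match


class OptionScorer(nn.Module):
    """Scores every candidate option from its verified representation."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.verify = VerificationGate(hidden_size)
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, passage_match: torch.Tensor, option_match: torch.Tensor) -> torch.Tensor:
        verified = self.verify(passage_match, option_match)
        # One logit per option; a softmax over options would be applied at training time.
        return self.score(verified).squeeze(-1)  # (batch, num_options)


# Usage with dummy tensors (shapes are assumptions).
batch, num_options, hidden = 2, 4, 768
scorer = OptionScorer(hidden)
logits = scorer(torch.randn(batch, num_options, hidden),
                torch.randn(batch, num_options, hidden))
print(logits.shape)  # torch.Size([2, 4])
```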
Notes
1. We will release our code upon publication.
Acknowledgments
We thank the reviewers for their insightful comments. We also thank Effyic Intelligent Technology (Beijing) for providing computing resources. This work was supported in part by the National Key Research and Development Program of China under Grant No. 2016YFB0801003.
About this paper
Cite this paper
Xing, L., Hu, Y., Xie, Y., Wang, C., Hu, Y. (2020). A Matching-Integration-Verification Model for Multiple-Choice Reading Comprehension. In: Li, G., Shen, H., Yuan, Y., Wang, X., Liu, H., Zhao, X. (eds) Knowledge Science, Engineering and Management. KSEM 2020. Lecture Notes in Computer Science, vol 12275. Springer, Cham. https://doi.org/10.1007/978-3-030-55393-7_21