Abstract
Answering multiple-choice questions from university admission exams (Gaokao in Chinese) is a challenging AI task, since it requires effective representations that capture the complicated semantic relations between sentences in an article, as well as a strong ability to handle long text. To address these challenges, we propose a key-elements graph to enhance context semantic representation, together with a comprehensive evidence extraction method inspired by existing approaches. Our model first extracts evidence sentences from the passage according to the corresponding question and options, reducing the impact of noise. It then combines syntactic analysis techniques with a graph neural network to construct the key-elements graph based on the extracted sentences. Finally, the learned graph-node representations are fused into the context representation to enrich it with syntactic information. Experiments on a Gaokao Chinese multiple-choice dataset demonstrate that the proposed model obtains substantial accuracy gains over various neural baseline models.
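The first stage of the pipeline described above, selecting evidence sentences relevant to the question and options, can be sketched as a simple retrieval step. The snippet below is a minimal illustration, not the paper's actual extractor: it scores each passage sentence by lexical overlap with the question and options and keeps the top-k. The whitespace tokenizer, the English example text, and the value of k are all illustrative assumptions (a real Gaokao passage would need a Chinese segmenter and a stronger scorer).

```python
def tokenize(text):
    # Naive whitespace tokenizer; Chinese text would need a word
    # segmenter (e.g. jieba) instead.
    return set(text.lower().split())

def extract_evidence(sentences, question, options, k=3):
    # Build a query token set from the question and all options.
    query = tokenize(question)
    for opt in options:
        query |= tokenize(opt)
    # Score each sentence by token overlap with the query.
    scored = [(len(tokenize(s) & query), i, s) for i, s in enumerate(sentences)]
    # Keep the k highest-scoring sentences, restored to passage order.
    top = sorted(scored, key=lambda t: -t[0])[:k]
    return [s for _, _, s in sorted(top, key=lambda t: t[1])]

sentences = [
    "The exam covers classical Chinese literature.",
    "Weather patterns vary across provinces.",
    "Candidates must analyze the author's argument in the passage.",
]
evidence = extract_evidence(
    sentences,
    "What must candidates analyze in the passage?",
    ["The author's argument", "Weather patterns"],
    k=2,
)
```

Restoring the selected sentences to their original passage order matters for the next stage, since the key-elements graph is built over the extracted sentences and benefits from their natural discourse order.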
Supported by the National Key Research and Development Program of China (No. 2018YFB1005103) and the National Natural Science Foundation of China (No. 61772324) and the Postgraduate Education Innovation Project of Shanxi Province (No. 2020SY019).
Notes
- 1.
The dataset and code are available at https://github.com/jfzy-lab/GCRC.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Wang, X., Ji, Y., Li, R. (2020). Key-Elements Graph Constructed with Evidence Sentence Extraction for Gaokao Chinese. In: Zhu, X., Zhang, M., Hong, Y., He, R. (eds) Natural Language Processing and Chinese Computing. NLPCC 2020. Lecture Notes in Computer Science(), vol 12431. Springer, Cham. https://doi.org/10.1007/978-3-030-60457-8_33
DOI: https://doi.org/10.1007/978-3-030-60457-8_33
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-60456-1
Online ISBN: 978-3-030-60457-8