What Linguistic Information Does Reading Comprehension Require?

  • Conference paper
  • In: Knowledge Graph and Semantic Computing: Knowledge Graph and Cognitive Intelligence (CCKS 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1356)

Abstract

Machine comprehension is one of the primary goals of Artificial Intelligence (AI) and Natural Language Processing (NLP). Assessing the difficulty level of machine reading comprehension (MRC) questions is important for building accurate MRC systems. To tackle this problem, we propose a novel idea: assess the difficulty level of MRC questions according to the amount of linguistic information required to answer them. Specifically, we systematically analyze and compare the performance of each BERT layer representation per question type on MRC datasets, and highlight the characteristics of the datasets according to the linguistic information captured at different layers. Our extensive analysis suggests that the superficial categories (or question types) of MRC questions do not directly reflect their difficulty levels, and that it is possible to analyze the difficulty of MRC questions according to the amount of linguistic information required.
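As a rough illustration of the layer-wise analysis described above, the following sketch (not the authors' implementation) extracts one representation per BERT layer for a question–passage pair using the Hugging Face transformers library; the model name, the use of the [CLS] vector, and the downstream probe are illustrative assumptions.

    # Minimal sketch, assuming the Hugging Face `transformers` library and
    # bert-base-uncased; not the authors' code.
    import torch
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
    model.eval()

    def layer_representations(question: str, passage: str):
        """Return one [CLS] vector per layer (embedding layer + 12 transformer layers)."""
        inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)
        with torch.no_grad():
            outputs = model(**inputs)
        # outputs.hidden_states is a tuple of 13 tensors of shape (1, seq_len, hidden_size)
        return [h[0, 0] for h in outputs.hidden_states]

    reps = layer_representations(
        "Who wrote Hamlet?",
        "Hamlet is a tragedy written by William Shakespeare.",
    )
    print(len(reps), reps[0].shape)  # 13 vectors, each of dimension 768

Feeding each per-layer vector into a simple probe (for example, an answer classifier) and grouping its accuracy by question type would yield the kind of per-layer, per-question-type comparison the abstract refers to.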

Notes

  1. For a given question, if the first two or three categories can all answer it, we assign it to the first category, because the linguistic information in the first category is already sufficient to solve the question; the other cases are handled analogously (see the sketch below).
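A hypothetical illustration of this assignment rule: a question is labelled with the earliest category whose linguistic information already suffices, even when later categories would also suffice. The function name and category encoding below are assumptions made for illustration only.

    # Hypothetical sketch of the footnote's rule; the category order and encoding are assumed.
    def assign_category(sufficient_by_category):
        """Given, per category (in order), whether its linguistic information suffices
        to answer the question, return the first sufficient category (1-indexed)."""
        for index, sufficient in enumerate(sufficient_by_category, start=1):
            if sufficient:
                return index
        return None  # no single category suffices

    # Categories 1 and 2 can both answer the question, so it belongs to category 1.
    print(assign_category([True, True, False]))  # -> 1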

Acknowledgement

We thank the anonymous reviewers for their helpful comments and suggestions. This work is supported by the National Natural Science Foundation of China (No. 61936012, No. 61772324).

Author information

Corresponding author

Correspondence to Ru Li.


Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Guan, Y., Li, R., Guo, S. (2021). What Linguistic Information Does Reading Comprehension Require? In: Chen, H., Liu, K., Sun, Y., Wang, S., Hou, L. (eds) Knowledge Graph and Semantic Computing: Knowledge Graph and Cognitive Intelligence. CCKS 2020. Communications in Computer and Information Science, vol 1356. Springer, Singapore. https://doi.org/10.1007/978-981-16-1964-9_20

  • DOI: https://doi.org/10.1007/978-981-16-1964-9_20

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-1963-2

  • Online ISBN: 978-981-16-1964-9

  • eBook Packages: Computer Science, Computer Science (R0)
