Boosting QA Performance Through SA-Net and AA-Net with the Read+Verify Framework

  • Conference paper
In: Data Science and Machine Learning (AusDM 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1943)

Abstract

This paper proposes an ensemble model for the Stanford Question Answering Dataset (SQuAD) that aims to improve performance over baseline models such as ALBERT and ELECTRA. The proposed ensemble incorporates Sentence Attention (SA-Net) and Answer Attention (AA-Net) components, which use attention mechanisms to emphasize important information in sentences and answers, respectively. The model also adopts a Read+Verify architecture: in the Read stage, the model focuses on accurately predicting the answer text, while in the Verify stage, it determines the presence or absence of an answer by estimating the probability that an answer exists. To enrich the training data, data augmentation techniques are applied, including Synonym Replacement and Random Insertion. Experimental results demonstrate significant improvements over the ALBERT and ELECTRA baselines, highlighting the effectiveness of the proposed ensemble model on SQuAD.
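
The Read+Verify decision rule described above can be made concrete. The paper does not publish code, so the following is a minimal sketch under stated assumptions: the Read stage yields per-token start/end logits for the answer span, and the Verify stage yields a scalar probability that an answer exists. The function name `read_verify_decision` and parameters such as `threshold` and `top_k` are illustrative, not from the paper.

```python
import numpy as np

def read_verify_decision(start_logits, end_logits, has_answer_prob,
                         threshold=0.5, max_answer_len=30, top_k=20):
    """Combine a Read-stage span prediction with a Verify-stage
    answerability probability. Returns (start, end) token indices,
    or None to abstain (a SQuAD 2.0-style "no answer")."""
    # Verify stage: abstain when the verifier's answerability
    # probability falls below the decision threshold.
    if has_answer_prob < threshold:
        return None

    # Read stage: search the top-k start/end positions for the
    # highest-scoring legal span (start <= end, bounded length).
    starts = np.argsort(start_logits)[::-1][:top_k]
    ends = np.argsort(end_logits)[::-1][:top_k]
    best_score, best_span = -np.inf, None
    for s in starts:
        for e in ends:
            if s <= e < s + max_answer_len:
                score = start_logits[s] + end_logits[e]
                if score > best_score:
                    best_score, best_span = score, (int(s), int(e))
    return best_span
```

The two augmentation operations named in the abstract follow the EDA (easy data augmentation) recipe. Below is a simplified sketch, with a plain dictionary standing in for a WordNet-style synonym lookup; the `synonyms` table and function names are assumptions for illustration only.

```python
import random

def synonym_replacement(tokens, synonyms, n=1):
    """Replace up to n tokens that have an entry in the synonym table."""
    out = list(tokens)
    candidates = [i for i, t in enumerate(out) if t in synonyms]
    random.shuffle(candidates)
    for i in candidates[:n]:
        out[i] = random.choice(synonyms[out[i]])
    return out

def random_insertion(tokens, synonyms, n=1):
    """Insert a synonym of a randomly chosen token at a random
    position in the sequence, repeated n times."""
    out = list(tokens)
    for _ in range(n):
        candidates = [t for t in out if t in synonyms]
        if not candidates:
            break
        word = random.choice(candidates)
        out.insert(random.randrange(len(out) + 1),
                   random.choice(synonyms[word]))
    return out
```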

L. Tang and Q. Qi—These authors contributed equally to this work.

Author information

Corresponding author

Correspondence to Qinghua Shang.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Tang, L. et al. (2024). Boosting QA Performance Through SA-Net and AA-Net with the Read+Verify Framework. In: Benavides-Prado, D., Erfani, S., Fournier-Viger, P., Boo, Y.L., Koh, Y.S. (eds) Data Science and Machine Learning. AusDM 2023. Communications in Computer and Information Science, vol 1943. Springer, Singapore. https://doi.org/10.1007/978-981-99-8696-5_6

  • DOI: https://doi.org/10.1007/978-981-99-8696-5_6

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8695-8

  • Online ISBN: 978-981-99-8696-5

  • eBook Packages: Computer Science, Computer Science (R0)
