Abstract
Extractive Question Answering (EQA) has attracted considerable attention in recent years, and Pre-trained Language Models (PLMs) are widely adopted as its encoders. However, PLMs typically take token embeddings as their initial input and rely on attention mechanisms to extract contextual representations. In this paper, we propose a simple yet comprehensive framework, termed Perturbation For Alignment (PFA), to investigate variations in token embeddings. The resulting encoder is robust to such embedding variations and hence benefits subsequent EQA tasks. Specifically, PFA consists of two general modules: embedding perturbation (a transformation that produces embedding variations) and semantic alignment (which enforces similarity between the representations of the original and perturbed embeddings). Furthermore, the framework is flexible enough to admit several alignment strategies with different interpretations. We evaluate PFA on four highly competitive EQA benchmarks, where it consistently improves state-of-the-art models.
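To make the two modules concrete, below is a minimal sketch of the idea, assuming a HuggingFace-style PyTorch encoder that accepts an `inputs_embeds` argument. The Gaussian-noise perturbation and the per-token cosine alignment loss are illustrative placeholders only, not the paper's exact formulation, which admits several alignment strategies.

```python
# Minimal sketch of perturbation-for-alignment (PFA), assuming a
# HuggingFace-style PyTorch encoder. The Gaussian perturbation and the
# cosine alignment loss are illustrative choices, not the paper's
# exact formulation.
import torch
import torch.nn.functional as F

def pfa_alignment_loss(encoder, token_embeddings, noise_std=0.01):
    """Encourage similar contextual representations for original and
    perturbed token embeddings."""
    # Embedding perturbation: one possible transformation that
    # produces embedding variations.
    perturbed = token_embeddings + noise_std * torch.randn_like(token_embeddings)

    # Contextual representations of the original and perturbed inputs.
    original_repr = encoder(inputs_embeds=token_embeddings).last_hidden_state
    perturbed_repr = encoder(inputs_embeds=perturbed).last_hidden_state

    # Semantic alignment: penalise per-token dissimilarity
    # (1 - cosine similarity), averaged over tokens and batch.
    return (1.0 - F.cosine_similarity(original_repr, perturbed_repr, dim=-1)).mean()
```

In training, such an alignment term would typically be added to the standard EQA span-extraction loss with a weighting hyperparameter.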
Notes
1. Available from https://github.com/yeonsw/BLANC.
2. Available from https://github.com/nng555/ssmba.
3. Available from https://github.com/seanie12/SWEP.
4. Available from https://github.com/Nardien/KALA.
Acknowledgments
This work was partially supported by the Australian Research Council Discovery Project (DP210101426) and the AEGiS Advance Grant (888/008/268), University of Wollongong.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Yao, X., Ma, J., Hu, X., Yang, J., Guo, Y., Liu, J. (2023). Towards Robust Token Embeddings for Extractive Question Answering. In: Zhang, F., Wang, H., Barhamgi, M., Chen, L., Zhou, R. (eds) Web Information Systems Engineering – WISE 2023. WISE 2023. Lecture Notes in Computer Science, vol 14306. Springer, Singapore. https://doi.org/10.1007/978-981-99-7254-8_7
DOI: https://doi.org/10.1007/978-981-99-7254-8_7
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-7253-1
Online ISBN: 978-981-99-7254-8
eBook Packages: Computer Science (R0)