Abstract
Reasoning is an essential ability in QA systems, and integrating it into such systems has been the subject of considerable research. A prevalent strategy incorporates domain knowledge graphs via Graph Neural Networks (GNNs) to augment the performance of pre-trained language models. However, this approach focuses primarily on individual nodes and fails to fully leverage the extensive relational information present in the graph. In this paper, we present a novel model, the Path-Aware Cross-Attention Network (PCN), which incorporates meta-paths that carry relational information. PCN features a multi-layered, bidirectional cross-attention mechanism that exchanges information between the textual representation and the path representation at each layer. By injecting rich inference information into the language model and contextual semantic information into the path representation, this mechanism enhances the overall effectiveness of the model. Furthermore, we incorporate a self-learning mechanism for path scoring, enabling a weighted evaluation of the paths. We assess our model on three benchmark datasets covering commonsense question answering (CommonsenseQA, OpenBookQA) and medical question answering (MedQA-USMLE). The experimental results validate the efficacy of the proposed model.
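To make the described mechanism concrete, the following PyTorch sketch shows one bidirectional cross-attention layer (text and path streams attending to each other) and a learned path-scoring head. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the class names, head count, normalization placement, and the softmax-weighted pooling used for path scoring are all assumptions.

```python
import torch
import torch.nn as nn


class BidirectionalCrossAttentionLayer(nn.Module):
    """One layer of text <-> path information exchange (hypothetical sketch).

    Queries from one stream attend over the other stream, with residual
    connections and layer normalization on each stream.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Text attends to paths: queries from text, keys/values from paths.
        self.text_to_path = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Paths attend to text: queries from paths, keys/values from text.
        self.path_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_text = nn.LayerNorm(dim)
        self.norm_path = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, path: torch.Tensor):
        # text: (batch, seq_len, dim); path: (batch, num_paths, dim)
        text_upd, _ = self.text_to_path(query=text, key=path, value=path)
        path_upd, _ = self.path_to_text(query=path, key=text, value=text)
        # Residuals keep each stream's original signal alongside the exchange.
        return self.norm_text(text + text_upd), self.norm_path(path + path_upd)


class PathScorer(nn.Module):
    """Learned scalar score per meta-path, used as soft weights
    (an assumed reading of the paper's self-learning path scoring)."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, path: torch.Tensor) -> torch.Tensor:
        # path: (batch, num_paths, dim) -> pooled: (batch, dim)
        weights = torch.softmax(self.score(path), dim=1)  # normalize over paths
        return (weights * path).sum(dim=1)
```

One plausible way to combine the two pieces is to stack several such layers, then pool the path stream with the scoring head and feed the joint text-path representation to the answer classifier.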
Acknowledgements
We thank the reviewers for their insightful comments and valuable suggestions. This study is partially supported by the National Key R&D Program of China (2021ZD0113402), National Natural Science Foundation of China (62276082), Major Key Project of PCL (PCL2021A06), Shenzhen Soft Science Research Program Project (RKX20220705152815035), Shenzhen Science and Technology Research and Development Fund for Sustainable Development Project (GXWD20231128103819001, KCXFZ20201221173613036), and the Fundamental Research Fund for the Central Universities (HIT.DZJJ.2023117).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.