Abstract
The Knowledge Graph Question Answering (KGQA) task underpins information retrieval systems, intelligent customer service systems, and similar applications, and has therefore attracted a large number of researchers. Although deep learning models have further improved KGQA performance, several difficulties remain, such as how to represent questions and answers and how to construct the candidate path set efficiently. In this paper, we propose a complete approach to the KGQA task. First, we devise a novel candidate path generation process that improves computational efficiency by reducing the number of candidate paths per question while preserving the accuracy of the results. Second, considering the diversity of textual expression in questions and the variability of candidate paths, we present four models that learn semantic features of Chinese sequences with different focuses. Finally, to combine the advantages of the presented models, we propose a dedicated fusion policy that selects the most suitable path from the path sets predicted by the individual models. We conduct experiments on the Chinese Knowledge Base Question Answering (CKBQA) dataset. The results show that our approach outperforms the best system published in the CCKS 2019 competition.
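Read end to end, the abstract describes a three-stage pipeline: generate a pruned candidate path set for each question, score every candidate with several semantic matching models, and fuse the per-model scores to pick one answer path. The short Python sketch below illustrates only that overall flow; the names and interfaces (answer_question, generate_candidate_paths, scoring_models, model_weights) are hypothetical placeholders, and the weighted-sum fusion is just one plausible reading of the fusion policy, not the paper's actual implementation.

# Illustrative sketch of the three-stage pipeline described in the abstract.
# All function names, model callables, and weights are assumed placeholders.

from typing import Callable, List, Sequence


def answer_question(
    question: str,
    generate_candidate_paths: Callable[[str], List[str]],   # stage 1: pruned candidate path set
    scoring_models: Sequence[Callable[[str, str], float]],  # stage 2: semantic matching models
    model_weights: Sequence[float],                         # stage 3: fusion policy (assumed weighted sum)
) -> str:
    """Return the candidate KG path judged most likely to answer the question."""
    candidates = generate_candidate_paths(question)
    if not candidates:
        raise ValueError("no candidate paths generated for this question")

    best_path, best_score = candidates[0], float("-inf")
    for path in candidates:
        # Combine each model's question-path similarity score using the fusion weights.
        fused = sum(w * model(question, path) for w, model in zip(model_weights, scoring_models))
        if fused > best_score:
            best_path, best_score = path, fused
    return best_path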
Acknowledgment
This work was supported by project 2018AAA0102502.
Cite this paper
Tang, M., Xiong, H., Wang, L., Lin, X. (2020). A Dynamic Answering Path Based Fusion Model for KGQA. In: Li, G., Shen, H., Yuan, Y., Wang, X., Liu, H., Zhao, X. (eds) Knowledge Science, Engineering and Management. KSEM 2020. Lecture Notes in Computer Science, vol 12274. Springer, Cham. https://doi.org/10.1007/978-3-030-55130-8_21