
A multi-granularity semantic space learning approach for cross-lingual open domain question answering


Abstract

Cross-lingual Open Domain Question Answering (cross-lingual Open-QA) has been studied since it was proposed in the mid-1990s. It can be divided into two mainstream tasks according to the training corpus used in the answer extraction stage. In the first, both the training and testing data are in the target language; in the second, the training data is in the source language and the testing data is in the target language. For a long time, the former was studied mainly through translation-based approaches. The latter emerged in 2019, when non-translation-based approaches became feasible thanks to the multilingual BERT model. The two tasks have therefore been discussed separately, which motivates our work on whether both can be addressed simultaneously without any additional transformation. We observe that the multilingual BERT model makes it possible to establish a unified framework. However, using the multilingual BERT model directly raises two problems. First, in the document retrieval stage, applying the multilingual pretrained model directly to similarity calculation yields insufficient retrieval accuracy. Second, in the answer extraction stage, answers involve different levels of abstraction with respect to the retrieved documents, which requires deeper exploration. This paper proposes a multi-granularity semantic space learning approach for cross-lingual Open-QA. It consists of a Match-Retrieval module and a Multi-granularity-Extraction module. The matching network in the retrieval module heuristically adjusts and expands the learned features to improve retrieval quality. In the answer extraction module, deep semantic features are reused at the network-structure level through cross-layer concatenation, which enables the model to learn a multi-granularity semantic space. Experimental results on two public cross-lingual Open-QA datasets show the superiority of our proposed approach over state-of-the-art methods.
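To make the cross-layer concatenation idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation): it encodes a question-passage pair with multilingual BERT, concatenates hidden states from a few shallow, middle, and deep layers into one multi-granularity token representation, and feeds it to a span-extraction head. The choice of layers (4, 8, 12), the fusion layer, and the example model name "bert-base-multilingual-cased" are assumptions made for illustration only.

```python
# Illustrative sketch only: multi-granularity answer extraction via
# cross-layer concatenation of multilingual BERT features.
# Layer indices, head sizes, and model name are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiGranularityExtractor(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased", layers=(4, 8, 12)):
        super().__init__()
        # output_hidden_states=True exposes every Transformer layer's output
        self.encoder = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        self.layers = layers  # shallow, middle, and deep layers to reuse
        hidden = self.encoder.config.hidden_size
        # Cross-layer concatenation -> one fused representation per token
        self.fuse = nn.Linear(hidden * len(layers), hidden)
        self.span_head = nn.Linear(hidden, 2)  # start / end logits

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # hidden_states[0] is the embedding layer; 1..12 are Transformer layers
        stacked = torch.cat([out.hidden_states[i] for i in self.layers], dim=-1)
        fused = torch.tanh(self.fuse(stacked))
        start_logits, end_logits = self.span_head(fused).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)

# Usage example on a French question-passage pair
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
batch = tokenizer("Qui a écrit Les Misérables ?",
                  "Les Misérables est un roman de Victor Hugo.",
                  return_tensors="pt")
model = MultiGranularityExtractor()
start, end = model(batch["input_ids"], batch["attention_mask"])
```

Concatenating layers at different depths lets the span head see both low-level lexical features and high-level semantic features at once, which is the intuition behind learning a multi-granularity semantic space.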


Notes

  1. http://research.nii.ac.jp/ntcir/index-en.html


Acknowledgements

This work was supported by NSFC Funding (No. 61876062).

Author information


Corresponding author

Correspondence to Lin Li.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Special Issue on Web Intelligence = Artificial Intelligence in the Connected World

Guest Editors: Yuefeng Li, Amit Sheth, Athena Vakali, and Xiaohui Tao

About this article

Cite this article

Li, L., Kong, M., Li, D. et al. A multi-granularity semantic space learning approach for cross-lingual open domain question answering. World Wide Web 24, 1065–1088 (2021). https://doi.org/10.1007/s11280-021-00879-2

