DOI: 10.1145/3578741.3578762 · Research article

Semantic Representation Based on AMR Graph for Document Level Question Generation

Published: 06 March 2023

ABSTRACT

Question Generation (QG) is the task of generating questions from reference sentences and answers specified in those sentences. Intuitively, as a type of semantic representation, Abstract Meaning Representation (AMR) can help question generation by enforcing meaning preservation and mitigating data sparsity, since it abstracts away from syntactic representations. However, little work has been done to leverage the rich semantics in AMR for question generation. We therefore design the AMR Question Generation (AMRQG) model, which constructs document-level AMR graphs from sentence-level AMR graphs. The model addresses the alignment between AMR graph nodes and words in the text sequence at both the graph level and the node level. To our knowledge, we are the first to leverage AMR representations for question generation with document-level input. Automatic evaluation results demonstrate that incorporating AMR as additional knowledge significantly improves a sequence-to-sequence natural question generation model, showing our model's superiority.
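The two steps the abstract describes, merging sentence-level AMR graphs into one document-level graph and aligning graph nodes to words in the text, could be sketched as below. This is an illustrative assumption, not the paper's actual algorithm: the triple-based graph encoding, the per-sentence node namespacing, the shared document root, and the prefix-based alignment heuristic are all hypothetical simplifications.

```python
# Hypothetical sketch of document-level AMR graph construction and
# node-word alignment. The representation (root, list of
# (source, relation, target) triples) and the heuristics below are
# assumptions for illustration only.

def merge_amr_graphs(sentence_graphs):
    """Combine per-sentence AMR graphs into one document-level graph
    by attaching every sentence root to a shared 'doc' node."""
    doc_edges = []
    for i, (root, edges) in enumerate(sentence_graphs):
        prefix = f"s{i}."  # namespace nodes so sentences don't collide
        doc_edges.append(("doc", ":snt", prefix + root))
        for src, rel, tgt in edges:
            doc_edges.append((prefix + src, rel, prefix + tgt))
    return "doc", doc_edges

def align_nodes_to_tokens(edges, tokens):
    """Naive node-level alignment: link an AMR concept to the first
    token sharing a prefix with it (e.g. 'want-01' matches 'wants')."""
    alignment = {}
    nodes = {n for src, _, tgt in edges for n in (src, tgt)}
    for node in nodes:
        concept = node.split(".")[-1].split("-")[0]  # strip sense suffix
        for j, tok in enumerate(tokens):
            t = tok.lower()
            if t.startswith(concept) or concept.startswith(t):
                alignment[node] = j
                break
    return alignment

# Two toy sentence-level graphs: "The boy wants to go." / "The girl sleeps."
g1 = ("want-01", [("want-01", ":ARG0", "boy"),
                  ("want-01", ":ARG1", "go-02")])
g2 = ("sleep-01", [("sleep-01", ":ARG0", "girl")])

root, edges = merge_amr_graphs([g1, g2])
align = align_nodes_to_tokens(edges, ["The", "boy", "wants", "to", "go"])
```

In a real system the merge step would also add cross-sentence edges (e.g. from coreference), and alignment would come from a trained AMR aligner rather than string prefixes; this sketch only shows where those components slot in.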


Published in
      MLNLP '22: Proceedings of the 2022 5th International Conference on Machine Learning and Natural Language Processing
      December 2022
      406 pages
ISBN: 9781450399067
DOI: 10.1145/3578741

      Copyright © 2022 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

