ABSTRACT
Question Generation (QG) is the task of generating questions from reference sentences and answers specified in those sentences. Intuitively, as one type of semantic representation, Abstract Meaning Representation (AMR) can help question generation by enforcing meaning preservation and, because it abstracts away from syntactic representations, by mitigating data sparsity. However, little work has been done to leverage the rich semantics in AMR for question generation. We therefore design an AMR Question Generation (AMRQG) model that constructs document-level AMR graphs from sentence-level AMR graphs. The model resolves the alignment between AMR graph nodes and words in the text at both the graph level and the node level. To our knowledge, we are the first to leverage AMR representations in question generation with document-level input. Automatic evaluation results demonstrate that incorporating AMR as additional knowledge significantly improves a sequence-to-sequence natural question generation model and show our model's superiority.
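The abstract describes constructing a document-level AMR graph from sentence-level AMR graphs. As a minimal illustrative sketch (not the authors' actual method), one common strategy is to introduce an artificial document root node and attach each sentence's AMR root to it, prefixing node identifiers with the sentence index to avoid collisions. All names and the dict-based graph encoding below are assumptions for illustration:

```python
def merge_amr_graphs(sentence_graphs):
    """Merge sentence-level AMR graphs into one document-level graph.

    Each sentence graph is a dict with:
      "root":  id of the sentence's top node
      "nodes": {node_id: concept}
      "edges": [(src, relation, dst), ...]
    Node ids are prefixed with the sentence index so that the same
    variable name in two sentences maps to two distinct nodes.
    """
    doc = {"root": "doc", "nodes": {"doc": "document"}, "edges": []}
    for i, g in enumerate(sentence_graphs):
        prefix = f"s{i}."
        for nid, concept in g["nodes"].items():
            doc["nodes"][prefix + nid] = concept
        for src, rel, dst in g["edges"]:
            doc["edges"].append((prefix + src, rel, prefix + dst))
        # attach each sentence root to the artificial document root
        doc["edges"].append(("doc", f":snt{i + 1}", prefix + g["root"]))
    return doc

# Two toy sentence-level AMR graphs ("The boy wants ...", "The boy goes ...")
s1 = {"root": "w", "nodes": {"w": "want-01", "b": "boy"},
      "edges": [("w", ":ARG0", "b")]}
s2 = {"root": "g", "nodes": {"g": "go-02", "b": "boy"},
      "edges": [("g", ":ARG0", "b")]}
doc_graph = merge_amr_graphs([s1, s2])
```

Linking cross-sentence coreferent nodes (the two `boy` concepts above) rather than keeping them separate is where the node-level alignment the abstract mentions would come into play.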