
Query Focused Abstractive Summarization via Incorporating Query Relevance and Transfer Learning with Transformer Models

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12109)

Abstract

In the Query Focused Abstractive Summarization (QFAS) task, the goal is to generate abstractive summaries from a source document that are relevant to a given query. In this paper, we propose a new transfer learning technique that utilizes a pre-trained transformer architecture for the QFAS task on the Debatepedia dataset. We find that the Diversity Driven Attention (DDA) model, which was the first model applied to this dataset, only performs well when the dataset is augmented by creating additional training instances. In contrast, without requiring any in-domain data augmentation, our proposed approach outperforms the DDA model and sets a new state-of-the-art result.
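
The following is a minimal, illustrative sketch (not the authors' released code; see the repository linked in note 2) of the core idea of incorporating query relevance with a pre-trained transformer: the query is prepended to the source document so that the model's attention over the source is conditioned on the query tokens. The BART summarization checkpoint used here is an assumption for illustration only; the paper itself fine-tunes a BERTSUM-based model.

    # Illustrative sketch only: query relevance via query-document concatenation
    # fed to an off-the-shelf pre-trained transformer summarizer (assumed
    # checkpoint "facebook/bart-large-cnn"; not the paper's QR-BERTSUM-TL model).
    from transformers import BartTokenizer, BartForConditionalGeneration

    model_name = "facebook/bart-large-cnn"
    tokenizer = BartTokenizer.from_pretrained(model_name)
    model = BartForConditionalGeneration.from_pretrained(model_name)

    # Hypothetical query/document pair in the style of Debatepedia.
    query = "Is algae biofuel a viable alternative energy source?"
    document = ("Proponents argue that algae grows quickly and absorbs carbon dioxide, "
                "while critics point to the high production costs of algae-based fuels ...")

    # Prepend the query so the encoder (and hence the decoder's cross-attention)
    # sees the query tokens together with the document tokens.
    inputs = tokenizer(query + " " + tokenizer.sep_token + " " + document,
                       return_tensors="pt", truncation=True, max_length=512)
    summary_ids = model.generate(inputs["input_ids"], num_beams=4,
                                 max_length=60, early_stopping=True)
    print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))

The actual model and training scripts for the proposed approach are available in the repository linked in note 2.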


Notes

  1. http://www.debatepedia.org/.

  2. https://github.com/tahmedge/QR-BERTSUM-TL-for-QFAS.

  3. https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset.

  4. https://git.io/JeBZX.

  5. We used the following package for the ROUGE calculation: https://pypi.org/project/pyrouge/ (a brief usage sketch follows these notes).
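
For completeness, here is a minimal usage sketch of the pyrouge package referenced in note 5. This is an assumption about the evaluation setup, not the authors' script: it requires the underlying ROUGE-1.5.5 Perl toolkit to be installed and configured, and the directory names and filename patterns below are placeholders.

    # Minimal pyrouge sketch: score system (generated) summaries against
    # reference summaries. Directories and filename patterns are placeholders.
    from pyrouge import Rouge155

    r = Rouge155()
    r.system_dir = "path/to/system_summaries"      # generated summaries, one file per document
    r.model_dir = "path/to/reference_summaries"    # gold summaries, one file per document
    r.system_filename_pattern = r"doc.(\d+).txt"   # the group captures the document id
    r.model_filename_pattern = "doc.#ID#.txt"      # #ID# is matched against the captured id

    output = r.convert_and_evaluate()              # runs ROUGE-1.5.5 under the hood
    scores = r.output_to_dict(output)
    print(scores["rouge_1_f_score"], scores["rouge_2_f_score"], scores["rouge_l_f_score"])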

References

  1. Baumel, T., et al.: Query focused abstractive summarization: incorporating query relevance, multi-document coverage, and summary length constraints into seq2seq models. arXiv preprint arXiv:1801.07704 (2018)

  2. Devlin, J., et al.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT, pp. 4171–4186 (2019)

  3. Lewis, M., et al.: BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461 (2019)

  4. Liu, Y., Lapata, M.: Text summarization with pretrained encoders. In: Proceedings of EMNLP-IJCNLP, pp. 3721–3731 (2019)

  5. Liu, Y., et al.: ARSA: a sentiment-aware model for predicting sales performance using blogs. In: Proceedings of SIGIR, pp. 607–614 (2007)

  6. Liu, Y., et al.: Modeling and predicting the helpfulness of online reviews. In: Proceedings of ICDM, pp. 443–452 (2008)

  7. Nallapati, R., et al.: Abstractive text summarization using sequence-to-sequence RNNs and beyond. In: Proceedings of ACL, pp. 280–290 (2016)

  8. Nema, P., et al.: Diversity driven attention model for query-based abstractive summarization. In: Proceedings of ACL, pp. 1063–1072 (2017)

  9. Pennington, J., et al.: GloVe: global vectors for word representation. In: Proceedings of EMNLP, pp. 1532–1543 (2014)

  10. Rush, A.M., et al.: A neural attention model for abstractive sentence summarization. In: Proceedings of EMNLP, pp. 379–389 (2015)

  11. See, A., et al.: Get to the point: summarization with pointer-generator networks. In: Proceedings of ACL, pp. 1073–1083 (2017)

  12. Vaswani, A., et al.: Attention is all you need. In: Proceedings of NIPS, pp. 5998–6008 (2017)

  13. Yao, J., Wan, X., Xiao, J.: Recent advances in document summarization. Knowl. Inf. Syst. 53(2), 297–336 (2017). https://doi.org/10.1007/s10115-017-1042-4


Acknowledgements

This research is supported by the Natural Sciences & Engineering Research Council (NSERC) of Canada and an ORF-RE (Ontario Research Fund-Research Excellence) award in BRAIN Alliance.

Author information

Corresponding author

Correspondence to Md Tahmid Rahman Laskar.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Laskar, M.T.R., Hoque, E., Huang, J. (2020). Query Focused Abstractive Summarization via Incorporating Query Relevance and Transfer Learning with Transformer Models. In: Goutte, C., Zhu, X. (eds) Advances in Artificial Intelligence. Canadian AI 2020. Lecture Notes in Computer Science (LNAI), vol. 12109. Springer, Cham. https://doi.org/10.1007/978-3-030-47358-7_35

  • DOI: https://doi.org/10.1007/978-3-030-47358-7_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-47357-0

  • Online ISBN: 978-3-030-47358-7

  • eBook Packages: Computer Science, Computer Science (R0)
