Neural abstractive summarization fusing by global generative topics

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Various efforts have been dedicated to automatically generating coherent, condensed and informative summaries. Most concentrate on improving the local generation capability of neural language models but do not consider global information. In practice, a summary is shaped by the full content of the source text and, in particular, guided by its core sense. To seamlessly integrate a global semantic representation into a summary generation system, we propose incorporating a neural generative topic matrix as an abstractive level of topic information. By mapping global semantics into the local generative language model, the abstractive summarizer is able to produce succinct and recapitulative words or phrases. Extensive experiments on the DUC-2004 and Gigaword datasets validate the proposed model.
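
A sketch of the idea follows. The paper's model is not reproduced on this page, but the fusion described above can be summarized as: a variational topic network infers a document-level topic vector from the source bag-of-words, and the summarization decoder conditions each word prediction on that vector in addition to its local state. The minimal PyTorch sketch below assumes an NVDM-style inference network and a single GRU decoder step; all module names, dimensions and wiring are illustrative, not the authors' released implementation (see the GLEAM repository linked in the notes).

```python
# Minimal, illustrative sketch (not the authors' code): fuse a document-level
# topic vector, inferred from the source bag-of-words, into a seq2seq
# decoder's vocabulary projection.
import torch
import torch.nn as nn


class TopicInference(nn.Module):
    """NVDM-style inference network: bag-of-words -> global topic vector."""

    def __init__(self, vocab_size, n_topics, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, n_topics)
        self.logvar = nn.Linear(hidden, n_topics)

    def forward(self, bow):
        h = self.mlp(bow)
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)                    # reparameterisation trick
        theta = torch.softmax(mu + torch.exp(0.5 * logvar) * eps, dim=-1)
        return theta, mu, logvar                      # mu/logvar feed the KL term


class TopicFusedDecoderStep(nn.Module):
    """One decoder step whose word distribution also conditions on theta."""

    def __init__(self, vocab_size, n_topics, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.cell = nn.GRUCell(emb, hidden)
        self.out = nn.Linear(hidden + n_topics, vocab_size)

    def forward(self, prev_word, state, theta):
        state = self.cell(self.embed(prev_word), state)
        logits = self.out(torch.cat([state, theta], dim=-1))
        return logits, state


# Toy usage with random data.
vocab, n_topics, hidden = 1000, 50, 256
topic_net = TopicInference(vocab, n_topics, hidden)
decoder = TopicFusedDecoderStep(vocab, n_topics, hidden=hidden)

bow = torch.rand(2, vocab)                            # source bag-of-words counts
theta, mu, logvar = topic_net(bow)                    # global topic vector per document
prev = torch.zeros(2, dtype=torch.long)               # e.g. <bos> token ids
state = torch.zeros(2, hidden)
logits, state = decoder(prev, state, theta)           # (2, vocab) next-word scores
```

In this sketch the topic vector is simply concatenated with the decoder state before the vocabulary projection; other fusion choices (gating, or mixing a topic-conditioned word distribution with the language-model distribution) fit the same interface.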

Notes

  1. We paired the first sentence of each article with its headline to form sentence–headline summary pairs, and then pre-processed the pairs with PTB tokenization.

  2. The splits of Gigaword for training can be found at https://github.com/facebook/NAMAS.

  3. Obtained from https://github.com/harvardnlp/sent-summary.

  4. It can be downloaded from http://duc.nist.gov/ with permission.

  5. The ROUGE evaluation is the same as in [22], i.e. the official ROUGE script with the options -m -n 2 -w 1.2; a hypothetical invocation is sketched after these notes.

  6. https://github.com/AEGISEDGE/GLEAM.

  7. Code is from: https://github.com/ysmiao/nvdm.
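
As mentioned in note 5 above, the options -m -n 2 -w 1.2 belong to the official ROUGE-1.5.5 Perl script. The snippet below is a hypothetical invocation, not taken from the paper: the script path, the -e data directory, the -a flag and the settings.xml configuration file are placeholders that depend on the local ROUGE installation.

```python
# Hypothetical invocation of the official ROUGE-1.5.5 script with the options
# quoted in note 5; all paths and the settings.xml file are placeholders.
import subprocess

cmd = [
    "perl", "ROUGE-1.5.5.pl",
    "-e", "ROUGE-1.5.5/data",   # the script's data directory (assumption)
    "-n", "2",                  # compute ROUGE-1 and ROUGE-2
    "-m",                       # apply Porter stemming
    "-w", "1.2",                # weighting factor for ROUGE-W
    "-a",                       # evaluate all systems listed in the config (assumption)
    "settings.xml",             # config mapping system summaries to references
]
print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```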

References

  1. Bing L, Li P, Liao Y, Lam W, Guo W, Passonneau R (2015) Abstractive multi-document summarization via phrase selection and merging. In: Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing, vol 1, pp 1587–1597

  2. Cao Z, Luo C, Li W, Li S (2017) Joint copying and restricted generation for paraphrase. In: AAAI, pp 3152–3158

  3. Cao Z, Wei F, Li W, Li S (2017) Faithful to the original: fact aware neural abstractive summarization. Preprint. arXiv:1711.04434

  4. Chopra S, Auli M, Rush AM (2016) Abstractive sentence summarization with attentive recurrent neural networks. In: Conference of the North American chapter of the association for computational linguistics: human language technologies, pp 93–98

  5. Doersch C (2016) Tutorial on variational autoencoders. Preprint. arXiv:1606.05908

  6. Dorr B, Zajic D, Schwartz R (2003) Hedge trimmer: a parse-and-trim approach to headline generation. In: HLT-NAACL, pp 1–8

  7. Genest PE, Lapalme G (2011) Framework for abstractive summarization using text-to-text generation. In: Proceedings of the workshop on monolingual text-to-text generation. Association for Computational Linguistics, pp 64–73

  8. Gu J, Lu Z, Li H, Li VOK (2016) Incorporating copying mechanism in sequence-to-sequence learning. In: Proceedings of the 54th annual meeting of the association for computational linguistics, ACL 2016

  9. Gülçehre Ç, Ahn S, Nallapati R, Zhou B, Bengio Y (2016) Pointing the unknown words. In: Proceedings of the 54th annual meeting of the association for computational linguistics, ACL 2016, August 7–12, 2016, vol 1. Long Papers. The Association for Computational Linguistics, Berlin

  10. Hinton GE, Dayan P, Frey BJ, Neal RM (1995) The “wake-sleep” algorithm for unsupervised neural networks. Science 268(5214):1158–1161

  11. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780

  12. Kingma DP, Welling M (2013) Auto-encoding variational Bayes. Preprint. arXiv:1312.6114

  13. Koehn P, Hoang H, Birch A, Callison-Burch C, Federico M, Bertoldi N, Cowan B, Shen W, Moran C, Zens R, et al. (2007) Moses: open source toolkit for statistical machine translation. In: Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions. Association for Computational Linguistics, pp 177–180

  14. Li P, Lam W, Bing L, Wang Z (2017) Deep recurrent generative decoder for abstractive text summarization. In: Proceedings of the 2017 conference on empirical methods in natural language processing, EMNLP 2017, pp 2091–2100

  15. Lin CY (2004) ROUGE: a package for automatic evaluation of summaries. In: Text summarization branches out: proceedings of the ACL-04 workshop, vol 8, Barcelona

  16. Miao Y, Grefenstette E, Blunsom P (2017) Discovering discrete latent topics with neural variational inference. In: Precup D, Teh YW (eds) Proceedings of the 34th international conference on machine learning, ICML 2017, Sydney, NSW, Australia, 6–11 August 2017, proceedings of machine learning research, vol 70. PMLR, pp 2410–2419

  17. Miao Y, Yu L, Blunsom P (2016) Neural variational inference for text processing. In: International conference on machine learning, pp 1727–1736

  18. Nallapati R, Zhou B, dos Santos CN, Gülçehre Ç, Xiang B (2016) Abstractive text summarization using sequence-to-sequence RNNs and beyond. In: Proceedings of the 20th SIGNLL conference on computational natural language learning, CoNLL 2016, Berlin, Germany, August 11–12, 2016, pp 280–290

  19. Parker R, Graff D, Kong J, Chen K, Maeda K (2011) English Gigaword fifth edition, linguistic data consortium. Technical report, Linguistic Data Consortium, Philadelphia

  20. Pascanu R, Mikolov T, Bengio Y (2013) On the difficulty of training recurrent neural networks. In: International conference on machine learning, pp 1310–1318

  21. Paulus R, Xiong C, Socher R (2017) A deep reinforced model for abstractive summarization. Preprint. arXiv:1705.04304

  22. Rush AM, Chopra S, Weston J (2015) A neural attention model for abstractive sentence summarization. In: Proceedings of the 2015 conference on empirical methods in natural language processing, EMNLP 2015, Lisbon, Portugal, September 17–21, 2015, pp 379–389

  23. See A, Liu PJ, Manning CD (2017) Get to the point: summarization with pointer-generator networks. In: Proceedings of the 55th annual meeting of the association for computational linguistics (volume 1: long papers), vol 1, pp 1073–1083

  24. Srivastava A, Sutton C (2017) Autoencoding variational inference for topic models. Preprint. arXiv:1703.01488

  25. Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. Adv Neural Inf Process Syst 4:3104–3112

  26. Takase S, Suzuki J, Okazaki N, Hirao T, Nagata M (2016) Neural headline generation on abstract meaning representation. In: EMNLP, pp 1054–1059

  27. Tan J, Wan X, Xiao J (2017) From neural sentence summarization to headline generation: a coarse-to-fine approach. In: Proceedings of the 26th international joint conference on artificial intelligence, IJCAI-17, pp 4109–4115

  28. Wang L, Jiang J, Chieu HL, Ong CH, Song D, Liao L (2017) Can syntax help? Improving an LSTM-based sentence compression model for new domains. In: Proceedings of the 55th annual meeting of the association for computational linguistics, vol 1, pp 1385–1393

  29. Wang W, Gan Z, Wang W, Shen D, Huang J, Ping W, Satheesh S, Carin L (2017) Topic compositional neural language model. Preprint. arXiv:1712.09783

  30. Zhang H, Chow TW, Wu QJ (2016) Organizing books and authors by multilayer SOM. IEEE Trans Neural Netw Learn Syst 27(12):2537–2550

  31. Zhang H, Li J, Ji Y, Yue H (2017) Understanding subtitles by character-level sequence-to-sequence learning. IEEE Trans Ind Inform 13(2):616–624

  32. Zhang H, Wang S, Zhao M, Xu X, Ye Y (2018) Locality reconstruction models for book representation. IEEE Trans Knowl Data Eng 30(10):1873–1886

  33. Zhou Q, Yang N, Wei F, Zhou M (2017) Selective encoding for abstractive sentence summarization. In: Proceedings of the 55th annual meeting of the association for computational linguistics, ACL 2017, pp 1095–1104

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61602036 and 61751201) and by the Research Foundation of Beijing Municipal Science & Technology Commission (Grant No. Z181100008918002).

Author information

Corresponding author

Correspondence to Yang Gao.

Ethics declarations

Conflict of interest

All authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Gao, Y., Wang, Y., Liu, L. et al. Neural abstractive summarization fusing by global generative topics. Neural Comput & Applic 32, 5049–5058 (2020). https://doi.org/10.1007/s00521-018-3946-7
