
Abstractive Summarization with the Aid of Extractive Summarization

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10987)

Abstract

Abstractive and extractive methods are currently the two main approaches to automatic document summarization. To exploit their relatedness and combine the advantages of both, we propose in this paper a general framework for abstractive summarization that incorporates extractive summarization as an auxiliary task. In particular, our framework is composed of a shared hierarchical document encoder, an attention-based decoder for abstractive summarization, and an extractor for sentence-level extractive summarization. Learning the two tasks jointly with the shared encoder allows us to better capture the semantics of the document. Moreover, we constrain the attention learned in the abstractive task with the salience estimated in the extractive task to strengthen their consistency. Experiments on the CNN/DailyMail dataset demonstrate that both the auxiliary task and the attention constraint contribute to significant performance improvements, and that our model is comparable to state-of-the-art abstractive models.
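The sketch below illustrates how such a joint architecture could be wired up, assuming PyTorch and a deliberately simplified formulation: a shared hierarchical encoder (word- and sentence-level bidirectional GRUs), an extractor head that scores sentence salience, and an attention-based decoder whose sentence-level attention is pushed toward the extractor's salience distribution by a consistency term. All module names, dimensions, pooling choices, and the KL-based consistency loss are illustrative assumptions, not the authors' exact model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSummarizer(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared hierarchical document encoder: word-level then sentence-level BiGRU.
        self.word_enc = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.sent_enc = nn.GRU(2 * hid_dim, hid_dim, batch_first=True, bidirectional=True)
        # Extractor head: one salience logit per sentence (auxiliary extractive task).
        self.extractor = nn.Linear(2 * hid_dim, 1)
        # Abstractive decoder with attention over sentence representations.
        self.decoder = nn.GRUCell(emb_dim + 2 * hid_dim, 2 * hid_dim)
        self.attn = nn.Linear(4 * hid_dim, 1)
        self.out = nn.Linear(2 * hid_dim, vocab_size)

    def encode(self, doc):
        # doc: (batch, n_sents, n_words) token ids.
        b, s, w = doc.size()
        word_out, _ = self.word_enc(self.embed(doc.view(b * s, w)))
        sent_vecs = word_out.mean(dim=1).view(b, s, -1)    # mean-pool words per sentence
        sent_out, _ = self.sent_enc(sent_vecs)              # (b, s, 2*hid_dim)
        return sent_out

    def forward(self, doc, summary):
        sent_out = self.encode(doc)
        salience = self.extractor(sent_out).squeeze(-1)     # (b, n_sents) sentence logits
        state = sent_out.mean(dim=1)                         # decoder initial state
        logits, attn_dists = [], []
        for step in range(summary.size(1)):                  # teacher forcing over summary tokens
            emb = self.embed(summary[:, step])
            scores = self.attn(torch.cat(
                [sent_out, state.unsqueeze(1).expand_as(sent_out)], dim=-1)).squeeze(-1)
            attn = F.softmax(scores, dim=-1)                 # sentence-level attention
            ctx = torch.bmm(attn.unsqueeze(1), sent_out).squeeze(1)
            state = self.decoder(torch.cat([emb, ctx], dim=-1), state)
            logits.append(self.out(state))
            attn_dists.append(attn)
        return torch.stack(logits, dim=1), salience, torch.stack(attn_dists, dim=1)

def joint_loss(model, doc, summary, target, ext_labels, weights=(1.0, 0.5, 0.1)):
    # Abstractive NLL + extractive BCE + attention/salience consistency (KL divergence).
    logits, salience, attn_dists = model(doc, summary)
    abs_loss = F.cross_entropy(logits.view(-1, logits.size(-1)), target.view(-1))
    ext_loss = F.binary_cross_entropy_with_logits(salience, ext_labels)
    avg_attn = attn_dists.mean(dim=1)                        # average attention per sentence
    cons_loss = F.kl_div(avg_attn.clamp_min(1e-8).log(),
                         F.softmax(salience, dim=-1), reduction='batchmean')
    w_abs, w_ext, w_cons = weights
    return w_abs * abs_loss + w_ext * ext_loss + w_cons * cons_loss

Under this simplified formulation, the extractive loss provides the auxiliary supervision on the shared encoder, while the consistency term realizes the constraint between the decoder attention and the estimated salience described above; the relative weights of the three terms are hypothetical and would need tuning.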



Acknowledgements

This research has been supported by an innovative technology fund (project no. GHP/036/17SZ) from the Innovation and Technology Commission of Hong Kong, and a donated research project (project no. 9220089) at City University of Hong Kong.

Author information


Corresponding author

Correspondence to Yangbin Chen.



Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Chen, Y., Ma, Y., Mao, X., Li, Q. (2018). Abstractive Summarization with the Aid of Extractive Summarization. In: Cai, Y., Ishikawa, Y., Xu, J. (eds) Web and Big Data. APWeb-WAIM 2018. Lecture Notes in Computer Science, vol. 10987. Springer, Cham. https://doi.org/10.1007/978-3-319-96890-2_1


  • DOI: https://doi.org/10.1007/978-3-319-96890-2_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-96889-6

  • Online ISBN: 978-3-319-96890-2

  • eBook Packages: Computer Science, Computer Science (R0)
