
A weakly-supervised extractive framework for sentiment-preserving document summarization


Abstract

The popularity of social media sites gives people new ways to share their experiences and voice their opinions, leading to explosive growth in user-generated content. Owing to the expressiveness of natural language, text data is of great value for exploring many kinds of knowledge. However, much user-generated text is longer than readers expect, making automatic document summarization a necessity for digesting it. In this paper, we focus on review-like, sentiment-oriented textual data. We propose the concept of Sentiment-preserving Document Summarization (SDS), which aims to summarize a long document into a shorter version that preserves the document's main sentiments without sacrificing readability. To tackle this problem, we devise an end-to-end weakly-supervised extractive framework built on deep neural networks, consisting of a hierarchical document encoder, a sentence extractor, a sentiment classifier, and a discriminator that distinguishes the extracted summaries from natural short reviews. The framework is weakly supervised in that no ground-truth summaries are used for training; instead, sentiment labels supervise the generated summary to preserve the sentiments of the original document. In particular, the sentence extractor is trained to generate summaries that i) lead the sentiment classifier to predict the same sentiment category as the original longer document, and ii) fool the discriminator into recognizing them as human-written short reviews. Experimental results on two public datasets validate the effectiveness of our framework.
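The abstract describes two training signals for the sentence extractor: a sentiment-preservation term and an adversarial term. The following is only a minimal sketch of how such a combined objective could be computed, not the paper's actual formulation; the function names, the weighting factor `alpha`, and the toy probabilities are all illustrative assumptions.

```python
import math

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target class (hypothetical helper)."""
    return -math.log(probs[target_idx])

def extractor_loss(sentiment_probs, doc_label, disc_prob_real, alpha=0.5):
    # (i) Sentiment-preservation term: the summary should be assigned
    #     the same sentiment category as the original document.
    l_sentiment = cross_entropy(sentiment_probs, doc_label)
    # (ii) Adversarial term: the extractor is rewarded when the
    #      discriminator scores the summary as a human-written review.
    l_adversarial = -math.log(disc_prob_real)
    # Illustrative linear combination of the two signals.
    return alpha * l_sentiment + (1 - alpha) * l_adversarial

# Toy example: the classifier is fairly sure the summary keeps the
# document's sentiment (label 1), and the discriminator is half-fooled.
loss = extractor_loss([0.2, 0.8], doc_label=1, disc_prob_real=0.5)
```

A perfect summary under this sketch (classifier certain of the right label, discriminator fully fooled) would drive both terms, and hence the loss, to zero.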


Figure 1
Figure 2



Author information


Corresponding author

Correspondence to Yun Ma.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


About this article


Cite this article

Ma, Y., Li, Q. A weakly-supervised extractive framework for sentiment-preserving document summarization. World Wide Web 22, 1401–1425 (2019). https://doi.org/10.1007/s11280-018-0591-0

