
An Extractive Text Summarization Based on Reinforcement Learning

Published: 26 June 2023

ABSTRACT

In recent years, the rapid development of network information technology has led to explosive growth in online text. As an efficient information-processing technology for the digital age, text summarization helps readers focus on the key information buried in massive volumes of text. However, text summarization still faces problems such as the difficulty of extracting from long texts and information redundancy. Building on a deep learning framework, this paper therefore proposes an extractive text summarization model that uses reinforcement learning to optimize the long-text extraction process and an attention mechanism to remove redundancy. On the CNN/Daily Mail dataset, automatic evaluation shows that our model outperforms previous work on ROUGE, and an ablation experiment demonstrates the effectiveness of the de-redundancy attention module.
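The pipeline the abstract describes, scoring sentences, extracting a subset, and optimizing against ROUGE, can be illustrated with a minimal sketch. This is not the authors' model: it is a hypothetical greedy extractor that uses a unigram-overlap ROUGE-1 F1 score (the same family of metric used for evaluation) as its selection signal, standing in for the learned reinforcement-learning policy.

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Unigram-overlap ROUGE-1 F1 between two token lists."""
    c, r = Counter(candidate), Counter(reference)
    overlap = sum((c & r).values())  # multiset intersection of unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def greedy_extract(sentences, reference, k=2):
    """Greedily pick up to k sentences that raise the summary's ROUGE-1 F1.

    The metric gain plays the role of the RL reward; stopping when no
    sentence improves the score loosely mimics redundancy removal, since
    a sentence repeating already-covered unigrams adds no recall.
    """
    summary_tokens, chosen = [], []
    for _ in range(k):
        best, best_score = None, rouge1_f(summary_tokens, reference)
        for i, sent in enumerate(sentences):
            if i in chosen:
                continue
            score = rouge1_f(summary_tokens + sent, reference)
            if score > best_score:
                best, best_score = i, score
        if best is None:  # no sentence improves the summary further
            break
        chosen.append(best)
        summary_tokens = summary_tokens + sentences[best]
    return chosen
```

In the real model the selection policy is a trained network and the ROUGE reward is used during training rather than at extraction time; the sketch only shows why a summary-level metric discourages redundant picks.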


Published in:
ICSIM '23: Proceedings of the 2023 6th International Conference on Software Engineering and Information Management
January 2023, 300 pages
ISBN: 9781450398237
DOI: 10.1145/3584871
Copyright © 2023 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


      Qualifiers

      • research-article
      • Research
      • Refereed limited
