DOI: 10.1145/3578741.3578752

A Faster Method For Generating Chinese Text Summaries - Combining Extractive Summarization And Abstractive Summarization

Published: 06 March 2023

ABSTRACT

Extractive summarization and abstractive summarization are the two main approaches to generating summaries. However, previous work has treated them as two independent subtasks. In this paper, we obtain summaries by combining the two approaches: the method first extracts the key information from the article, and then generates a summary from the extracted content. Experimental results show that, compared with extractive summarization, this method significantly improves the quality of the generated text, and, compared with abstractive summarization, it significantly improves generation speed.
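The page carries no code, but the two-stage pipeline the abstract describes can be sketched concretely. Below is a minimal Python illustration that assumes TF-IDF sentence scoring for the extractive stage and leaves the abstractive stage as a caller-supplied generator; the function names, the character-level TF-IDF choice, and the stand-in generator are illustrative assumptions, not details taken from the paper.

import re

from sklearn.feature_extraction.text import TfidfVectorizer


def extract_key_sentences(document: str, top_k: int = 5) -> str:
    """Stage 1 (extractive): keep the top-k sentences by summed TF-IDF weight."""
    # Naive split on Chinese and Western sentence-ending punctuation.
    sentences = [s for s in re.split(r"(?<=[。！？.!?])", document) if s.strip()]
    if len(sentences) <= top_k:
        return document
    # Character n-grams avoid needing a Chinese word segmenter in this sketch.
    tfidf = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))
    weights = tfidf.fit_transform(sentences)
    scores = weights.sum(axis=1).A1  # one salience score per sentence
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:top_k])  # restore original sentence order
    return "".join(sentences[i] for i in keep)


def summarize(document: str, generate) -> str:
    """Stage 2 (abstractive): run the generator only on the extracted sentences."""
    return generate(extract_key_sentences(document))


if __name__ == "__main__":
    def echo(text: str) -> str:  # stand-in generator for demonstration;
        return text              # substitute a pretrained Chinese seq2seq model

    print(summarize("今天天气很好。我们去公园散步。公园里有很多人。大家都很开心。", echo))

Because the abstractive model sees only the top-k extracted sentences rather than the full article, its input, and hence its decoding cost, shrinks substantially, which is consistent with the speed improvement the abstract reports.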


Published in

MLNLP '22: Proceedings of the 2022 5th International Conference on Machine Learning and Natural Language Processing, December 2022, 406 pages.
ISBN: 9781450399067
DOI: 10.1145/3578741
Copyright © 2022 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


