
Recent Progress on Text Summarisation Based on BERT and GPT

  • Conference paper
Knowledge Science, Engineering and Management (KSEM 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14120)

Abstract

Text summarisation is one of the essential topics in natural language processing. Pre-trained language models, especially BERT and GPT, are currently the most advanced methods for many natural language processing tasks, so many researchers have tried to apply them to text summarisation. To facilitate further research on this topic, this paper surveys the state of the art. Specifically, we summarise the main research issues and the BERT- and GPT-based solutions to them, compare these methods (especially their pros and cons), explore their applications, and discuss the challenges for future research.

Notes

  1. BLEU measures precision: how many words in the machine-generated summaries also appear in the human reference summaries.

  2. ROUGE measures recall: how many words in the human reference summaries also appear in the machine-generated summaries (a toy sketch of both computations follows these notes).
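
To make the precision/recall distinction concrete, here is a minimal Python sketch (ours, not the paper's) of unigram-overlap versions of the two scores, i.e. BLEU-1-style precision and ROUGE-1-style recall; full BLEU and ROUGE additionally use higher-order n-grams, brevity penalties, stemming options, and other refinements.

    # Toy unigram-overlap analogues of BLEU-style precision and
    # ROUGE-1-style recall for a single candidate/reference pair.
    from collections import Counter

    def unigram_precision(candidate: str, reference: str) -> float:
        """Fraction of candidate words that also appear in the reference (BLEU-like)."""
        cand, ref = Counter(candidate.split()), Counter(reference.split())
        overlap = sum(min(n, ref[w]) for w, n in cand.items())  # clipped word matches
        return overlap / max(sum(cand.values()), 1)

    def unigram_recall(candidate: str, reference: str) -> float:
        """Fraction of reference words that also appear in the candidate (ROUGE-1-like)."""
        cand, ref = Counter(candidate.split()), Counter(reference.split())
        overlap = sum(min(n, cand[w]) for w, n in ref.items())
        return overlap / max(sum(ref.values()), 1)

    reference = "the model summarises the document"
    candidate = "the model summarises text"
    print(f"precision = {unigram_precision(candidate, reference):.2f}")  # 0.75 (3/4)
    print(f"recall    = {unigram_recall(candidate, reference):.2f}")     # 0.60 (3/5)

The asymmetry matters for evaluation: a very short summary that copies a few reference words can score high precision yet low recall, which is one reason recall-oriented ROUGE is the customary metric for summarisation.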

Author information

Correspondence to Xudong Luo.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Yang, B., Luo, X., Sun, K., Luo, M.Y. (2023). Recent Progress on Text Summarisation Based on BERT and GPT. In: Jin, Z., Jiang, Y., Buchmann, R.A., Bi, Y., Ghiran, A.M., Ma, W. (eds) Knowledge Science, Engineering and Management. KSEM 2023. Lecture Notes in Computer Science (LNAI), vol 14120. Springer, Cham. https://doi.org/10.1007/978-3-031-40292-0_19

  • DOI: https://doi.org/10.1007/978-3-031-40292-0_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40291-3

  • Online ISBN: 978-3-031-40292-0

  • eBook Packages: Computer Science, Computer Science (R0)
