Can a Machine Generate a Meta-Review? How Far Are We?

  • Conference paper
Text, Speech, and Dialogue (TSD 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13502)

Abstract

A meta-review, usually written by the editor of a journal or the area/program chair of a conference, is a summary of the peer reviews and a concise interpretation of the editor's or chair's decision. Although the task closely resembles a multi-document summarization problem, automatically writing reviews on top of human-generated reviews remains largely unexplored. In this paper, we investigate how current state-of-the-art summarization techniques fare on this problem. We present a qualitative and quantitative evaluation of four radically different summarization approaches, and we examine how well the models preserve the aspects and sentiments expressed in the original peer reviews and meta-reviews. Finally, we conclude with our observations on why the task is challenging, how it differs from straightforward summarization, and how one should approach designing a meta-review generation model. Our Git repository is available at https://github.com/PrabhatkrBharti/MetaGen.git so that readers can replicate our findings.

P. K. Bharti and A. Kumar—Equal Contribution.
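To make the task concrete: cast as multi-document summarization, a baseline pipeline concatenates the peer reviews of a submission, runs an off-the-shelf abstractive summarizer over them, and scores the output against the chair's meta-review. The sketch below is illustrative only and is not the authors' pipeline; the model choice (PEGASUS via Hugging Face transformers), the toy reviews, and the use of the rouge-score package are assumptions made for this example.

```python
# Illustrative sketch (not the paper's exact setup): meta-review generation
# treated as multi-document summarization, scored against a reference
# meta-review with ROUGE.
from transformers import pipeline
from rouge_score import rouge_scorer

peer_reviews = [
    "The paper proposes a new attention mechanism; experiments are solid but limited to two datasets.",
    "Well written, yet the novelty over prior work is incremental and ablations are missing.",
    "Interesting idea but weak baselines; I lean towards rejection unless more analysis is added.",
]
reference_meta_review = (
    "Reviewers found the paper well written but raised concerns about limited "
    "novelty, missing ablations, and weak baselines."
)

# Simplest possible setup: concatenate all reviews into one source document.
summarizer = pipeline("summarization", model="google/pegasus-xsum")
candidate = summarizer(" ".join(peer_reviews), max_length=96, min_length=24)[0]["summary_text"]

# Compare the generated meta-review with the chair's meta-review using ROUGE,
# one of several metrics that could be used for this comparison.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
print("Candidate meta-review:", candidate)
print(scorer.score(reference_meta_review, candidate))
```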


Notes

  1. https://iclr.cc/Conferences/2020/MetareviewGuide.

  2. https://openreview.net/ (see the data-access sketch after these notes).

  3. https://openreview.net/forum?id=H1eH4n09KX.

  4. https://iclr.cc/Conferences/2020/MetareviewGuide.
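The footnotes above point to OpenReview, which hosts the peer reviews and area-chair meta-reviews for venues such as ICLR. As a minimal, hedged sketch (not code from the paper), the snippet below pulls all notes of one forum through the public OpenReview REST API (API v1) and separates reviewer reports from the meta-review; the endpoint and the content field names ("review", "metareview") are assumptions based on ICLR-style venues and may differ for other conferences.

```python
# Minimal sketch, not from the paper: fetch the discussion thread of one
# OpenReview forum and split official reviews from the area chair's
# meta-review. Endpoint and field names are assumptions (OpenReview API v1,
# ICLR-style venues).
import requests

FORUM_ID = "H1eH4n09KX"  # example forum from footnote 3

resp = requests.get(
    "https://api.openreview.net/notes",
    params={"forum": FORUM_ID},
    timeout=30,
)
resp.raise_for_status()
notes = resp.json().get("notes", [])

reviews, meta_reviews = [], []
for note in notes:
    content = note.get("content", {})
    if "review" in content:        # official reviewer report
        reviews.append(content["review"])
    elif "metareview" in content:  # area chair's meta-review
        meta_reviews.append(content["metareview"])

print(f"Fetched {len(reviews)} reviews and {len(meta_reviews)} meta-review(s).")
```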


Acknowledgement

Prabhat Kumar Bharti acknowledges a fellowship grant under the Quality Improvement Programme initiated by the All India Council for Technical Education (AICTE), Government of India. Asif Ekbal has received the Visvesvaraya Young Faculty Award; he acknowledges and thanks the Digital India Corporation, Ministry of Electronics and Information Technology, Government of India. Tirthankar Ghosal acknowledges and thanks Cactus Communications, India, for funding him.

Author information


Corresponding author

Correspondence to Prabhat Kumar Bharti.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Bharti, P.K., Kumar, A., Ghosal, T., Agrawal, M., Ekbal, A. (2022). Can a Machine Generate a Meta-Review? How Far Are We? In: Sojka, P., Horák, A., Kopeček, I., Pala, K. (eds) Text, Speech, and Dialogue. TSD 2022. Lecture Notes in Computer Science (LNAI), vol 13502. Springer, Cham. https://doi.org/10.1007/978-3-031-16270-1_23

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-16270-1_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16269-5

  • Online ISBN: 978-3-031-16270-1

  • eBook Packages: Computer Science, Computer Science (R0)
