
Automatic Evaluation of a Summary’s Linguistic Quality

  • Conference paper
  • Appears in: Natural Language Processing and Information Systems (NLDB 2016)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 9612)

Abstract

Evaluating a summary's linguistic quality is a difficult task because several linguistic aspects (e.g. grammaticality and coherence) must be verified to ensure that the summary is well formed. In this paper, we report the results of combining "Adapted ROUGE" scores with linguistic quality features to assess linguistic quality. Using linear regression, we build and evaluate models that predict the manual linguistic quality score. We construct models both for evaluating the quality of each text summary (summary-level evaluation) and for evaluating each summarizing system (system-level evaluation), where a system's performance is assessed from the quality of the set of summaries it generates. All models are evaluated using the Pearson correlation and the root mean squared error.
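To make the modeling setup concrete, the following is a minimal Python sketch (assuming numpy, scikit-learn, and scipy) of the scheme the abstract describes: a linear regression maps content and linguistic-quality features to the manual linguistic quality score, and predictions are scored with the Pearson correlation and the root mean squared error at both the summary level and the system level. The feature matrix, score scale, and system assignments below are random placeholders, not the paper's data or exact feature set.

# A minimal sketch, not the authors' exact pipeline: predict manual
# linguistic quality scores from "Adapted ROUGE" and linguistic-quality
# features with linear regression, then report Pearson's r and RMSE.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder data: one row per summary. In the paper the columns would
# be adapted ROUGE variants plus linguistic features (grammaticality,
# coherence indicators, etc.); here they are random stand-ins.
n_summaries, n_features = 200, 6
X = rng.random((n_summaries, n_features))      # predictor features (assumed)
y_manual = 5 * rng.random(n_summaries)         # manual scores (assumed 0-5 scale)
system_id = rng.integers(0, 10, n_summaries)   # producing system per summary

model = LinearRegression().fit(X, y_manual)
y_pred = model.predict(X)

# Summary-level evaluation: compare predicted and manual scores per summary.
r_sum, _ = pearsonr(y_pred, y_manual)
rmse_sum = np.sqrt(np.mean((y_pred - y_manual) ** 2))

# System-level evaluation: a system's score is the mean quality of the
# summaries it generated; compare predicted and manual system scores.
systems = np.unique(system_id)
pred_sys = np.array([y_pred[system_id == s].mean() for s in systems])
gold_sys = np.array([y_manual[system_id == s].mean() for s in systems])
r_sys, _ = pearsonr(pred_sys, gold_sys)
rmse_sys = np.sqrt(np.mean((pred_sys - gold_sys) ** 2))

print(f"summary level: r={r_sum:.3f}, RMSE={rmse_sum:.3f}")
print(f"system level:  r={r_sys:.3f}, RMSE={rmse_sys:.3f}")

A faithful reproduction would fit on training summaries and report Pearson/RMSE on held-out ones; this sketch scores on the training data only to keep the example short.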

Author information

Correspondence to Samira Ellouze.

Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Ellouze, S., Jaoua, M., Belguith, L.H. (2016). Automatic Evaluation of a Summary's Linguistic Quality. In: Métais, E., Meziane, F., Saraee, M., Sugumaran, V., Vadera, S. (eds) Natural Language Processing and Information Systems. NLDB 2016. Lecture Notes in Computer Science, vol 9612. Springer, Cham. https://doi.org/10.1007/978-3-319-41754-7_39

  • DOI: https://doi.org/10.1007/978-3-319-41754-7_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-41753-0

  • Online ISBN: 978-3-319-41754-7

  • eBook Packages: Computer Science (R0)
