
Evaluating the Performance of Discourse Parser Systems

  • Conference paper
Soft Computing Applications (SOFA 2014)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 356)


Abstract

This paper analyzes two different methods for comparing discourse tree structures. The aim is to identify the best technique for evaluating the output of a discourse parser system. The first is a traditional method based on quantifying the similarities between the parser output and a reference discourse tree (the gold tree); it uses three scores: Precision, Recall, and F-measure. The second examines the discourse tree representations in qualitative terms. Like the first, it computes three scores: an Overlapping score, which analyzes similarities in terms of topology; a Nuclearity score, which takes the type of nodes into account; and a Veins score, which measures the coherence of the discourse. By evaluating the discourse tree structures produced by a parser, the performance of discourse parser systems can be measured. A comparative survey is carried out to present assessment methods for discourse tree structures.
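
As a rough illustration of the first, quantitative method, the sketch below computes Precision, Recall, and F-measure over the sets of text spans covered by the internal nodes of a predicted discourse tree and a gold tree. This is one common way to operationalize the comparison; the nested-tuple tree encoding and the function names are assumptions made for the example, not the paper's implementation.

# Minimal sketch: Precision / Recall / F-measure over discourse tree spans.
# Assumption: a discourse tree is a nested tuple whose leaves are integer
# indices of elementary discourse units (EDUs) and whose internal nodes are
# binary pairs (left_subtree, right_subtree).

def tree_spans(tree):
    """Collect the (start, end) EDU span of every internal node of a tree."""
    spans = set()

    def walk(node):
        if isinstance(node, int):      # leaf = a single EDU
            return node, node
        left, right = node
        lo, _ = walk(left)
        _, hi = walk(right)
        spans.add((lo, hi))            # span covered by this internal node
        return lo, hi

    walk(tree)
    return spans

def precision_recall_f1(predicted_tree, gold_tree):
    """Quantitative comparison of a parser output against a gold tree."""
    pred, gold = tree_spans(predicted_tree), tree_spans(gold_tree)
    matched = len(pred & gold)
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

if __name__ == "__main__":
    gold = ((1, 2), (3, 4))        # [[1 2] [3 4]]
    parsed = (1, (2, (3, 4)))      # [1 [2 [3 4]]]
    print(precision_recall_f1(parsed, gold))   # -> (0.666..., 0.666..., 0.666...)

The qualitative method described in the abstract (Overlapping, Nuclearity, and Veins scores) would start from the same span extraction but additionally inspect node types and vein expressions, which requires nuclearity labels not modeled in this toy encoding.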



Acknowledgments

This paper is supported by the Sectoral Operational Programme Human Resources Development (SOP HRD), financed from the European Social Fund and by the Romanian Government under contract number POSDRU/159/1.5/S/133675.

Author information


Corresponding author

Correspondence to Elena Mitocariu.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Mitocariu, E. (2016). Evaluating the Performance of Discourse Parser Systems. In: Balas, V., Jain, L.C., Kovačević, B. (eds) Soft Computing Applications. SOFA 2014. Advances in Intelligent Systems and Computing, vol 356. Springer, Cham. https://doi.org/10.1007/978-3-319-18296-4_41


  • DOI: https://doi.org/10.1007/978-3-319-18296-4_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-18295-7

  • Online ISBN: 978-3-319-18296-4

  • eBook Packages: Engineering, Engineering (R0)
