Towards Better Evaluation for Human Language Technology

  • Conference paper
Large-Scale Knowledge Resources. Construction and Application (LKR 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4938)

Abstract

Both research and evaluation in human language technology have surged over the last fifteen years. Performance has advanced substantially, due in part to the availability of shared resources and to the interest generated by the many evaluation forums active today. But much more remains to be done, both in opening new areas of research and in improving the evaluation of those areas. This paper reviews the current state of the art in evaluation and then discusses some ideas for improving it.

Editor information

Takenobu Tokunaga, Antonio Ortega

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Harman, D. (2008). Towards Better Evaluation for Human Language Technology. In: Tokunaga, T., Ortega, A. (eds) Large-Scale Knowledge Resources. Construction and Application. LKR 2008. Lecture Notes in Computer Science, vol 4938. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78159-2_31

  • DOI: https://doi.org/10.1007/978-3-540-78159-2_31

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-78158-5

  • Online ISBN: 978-3-540-78159-2

  • eBook Packages: Computer Science, Computer Science (R0)
