Abstract
Both research and evaluation in human language technology have seen a major surge over the last fifteen years. Performance has advanced substantially, due in part to the availability of resources and to the interest generated by the many evaluation forums active today. But much remains to be done, both in opening new areas of research and in improving evaluation for those areas. This paper surveys the current state of the art in evaluation and then discusses some ideas for improving it.
Cite this paper
Harman, D. (2008). Towards Better Evaluation for Human Language Technology. In: Tokunaga, T., Ortega, A. (eds.) Large-Scale Knowledge Resources: Construction and Application. LKR 2008. Lecture Notes in Computer Science, vol. 4938. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78159-2_31
DOI: https://doi.org/10.1007/978-3-540-78159-2_31
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-78158-5
Online ISBN: 978-3-540-78159-2