
The TREC question answering track

Published online by Cambridge University Press: 14 February 2002

ELLEN M. VOORHEES
Affiliation: National Institute of Standards and Technology, Gaithersburg, MD 20899, USA

Abstract

The Text REtrieval Conference (TREC) question answering track is an effort to bring the benefits of large-scale evaluation to bear on a question answering (QA) task. The track has run twice so far, first in TREC-8 and again in TREC-9. In each case, the goal was to retrieve small snippets of text that contain the actual answer to a question, rather than the document lists traditionally returned by text retrieval systems. The best performing systems were able to answer about 70% of the questions in TREC-8 and about 65% of the questions in TREC-9. While the TREC-9 score is slightly worse in absolute terms, it represents a very significant improvement in question answering systems, because the TREC-9 task was considerably harder: TREC-9 used actual users' questions, whereas the TREC-8 questions were constructed specifically for the track. Future tracks will continue to challenge the QA community with more difficult, and more realistic, question answering tasks.

Type: Research Article
Copyright: © 2001 Cambridge University Press


Footnotes

This paper is an official contribution of the National Institute of Standards and Technology and is not subject to copyright in the United States; it is in the public domain.