Abstract
In this paper, we present the LIMSI question-answering system that participated in the Question Answering on Speech Transcripts (QAst) 2008 evaluation. This system is based on a complete, multi-level analysis of both queries and documents. It uses an automatically generated research descriptor; a score based on these descriptors is used to select documents and snippets. Candidate answers are extracted and scored using proximity measurements between the research descriptor elements and a number of secondary factors. We participated in all the subtasks and submitted 18 runs (for 16 subtasks). Depending on the task, accuracy ranges from 31% to 45% on manual transcripts and from 16% to 41% on automatic transcripts.
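The proximity-based scoring the abstract describes can be illustrated with a minimal sketch. All function names and the scoring formula here are assumptions for illustration, not the authors' implementation: each candidate answer in a snippet is scored by summing an inverse-distance contribution from each research-descriptor element found nearby.

```python
def proximity_score(tokens, descriptor_terms, candidate_idx):
    """Score a candidate answer at position candidate_idx by its proximity
    to research-descriptor terms in the snippet.

    Illustrative formula (an assumption, not the paper's exact scoring):
    each descriptor term contributes 1 / (1 + distance to its nearest
    occurrence), so closer terms contribute more.
    """
    score = 0.0
    for term in descriptor_terms:
        positions = [i for i, t in enumerate(tokens) if t == term]
        if not positions:
            continue  # descriptor element absent from this snippet
        nearest = min(abs(i - candidate_idx) for i in positions)
        score += 1.0 / (1.0 + nearest)
    return score


def best_candidate(tokens, descriptor_terms, candidate_indices):
    """Return the candidate position with the highest proximity score."""
    return max(candidate_indices,
               key=lambda i: proximity_score(tokens, descriptor_terms, i))


# Hypothetical snippet: descriptor terms "treaty" and "signed",
# candidate answers at the positions of "paris" (5) and "1951" (7).
tokens = "the treaty was signed in paris in 1951".split()
print(best_candidate(tokens, ["treaty", "signed"], [5, 7]))  # → 5 ("paris")
```

Secondary factors (e.g. answer-type match or redundancy across snippets) would be folded into this score as additional weighted terms.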
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Rosset, S., Galibert, O., Bernard, G., Bilinski, E., Adda, G. (2009). The LIMSI Multilingual, Multitask QAst System. In: Peters, C., et al. Evaluating Systems for Multilingual and Multimodal Information Access. CLEF 2008. Lecture Notes in Computer Science, vol 5706. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04447-2_59
DOI: https://doi.org/10.1007/978-3-642-04447-2_59
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-04446-5
Online ISBN: 978-3-642-04447-2
eBook Packages: Computer Science (R0)