Overview of the 2009 QA Track: Towards a Common Task for QA, Focused IR and Automatic Summarization Systems

  • Conference paper
Focused Retrieval and Evaluation (INEX 2009)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 6203)

Abstract

QA@INEX aims to evaluate a complex question-answering task. In such a task, the question set combines factoid questions that expect short, precise answers with more complex questions that can be answered by several sentences or by aggregating texts from different documents. Question answering, XML/passage retrieval and automatic summarization are combined in order to get closer to real information needs. This paper presents the groundwork carried out in 2009 to determine the tasks and a novel evaluation methodology that will be used in 2010.




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Moriceau, V., SanJuan, E., Tannier, X., Bellot, P. (2010). Overview of the 2009 QA Track: Towards a Common Task for QA, Focused IR and Automatic Summarization Systems. In: Geva, S., Kamps, J., Trotman, A. (eds) Focused Retrieval and Evaluation. INEX 2009. Lecture Notes in Computer Science, vol 6203. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14556-8_35

  • DOI: https://doi.org/10.1007/978-3-642-14556-8_35

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-14555-1

  • Online ISBN: 978-3-642-14556-8