Evaluating Interactive Cross-Language Information Retrieval: Document Selection

Conference paper. Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2069).

Abstract

The problem of finding documents that are written in a language that the searcher cannot read is perhaps the most challenging application of Cross-Language Information Retrieval (CLIR) technology. The first Cross-Language Evaluation Forum (CLEF) provided an excellent venue for assessing the performance of automated CLIR techniques, but little is known about how searchers and systems might interact to achieve better cross-language search results than automated systems alone can provide. This paper explores the question of how interactive approaches to CLIR might be evaluated, suggesting an initial focus on evaluation of interactive document selection. Important evaluation issues are identified, the structure of an interactive CLEF evaluation is proposed, and the key research communities that could be brought together by such an evaluation are introduced.
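
To make the idea of evaluating interactive document selection concrete, the sketch below scores the set of documents a searcher selects against assessor relevance judgments using set-based precision, recall, and van Rijsbergen's F-measure. This is a minimal illustration of one plausible scoring approach, not the measure the paper prescribes; the document identifiers and judgments are invented for the example.

    # Illustrative only: score one searcher's selections against relevance
    # judgments, as a CLEF-style interactive document-selection evaluation
    # might. All document IDs below are hypothetical.

    def selection_scores(selected, relevant, beta=1.0):
        """Return (precision, recall, F_beta) for a set of selected documents."""
        hits = len(selected & relevant)            # selected documents that are relevant
        precision = hits / len(selected) if selected else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        if precision == 0.0 and recall == 0.0:
            return precision, recall, 0.0
        f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
        return precision, recall, f_beta

    relevant = {"d03", "d07", "d11", "d19"}        # assessor judgments for one topic
    selected = {"d03", "d07", "d08"}               # documents the searcher selected
    p, r, f = selection_scores(selected, relevant)
    print(f"precision={p:.2f} recall={r:.2f} F1={f:.2f}")
    # -> precision=0.67 recall=0.50 F1=0.57

Setting beta above 1 weights recall more heavily, which may better suit tasks where overlooking a relevant foreign-language document is costly.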

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Oard, D.W. (2001). Evaluating Interactive Cross-Language Information Retrieval: Document Selection. In: Peters, C. (ed.) Cross-Language Information Retrieval and Evaluation. CLEF 2000. Lecture Notes in Computer Science, vol 2069. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44645-1_6

  • DOI: https://doi.org/10.1007/3-540-44645-1_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42446-8

  • Online ISBN: 978-3-540-44645-3
