
Choosing the Best MT Programs for CLIR Purposes – Can MT Metrics Be Helpful?

Conference paper
Advances in Information Retrieval (ECIR 2009)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 5478)

Included in the following conference series: European Conference on Information Retrieval (ECIR)

Abstract

This paper describes the use of MT metrics for choosing the best candidates among MT-based query translation resources. Our main metric is METEOR, but we also use NIST and BLEU. The language pair of our evaluation is English → German, because MT metrics still support only a limited number of language pairs. We evaluated translations of CLEF 2003 topics produced by four different MT programs with MT metrics and compared the metric results to the results of CLIR runs. Our results show that for long topics the correlation between achieved MAPs and MT metric scores is high (0.85–0.94), and for short topics lower but still clear (0.63–0.72). Overall, it seems that MT metrics can easily distinguish the worst MT programs from the best ones, but smaller differences are not shown as clearly. Some intrinsic properties of MT metrics are also unsuited to CLIR resource evaluation, because certain properties rewarded by translation metrics, especially word order, are not significant in CLIR.
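
The following is a minimal sketch of the evaluation setup the abstract describes: score each MT program's topic translations with an automatic MT metric, then correlate those scores with the MAP each program achieved in CLIR runs. All system names, translations, and numbers below are invented placeholders, not the paper's data, and corpus-level BLEU via NLTK stands in for the METEOR/NIST/BLEU tools actually used in the study.

```python
# Sketch: correlate MT metric scores with CLIR effectiveness (MAP).
# Everything here is illustrative placeholder data.

from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
from scipy.stats import pearsonr

# One tokenized German reference translation per (hypothetical) topic title.
references = [
    [["gesundheitsrisiken", "durch", "mobiltelefone"]],
    [["hochwasser", "in", "europa"]],
]

# Each MT program's candidate translations of the same topics (invented).
systems = {
    "mt_a": [["gesundheitsrisiken", "durch", "mobiltelefone"],
             ["hochwasser", "in", "europa"]],
    "mt_b": [["gesundheitsrisiken", "von", "mobiltelefonen"],
             ["flut", "in", "europa"]],
    # mt_c has the right words in the wrong order: MT metrics penalize
    # this, but a bag-of-words CLIR system would not.
    "mt_c": [["mobiltelefone", "gesundheitsrisiken", "durch"],
             ["europa", "in", "hochwasser"]],
    "mt_d": [["telefon", "gesund"],
             ["wasser", "land"]],
}

# Hypothetical MAPs from CLIR runs using each program's query translations.
map_scores = {"mt_a": 0.32, "mt_b": 0.27, "mt_c": 0.26, "mt_d": 0.11}

# Bigram BLEU with smoothing, since the "topics" here are very short.
smooth = SmoothingFunction().method1
bleu = {name: corpus_bleu(references, hyps, weights=(0.5, 0.5),
                          smoothing_function=smooth)
        for name, hyps in systems.items()}

names = sorted(systems)
r, _ = pearsonr([bleu[n] for n in names], [map_scores[n] for n in names])
for n in names:
    print(f"{n}: BLEU={bleu[n]:.3f}  MAP={map_scores[n]:.2f}")
print(f"Pearson r between BLEU and MAP: {r:.3f}")
```

The mt_c system illustrates the abstract's closing point: its scrambled word order depresses the metric score while leaving its (hypothetical) MAP close to mt_b's, since retrieval does not depend on word order.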




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kettunen, K. (2009). Choosing the Best MT Programs for CLIR Purposes – Can MT Metrics Be Helpful? In: Boughanem, M., Berrut, C., Mothe, J., Soule-Dupuy, C. (eds) Advances in Information Retrieval. ECIR 2009. Lecture Notes in Computer Science, vol 5478. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-00958-7_71

  • DOI: https://doi.org/10.1007/978-3-642-00958-7_71

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-00957-0

  • Online ISBN: 978-3-642-00958-7

  • eBook Packages: Computer Science (R0)
