Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 5459))

Abstract

In this paper we report our recent work on evaluating a number of popular automatic evaluation metrics for machine translation using parallel legal texts. The evaluation follows a recognized evaluation protocol and assesses the reliability, strengths, and weaknesses of these metrics in terms of their correlation with human judgments of translation quality. The results confirm the reliability of the well-known metrics BLEU and NIST for English-to-Chinese translation, and show that our metric ATEC outperforms all others for Chinese-to-English translation. We also demonstrate the remarkable impact of the choice of evaluation metric on the ranking of online machine translation systems for legal translation.
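The meta-evaluation protocol described in the abstract can be sketched as follows: score each MT system with an automatic metric, collect human judgments for the same outputs, and measure how well the two agree. The sketch below computes Pearson and Spearman correlation from scratch; all system scores and judgments are hypothetical illustration data, not figures from the paper.

```python
# Minimal sketch of metric meta-evaluation: correlate automatic metric
# scores with human judgments at the system level. Illustrative only.

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length score lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def ranks(xs):
    # Rank 1 = highest score; ties are not handled (fine for a sketch).
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    # Spearman rank correlation: Pearson correlation of the rank vectors.
    return pearson(ranks(xs), ranks(ys))

# Hypothetical system-level scores for five MT systems.
human  = [3.9, 3.1, 2.8, 3.5, 2.2]       # mean human judgments
metric = [0.31, 0.25, 0.27, 0.29, 0.18]  # automatic metric scores, e.g. BLEU

print(f"Pearson  r   = {pearson(human, metric):.3f}")
print(f"Spearman rho = {spearman(human, metric):.3f}")
```

A high correlation indicates the metric tracks human judgment; comparing the Spearman correlations of different metrics also reveals how much the choice of metric can reshuffle a system ranking, as the paper demonstrates for online MT systems.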

References

  1. Doyon, J., Taylor, K., White, J.: The DARPA Machine Translation Evaluation Methodology: Past and Present. In: AMTA 1998, Philadelphia, PA (1998)
  2. Tomita, M., Shirai, M., Tsutsumi, J., Matsumura, M., Yoshikawa, Y.: Evaluation of MT Systems by TOEFL. In: TMI 1993: The Fifth International Conference on Theoretical and Methodological Issues in Machine Translation, Kyoto, Japan, pp. 252–265 (1993)
  3. Yu, S.: Automatic Evaluation of Quality for Machine Translation Systems. Machine Translation 8, 117–126 (1993)
  4. Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: Bleu: a Method for Automatic Evaluation of Machine Translation. IBM Research Report, RC22176 (2001)
  5. Doddington, G.: Automatic Evaluation of Machine Translation Quality Using N-gram Co-occurrence Statistics. In: Second International Conference on Human Language Technology Research, San Diego, California, pp. 138–145 (2002)
  6. Snover, M., Dorr, B., Schwartz, R., Micciulla, L., Makhoul, J.: A Study of Translation Edit Rate with Targeted Human Annotation. In: AMTA 2006, Cambridge, Massachusetts, USA, pp. 223–231 (2006)
  7. Banerjee, S., Lavie, A.: METEOR: an Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In: ACL 2005 Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, Ann Arbor, Michigan, pp. 65–72 (2005)
  8. Liu, Q., Hou, H., Lin, S., Qian, Y., Zhang, Y., Isahara, H.: Introduction to China's HTRDP Machine Translation Evaluation. In: MT Summit X, Phuket, Thailand, pp. 18–22 (2005)
  9. Choukri, K., Hamon, O., Mostefa, D.: MT Evaluation & TC-STAR. In: MT Summit XI Workshop: Automatic Procedures in MT Evaluation, Copenhagen, Denmark (2007)
  10. NIST Open MT Evaluation, http://www.nist.gov/speech/tests/mt/
  11. Callison-Burch, C., Fordyce, C., Koehn, P., Monz, C., Schroeder, J.: Further Meta-evaluation of Machine Translation. In: ACL 2008: HLT - Third Workshop on Statistical Machine Translation, Columbus, Ohio, pp. 70–106 (2008)
  12. Culy, C., Riehemann, S.Z.: The Limits of N-gram Translation Evaluation Metrics. In: MT Summit IX, New Orleans, USA (2003)
  13. Callison-Burch, C., Osborne, M., Koehn, P.: Re-evaluating the Role of Bleu in Machine Translation Research. In: EACL 2006, Trento, Italy, pp. 249–256 (2006)
  14. Babych, B., Hartley, A., Elliott, D.: Estimating the Predictive Power of N-gram MT Evaluation Metrics across Language and Text Types. In: MT Summit X, Phuket, Thailand, pp. 412–418 (2005)
  15. Kit, C., Wong, T.M.: Comparative Evaluation of Online Machine Translation Systems with Legal Texts. Law Library Journal 100(2), 299–321 (2008)
  16. Kit, C., Liu, X., Sin, K.K., Webster, J.J.: Harvesting the Bitexts of the Laws of Hong Kong from the Web. In: 5th Workshop on Asian Language Resources, Jeju Island, pp. 71–78 (2005)
  17. Estrella, P., Hamon, O., Popescu-Belis, A.: How Much Data is Needed for Reliable MT Evaluation? Using Bootstrapping to Study Human and Automatic Metrics. In: MT Summit XI, Copenhagen, Denmark, pp. 167–174 (2007)
  18. NIST's Guideline of Machine Translation Assessment, http://projects.ldc.upenn.edu/TIDES/Translation/TransAssess04.pdf
  19. Wong, T.M., Kit, C.: Word Choice and Word Position for Automatic MT Evaluation. In: AMTA 2008 Workshop: Metrics for Machine Translation Challenge, Waikiki, Hawaii (2008)
  20. Zhao, H., Huang, C.N., Li, M.: An Improved Chinese Word Segmentation System with Conditional Random Field. In: Fifth SIGHAN Workshop on Chinese Language Processing, Sydney, Australia, pp. 162–165 (2006)

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wong, B.T.M., Kit, C. (2009). Meta-evaluation of Machine Translation Using Parallel Legal Texts. In: Li, W., Mollá-Aliod, D. (eds.) Computer Processing of Oriental Languages. Language Technology for the Knowledge-based Economy. ICCPOL 2009. Lecture Notes in Computer Science (LNAI), vol. 5459. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-00831-3_33

  • DOI: https://doi.org/10.1007/978-3-642-00831-3_33

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-00830-6

  • Online ISBN: 978-3-642-00831-3

  • eBook Packages: Computer Science, Computer Science (R0)
