Abstract
In this paper we report our recent work on evaluating a number of popular automatic evaluation metrics for machine translation using parallel legal texts. The evaluation follows a recognized evaluation protocol and assesses the reliability, strengths and weaknesses of these metrics in terms of their correlation with human judgment of translation quality. The results confirm the reliability of the well-known metrics BLEU and NIST for English-to-Chinese translation, and show that our metric ATEC outperforms all others for Chinese-to-English translation. We also demonstrate the remarkable impact of the choice of evaluation metric on the ranking of online machine translation systems for legal translation.
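The core of such a meta-evaluation is measuring how well a metric's scores track human judgments, typically via rank correlation across systems. The sketch below illustrates the idea with Spearman correlation computed from scratch; all scores are invented for illustration and are not data from the paper.

```python
# Illustrative sketch of metric meta-evaluation: correlating automatic
# metric scores with human judgments over a set of MT systems.
# All scores below are hypothetical, not from the paper.

def ranks(xs):
    """1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman = Pearson correlation of the rank vectors
    return pearson(ranks(x), ranks(y))

# Hypothetical human adequacy scores for five MT systems,
# against two automatic metrics:
human    = [3.2, 2.8, 4.1, 3.6, 2.5]
metric_a = [0.31, 0.27, 0.45, 0.38, 0.22]  # ranks the systems like the humans do
metric_b = [0.40, 0.41, 0.35, 0.30, 0.44]  # ranks them quite differently

print(round(spearman(human, metric_a), 3))
print(round(spearman(human, metric_b), 3))
```

A metric whose ranking of systems matches the human ranking scores near 1.0; a metric that scrambles the ranking scores near 0 or below.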
References
Doyon, J., Taylor, K., White, J.: The DARPA Machine Translation Evaluation Methodology: Past and Present. In: AMTA 1998, Philadelphia, PA (1998)
Tomita, M., Shirai, M., Tsutsumi, J., Matsumura, M., Yoshikawa, Y.: Evaluation of MT Systems by TOEFL. In: TMI 1993: The Fifth International Conference on Theoretical and Methodological Issues in Machine Translation, Kyoto, Japan, pp. 252–265 (1993)
Yu, S.: Automatic Evaluation of Quality for Machine Translation Systems. Machine Translation 8, 117–126 (1993)
Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: Bleu: a Method for Automatic Evaluation of Machine Translation. IBM Research Report, RC22176 (2001)
Doddington, G.: Automatic Evaluation of Machine Translation Quality Using N-gram Co-occurrence Statistics. In: Second International Conference on Human Language Technology Research, San Diego, California, pp. 138–145 (2002)
Snover, M., Dorr, B., Schwartz, R., Micciulla, L., Makhoul, J.: A Study of Translation Edit Rate with Targeted Human Annotation. In: AMTA 2006, Cambridge, Massachusetts, USA, pp. 223–231 (2006)
Banerjee, S., Lavie, A.: METEOR: an Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In: ACL 2005: Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pp. 65–72. University of Michigan, Ann Arbor (2005)
Liu, Q., Hou, H., Lin, S., Qian, Y., Zhang, Y., Isahara, H.: Introduction to China’s HTRDP Machine Translation Evaluation. In: MT Summit X, Phuket, Thailand, pp. 18–22 (2005)
Choukri, K., Hamon, O., Mostefa, D.: MT evaluation & TC-STAR. In: MT Summit XI Workshop: Automatic Procedures in MT Evaluation, Copenhagen, Denmark (2007)
NIST Open MT Evaluation, http://www.nist.gov/speech/tests/mt/
Callison-Burch, C., Fordyce, C., Koehn, P., Monz, C., Schroeder, J.: Further Meta-evaluation of Machine Translation. In: ACL 2008: HLT - Third Workshop on Statistical Machine Translation, pp. 70–106. Ohio State University, Columbus (2008)
Culy, C., Riehemann, S.Z.: The Limits of N-gram Translation Evaluation Metrics. In: MT Summit IX, New Orleans, USA (2003)
Callison-Burch, C., Osborne, M., Koehn, P.: Re-evaluating the Role of Bleu in Machine Translation Research. In: EACL 2006, Trento, Italy, pp. 249–256 (2006)
Babych, B., Hartley, A., Elliott, D.: Estimating the Predictive Power of N-gram MT Evaluation Metrics across Language and Text Types. In: MT Summit X, Phuket, Thailand, pp. 412–418 (2005)
Kit, C., Wong, T.M.: Comparative Evaluation of Online Machine Translation Systems with Legal Texts. Law Library Journal 100(2), 299–321 (2008)
Kit, C., Liu, X., Sin, K.K., Webster, J.J.: Harvesting the Bitexts of the Laws of Hong Kong from the Web. In: 5th Workshop on Asian Language Resources, Jeju Island, pp. 71–78 (2005)
Estrella, P., Hamon, O., Popescu-Belis, A.: How much Data is Needed for Reliable MT Evaluation? Using Bootstrapping to Study Human and Automatic Metrics. In: MT Summit XI, Copenhagen, Denmark, pp. 167–174 (2007)
NIST’s Guideline of Machine Translation Assessment, http://projects.ldc.upenn.edu/TIDES/Translation/TransAssess04.pdf
Wong, T.M., Kit, C.: Word Choice and Word Position for Automatic MT Evaluation. In: AMTA 2008 Workshop: Metrics for Machine Translation Challenge, Waikiki, Hawaii (2008)
Zhao, H., Huang, C.N., Li, M.: An Improved Chinese Word Segmentation System with Conditional Random Field. In: Fifth SIGHAN Workshop on Chinese Language Processing, Sydney, Australia, pp. 162–165 (2006)
© 2009 Springer-Verlag Berlin Heidelberg
Wong, B.T.M., Kit, C. (2009). Meta-evaluation of Machine Translation Using Parallel Legal Texts. In: Li, W., Mollá-Aliod, D. (eds) Computer Processing of Oriental Languages. Language Technology for the Knowledge-based Economy. ICCPOL 2009. Lecture Notes in Computer Science, vol 5459. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-00831-3_33
Print ISBN: 978-3-642-00830-6
Online ISBN: 978-3-642-00831-3