
Contemplating Automatic MT Evaluation

  • Conference paper
Envisioning Machine Translation in the Information Future (AMTA 2000)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1934)

Abstract

Researchers, developers, translators, and information consumers all share the problem that there is no accepted standard for machine translation evaluation. The problem is further confounded by the fact that MT evaluations, properly done, require a considerable commitment of time and resources, an anachronism in this day of cross-lingual information processing, when new MT systems may be developed in weeks instead of years. This paper surveys the needs addressed by several of the classic “types” of MT evaluation, and speculates on ways that each of these types might be automated to create relevant, near-instantaneous evaluation of approaches and systems.
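The abstract itself proposes no specific algorithm, but as a hypothetical sketch of what "near-instantaneous" automated evaluation might look like, consider a toy reference-based n-gram overlap score: it trades the human judgments the paper describes for a score computable in milliseconds (anticipating metrics such as BLEU, which appeared shortly after this paper). The function and examples below are illustrative assumptions, not the author's method.

```python
from collections import Counter


def ngram_precision(candidate: str, reference: str, n: int = 2) -> float:
    """Fraction of candidate n-grams that also occur in the reference.

    A toy adequacy proxy: instantaneous to compute, unlike the
    time-consuming human evaluations discussed in the paper.
    """
    def ngrams(text: str) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    total = sum(cand.values())
    if total == 0:
        return 0.0
    # Clip each n-gram's count by its count in the reference, then normalize.
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    return overlap / total


# A candidate closer to the reference translation scores higher:
print(ngram_precision("the cat sat on the mat", "the cat sat on the mat"))        # → 1.0
print(ngram_precision("a cat is on a mat", "the cat sat on the mat", n=1))        # → 0.5
```

Such a score is obviously crude (it rewards surface overlap, not meaning), which is precisely why relating automatic scores to the established evaluation types is the open question the paper contemplates.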


Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

White, J.S. (2000). Contemplating Automatic MT Evaluation. In: White, J.S. (eds) Envisioning Machine Translation in the Information Future. AMTA 2000. Lecture Notes in Computer Science(), vol 1934. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-39965-8_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-41117-8

  • Online ISBN: 978-3-540-39965-0

  • eBook Packages: Springer Book Archive
