Abstract
This paper addresses the evaluation of English-to-Urdu machine translation. Evaluation measures the quality of machine translation output and follows two broad approaches: human evaluation and automatic evaluation. In this paper, we concentrate mainly on human evaluation. Machine translation is an emerging research area in which human judgment plays a crucial role: because language is vast and diverse in nature, translation accuracy is difficult to maintain, and human evaluation is therefore taken as the baseline for assessing it. Human evaluation can apply different parameters to judge the quality of translated sentences.
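As a concrete illustration of parameter-based human evaluation, the sketch below aggregates per-sentence human ratings into a single system score. It is a minimal example, not the evaluation procedure of this paper: the parameter names (adequacy, fluency), the 1-5 rating scale, and the equal weighting are assumptions chosen only for illustration.

```python
# Minimal sketch of parameter-based human evaluation of MT output.
# Assumptions (not from the paper): two parameters, "adequacy" and
# "fluency", each rated by several judges on a 1-5 scale and weighted equally.

from statistics import mean

# Hypothetical ratings: one dict per translated sentence, listing each judge's scores.
ratings = [
    {"adequacy": [4, 5, 4], "fluency": [3, 4, 4]},
    {"adequacy": [2, 3, 3], "fluency": [2, 2, 3]},
]

WEIGHTS = {"adequacy": 0.5, "fluency": 0.5}  # assumed equal weighting

def sentence_score(sentence_ratings):
    """Average each parameter over judges, then combine with the weights."""
    return sum(
        WEIGHTS[param] * mean(scores)
        for param, scores in sentence_ratings.items()
    )

# System-level score: mean of per-sentence scores, normalised to the 0-1 range.
system_score = mean(sentence_score(s) for s in ratings) / 5
print(f"Human evaluation score: {system_score:.2f}")
```

Running this sketch prints a single normalised score (0.65 for the sample ratings above), which could then be compared across systems or against automatic metrics.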
Copyright information
© 2014 Springer India
Cite this paper
Gupta, V., Joshi, N., Mathur, I. (2014). Evaluation of English-to-Urdu Machine Translation. In: Mohapatra, D.P., Patnaik, S. (eds) Intelligent Computing, Networking, and Informatics. Advances in Intelligent Systems and Computing, vol 243. Springer, New Delhi. https://doi.org/10.1007/978-81-322-1665-0_33
DOI: https://doi.org/10.1007/978-81-322-1665-0_33
Publisher Name: Springer, New Delhi
Print ISBN: 978-81-322-1664-3
Online ISBN: 978-81-322-1665-0