Abstract
Machine translation quality estimation (Quality Estimation, QE) aims to evaluate the quality of machine translation automatically, without a gold reference. QE is an important component in making machine translation useful in real-world applications. Existing approaches require large amounts of expert-annotated data. Recently, there have been attempts to perform QE in an unsupervised manner, but these methods rely on glass-box features, which require access to the internals of the machine translation system. In this paper, we propose a new paradigm for unsupervised QE in a black-box setting, without relying on human-annotated data or model-related features. We create pseudo-data based on Machine Translation Evaluation (MTE) metrics from existing machine translation parallel corpora, and this data is used to fine-tune multilingual pre-trained language models to fit human evaluation. Experimental results show that our model surpasses previous unsupervised methods by a large margin and achieves state-of-the-art results on the MLQE dataset.
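The pipeline sketched in the abstract can be illustrated in miniature. The snippet below is a hedged sketch, not the paper's implementation: it uses a simple n-gram F-score as a stand-in for the MTE metrics the paper employs (e.g. BLEU-style scoring), and shows how (source, MT output, pseudo-label) triples would be assembled from a parallel corpus; the names `pseudo_score` and `pseudo_data` are illustrative, not from the paper.

```python
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams, so repeated n-grams are counted correctly.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def pseudo_score(hypothesis, reference, max_n=4):
    """Average n-gram F-score between the MT output and the reference.

    A toy stand-in for a full MTE metric: it produces a score in [0, 1]
    that can serve as a pseudo quality label for QE training.
    """
    hyp, ref = hypothesis.split(), reference.split()
    scores = []
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        if not h or not r:  # sentence shorter than n
            continue
        overlap = sum((h & r).values())  # clipped n-gram matches
        prec = overlap / sum(h.values())
        rec = overlap / sum(r.values())
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

# Build pseudo QE training triples from a parallel corpus: translate each
# source with the MT system, then score the output against the reference.
parallel = [("Das ist gut .", "This is good .")]   # (source, reference)
mt_outputs = ["This is well ."]                    # MT system translations
pseudo_data = [(src, mt, pseudo_score(mt, ref))
               for (src, ref), mt in zip(parallel, mt_outputs)]
```

The resulting (source, translation, pseudo-label) triples would then be used to fine-tune a multilingual pre-trained language model as a regressor over the source–translation pair, so that at test time quality can be predicted with no reference and no access to the MT system's internals.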
H. Huang---Work was done while Hui Huang was an intern at the Research and Development Center, Toshiba (China) Co., Ltd., China.
Acknowledgement
This work is supported by the National Natural Science Foundation of China (Contracts 61976015, 61976016, 61876198 and 61370130), the Beijing Municipal Natural Science Foundation (Contract 4172047), the International Science and Technology Cooperation Program of the Ministry of Science and Technology (K11F100010), and Toshiba (China) Co., Ltd.
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
Cite this paper
Huang, H., Di, H., Xu, J., Ouchi, K., Chen, Y. (2020). Unsupervised Machine Translation Quality Estimation in Black-Box Setting. In: Li, J., Way, A. (eds) Machine Translation. CCMT 2020. Communications in Computer and Information Science, vol 1328. Springer, Singapore. https://doi.org/10.1007/978-981-33-6162-1_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-33-6161-4
Online ISBN: 978-981-33-6162-1