Abstract
Machine translation quality estimation (QE) aims to automatically evaluate the quality of machine translation output without a gold reference. QE can be performed at different granularities, yielding estimates for different aspects of the machine translation output. In this paper, we propose an effective method that leverages pretrained language models to improve QE performance. Our model combines two popular pretrained models, BERT and XLM, to create a very strong baseline for both sentence-level and word-level QE. We also propose a simple yet effective strategy, ensemble distillation, to further improve the accuracy of the QE system. Ensemble distillation integrates knowledge from multiple models into a single model, strengthening each individual model by a large margin. We evaluate our system on the CCMT2019 Chinese-English and English-Chinese QE datasets, which contain word-level and sentence-level subtasks. Experimental results show that our model surpasses previous models by a considerable margin, demonstrating the effectiveness of the proposed method.
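The core of ensemble distillation, as described in the abstract, is training a single student model against the averaged soft predictions of several teacher models rather than against hard labels alone. The sketch below illustrates that objective in a minimal, framework-free form; the function names, the softmax temperature `T`, and the uniform averaging of teachers are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def ensemble_soft_targets(teacher_logits, T=2.0):
    """Average the softened output distributions of several teachers."""
    probs = [softmax(l, T) for l in teacher_logits]
    return np.mean(probs, axis=0)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the ensemble's soft targets and the student.

    Minimizing this pushes the student toward the ensemble's averaged
    prediction, transferring knowledge from all teachers into one model.
    """
    target = ensemble_soft_targets(teacher_logits, T)
    log_q = np.log(softmax(student_logits, T))
    return float(-(target * log_q).sum())
```

In practice this loss would be computed per token (for word-level QE) or per sentence (for sentence-level QE) and typically combined with the supervised loss on gold labels; those details are omitted here.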
H. Huang—Work was done when Hui Huang was an intern at the Research and Development Center, Toshiba (China) Co., Ltd., China.
Acknowledgement
This work is supported by the National Natural Science Foundation of China (Contracts 61976015, 61976016, 61876198 and 61370130), the Beijing Municipal Natural Science Foundation (Contract 4172047), the International Science and Technology Cooperation Program of the Ministry of Science and Technology (K11F100010), and Toshiba (China) Co., Ltd.
© 2020 Springer Nature Switzerland AG
Cite this paper
Huang, H., Di, H., Xu, J., Ouchi, K., Chen, Y. (2020). Ensemble Distilling Pretrained Language Models for Machine Translation Quality Estimation. In: Zhu, X., Zhang, M., Hong, Y., He, R. (eds) Natural Language Processing and Chinese Computing. NLPCC 2020. Lecture Notes in Computer Science(), vol 12431. Springer, Cham. https://doi.org/10.1007/978-3-030-60457-8_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-60456-1
Online ISBN: 978-3-030-60457-8