
Quality Estimation with Transformer and RNN Architectures

  • Conference paper
  • First Online:
Machine Translation (CCMT 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1104)

Included in the following conference series:

  • China Conference on Machine Translation (CCMT)

Abstract

The goal of the China Conference on Machine Translation (CCMT 2019) Shared Task on Quality Estimation (QE) is to investigate automatic methods for estimating the quality of Chinese↔English machine translation output without reference translations. This paper presents our team's submissions to the sentence-level Quality Estimation shared task of CCMT 2019. Given the strong performance of neural models in previous WMT shared tasks, both of our submissions are neural-based. The first, Bi-Transformer, uses a bidirectional Transformer as a feature extractor and feeds the resulting semantic representations of the source sentence and the translation output into a Bi-LSTM predictive model for automatic quality estimation. The second, a BiRNN architecture, uses only two bidirectional RNNs (bi-RNNs) with Gated Recurrent Units (GRUs) as encoders and learns representations of source-translation sentence pairs to predict the quality of the translation outputs.
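To make the BiRNN submission concrete, below is a minimal PyTorch sketch of a sentence-level QE model in the spirit described above: two bidirectional GRU encoders read the source sentence and the MT output, and their sentence representations are combined to regress a quality score. The names and sizes here (BiRNNQualityEstimator, hid_dim, the mean pooling, the concatenation, the sigmoid regressor) are illustrative assumptions, not the authors' exact configuration; only the use of two bi-directional GRU encoders over the source and translation output is taken from the abstract.

```python
import torch
import torch.nn as nn

class BiRNNQualityEstimator(nn.Module):
    """Minimal sketch of a BiRNN sentence-level QE model: two bi-GRU
    encoders (source side, MT side) whose sentence vectors are combined
    to predict a quality score. Pooling, sizes, and the regressor head
    are illustrative assumptions, not the authors' exact setup."""

    def __init__(self, src_vocab, tgt_vocab, emb_dim=300, hid_dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim, padding_idx=0)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim, padding_idx=0)
        # One bidirectional GRU encoder per side, as in the BiRNN submission.
        self.src_enc = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.tgt_enc = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        # Regressor over the concatenated sentence vectors (assumed design).
        self.scorer = nn.Sequential(
            nn.Linear(4 * hid_dim, hid_dim),
            nn.Tanh(),
            nn.Linear(hid_dim, 1),
            nn.Sigmoid(),  # quality score in [0, 1]
        )

    @staticmethod
    def _sentence_vector(outputs):
        # Mean-pool the bi-GRU outputs over time (pooling choice is assumed).
        return outputs.mean(dim=1)

    def forward(self, src_ids, mt_ids):
        src_out, _ = self.src_enc(self.src_emb(src_ids))
        mt_out, _ = self.tgt_enc(self.tgt_emb(mt_ids))
        pair = torch.cat([self._sentence_vector(src_out),
                          self._sentence_vector(mt_out)], dim=-1)
        return self.scorer(pair).squeeze(-1)

# Usage: score a batch of (source, MT output) token-id tensors.
model = BiRNNQualityEstimator(src_vocab=32000, tgt_vocab=32000)
src = torch.randint(1, 32000, (8, 20))   # 8 source sentences, 20 tokens each
mt = torch.randint(1, 32000, (8, 22))    # corresponding MT outputs
scores = model(src, mt)                  # shape (8,): predicted quality scores
```

In practice such a model would be trained with a regression loss (e.g. mean squared error) against the shared task's sentence-level quality labels; the exact training objective and hyperparameters used by the authors are not specified here.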


Notes

  1. https://www.quest.dcs.shef.ac.uk/.

  2. www.chokkan.org/software/crfsuite/.

  3. https://github.com/hankcs/HanLP.

  4. https://github.com/moses-smt/mosesdecoder/tree/master/scripts/tokenizer.


Acknowledgment

This work is supported by the China Postdoctoral Science Foundation (CPSF, Grant No. 2018M640069).

Author information


Corresponding author

Correspondence to Chong Feng.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Zhang, Y., Feng, C., Li, H. (2019). Quality Estimation with Transformer and RNN Architectures. In: Huang, S., Knight, K. (eds) Machine Translation. CCMT 2019. Communications in Computer and Information Science, vol 1104. Springer, Singapore. https://doi.org/10.1007/978-981-15-1721-1_7

  • DOI: https://doi.org/10.1007/978-981-15-1721-1_7

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-1720-4

  • Online ISBN: 978-981-15-1721-1

  • eBook Packages: Computer Science, Computer Science (R0)
