Abstract
Training neural machine translation (NMT) models requires large amounts of diverse training corpora, which makes collecting sufficient data challenging. In addition, labeling monolingual corpora demands professional knowledge of the relevant domain, and collaboration between institutes raises further problems, such as the legality of data exchange and the risk of commercial data leakage.
In this paper, we propose FedNMT, a federated neural machine translation model that trains a robust translation system without sharing participants' raw data. With FedNMT, NMT systems can be improved using corpora held by different contributors without directly exposing those corpora to one another. The approach preserves user privacy by combining a federated learning framework with encryption techniques. In the federated learning paradigm, a global model is distributed to client devices, and a central server aggregates the clients' learning parameters and applies the gradient updates. Experimental results show the effectiveness of our model in comparison with a data-centralized model.
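The server-side aggregation described in the abstract can be sketched in a few lines. This is a generic federated averaging (FedAvg, McMahan et al. 2017) sketch, not the authors' implementation; all names (`ClientUpdate`, `fed_avg`) are illustrative, and real NMT parameters would be tensors rather than flat lists.

```python
# Minimal FedAvg sketch: the central server averages client model
# parameters, weighted by each client's number of training examples.
# Names are illustrative, not taken from the FedNMT paper.

from typing import Dict, List, NamedTuple

class ClientUpdate(NamedTuple):
    num_examples: int                 # size of the client's local corpus
    params: Dict[str, List[float]]   # parameter name -> flattened weights

def fed_avg(updates: List[ClientUpdate]) -> Dict[str, List[float]]:
    """Aggregate client parameters into a new global model."""
    total = sum(u.num_examples for u in updates)
    global_params: Dict[str, List[float]] = {}
    for name in updates[0].params:
        acc = [0.0] * len(updates[0].params[name])
        for u in updates:
            w = u.num_examples / total   # weight by local data size
            for i, v in enumerate(u.params[name]):
                acc[i] += w * v
        global_params[name] = acc
    return global_params

# Two clients with unequal corpus sizes; the larger client pulls
# the averaged global parameters toward its own values.
a = ClientUpdate(100, {"embed": [1.0, 2.0]})
b = ClientUpdate(300, {"embed": [3.0, 4.0]})
print(fed_avg([a, b]))  # {'embed': [2.5, 3.5]}
```

In the privacy-preserving setting the paper targets, the individual `params` would additionally be protected (e.g. via homomorphic encryption or differential privacy) so the server never sees any single client's plaintext update.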
Acknowledgement
This paper is supported by the National Key Research and Development Program of China under grant No. 2018YFB0204403.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Wang, J., Huang, Z., Kong, L., Li, D., Xiao, J. (2021). Modeling Without Sharing Privacy: Federated Neural Machine Translation. In: Zhang, W., Zou, L., Maamar, Z., Chen, L. (eds) Web Information Systems Engineering – WISE 2021. WISE 2021. Lecture Notes in Computer Science(), vol 13080. Springer, Cham. https://doi.org/10.1007/978-3-030-90888-1_18
Print ISBN: 978-3-030-90887-4
Online ISBN: 978-3-030-90888-1