Abstract
Federated learning, in which participating organizations exchange only partial gradients rather than their proprietary datasets, has become a promising approach to training deep learning models collaboratively. However, conventional federated learning built on a centralized parameter server is susceptible to “recovery” attacks: an attacker who collects enough gradients from the organizations can reconstruct the original training data. To address this problem, we first propose a blockchain-based decentralized model training architecture for federated learning, which is more robust than the centralized architecture. On top of this architecture, we develop a joint efficiency- and randomness-aware gradient aggregation approach. Our real-world experiments show that the design has no single point of failure; moreover, it increases the model accuracy of the participating organizations while mitigating the risk of data privacy disclosure and improving gradient aggregation performance.
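To make the decentralized aggregation idea concrete, below is a minimal sketch in Python, assuming a simple append-only ledger abstraction in place of a full blockchain. The `Ledger` class, the `aggregate` function, and the random peer-sampling step are illustrative assumptions; the abstract does not spell out the paper's joint efficiency- and randomness-aware algorithm.

```python
import random
import numpy as np

class Ledger:
    """Hypothetical append-only ledger standing in for the blockchain;
    every organization can publish entries and read all published ones."""
    def __init__(self):
        self.entries = []  # list of (org_id, gradient) tuples

    def publish(self, org_id, gradient):
        self.entries.append((org_id, gradient))

    def fetch_peers(self, own_id):
        return [g for org, g in self.entries if org != own_id]

def aggregate(ledger, own_id, sample_size, rng):
    """Average a randomly sampled subset of peer gradients, so no single
    party needs to collect every organization's gradient (a sketch of the
    "randomness aware" idea, not the paper's exact aggregation rule)."""
    peers = ledger.fetch_peers(own_id)
    chosen = rng.sample(peers, min(sample_size, len(peers)))
    return np.mean(chosen, axis=0)

# Toy run: three organizations publish stand-in local gradients, then
# organization 0 aggregates a random subset of its peers' gradients.
rng = random.Random(42)
ledger = Ledger()
for org_id in range(3):
    ledger.publish(org_id, np.random.randn(10))  # placeholder gradient
update = aggregate(ledger, own_id=0, sample_size=2, rng=rng)
print(update.shape)  # (10,)
```

The point the sketch illustrates is architectural: aggregation happens at the organizations over a shared ledger rather than at a central parameter server, so compromising a single node does not expose every gradient.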
J. Zhao and X. Wu contributed equally to this work.
Acknowledgments
This work is supported in part by NSFC (Grant No. 61872215), Shenzhen Science and Technology Program (Grant No. RCYX20200714114523079), Shenzhen Nanshan District Ling-Hang Team Project (Grant No. LHTD20170005), Featured Innovation Project of Guangdong Education Department (Grant No. 2020KTSCX126), and Natural Science Foundation of Top Talent of SZTU (Grant No. 2018010801008).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhao, J., Wu, X., Zhang, Y., Wu, Y., Wang, Z. (2021). A Blockchain Based Decentralized Gradient Aggregation Design for Federated Learning. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2021. Lecture Notes in Computer Science, vol 12892. Springer, Cham. https://doi.org/10.1007/978-3-030-86340-1_29
DOI: https://doi.org/10.1007/978-3-030-86340-1_29
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86339-5
Online ISBN: 978-3-030-86340-1