Abstract
Federated learning enables collaborative model training across multiple clients without sharing raw data, in keeping with privacy regulations: each client sends model updates (gradients) to a central server, which aggregates them to improve a global model. Despite these benefits, federated learning is vulnerable to gradient inversion attacks, which can reconstruct private data from the shared gradients. Traditional defenses, including cryptography, differential privacy, and perturbation techniques, offer protection but can degrade computational efficiency and model performance. In this paper, we therefore introduce the Secure Convolutional Neural Network (SecCNN), a novel approach that embeds an upsampling layer into a CNN to provide an inherent defense against gradient inversion attacks. SecCNN leverages Rank Analysis for enhanced security without sacrificing model accuracy or incurring significant computational cost. Our results demonstrate SecCNN's effectiveness in securing federated learning against privacy breaches, thereby building trust among participants and advancing secure collaborative learning.
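To make the idea concrete, here is a minimal sketch, assuming PyTorch, of a CNN that embeds an upsampling layer ahead of its first convolution in the spirit of SecCNN. The class name SecCNNSketch, the layer sizes, and the placement of the upsampling layer are illustrative assumptions based only on the abstract, not the authors' exact architecture.

```python
# Minimal sketch (PyTorch assumed) of a CNN with an embedded upsampling
# layer, illustrating the SecCNN idea from the abstract. All architectural
# details here are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class SecCNNSketch(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Upsampling before the first convolution: per the abstract's
        # Rank Analysis argument, this is what makes gradient inversion
        # harder (intuition inferred from the abstract; an assumption).
        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a 1x1 feature map
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.upsample(x)   # e.g. 32x32 input -> 64x64
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: in federated learning, only this model's gradients would leave
# the client, never the raw images.
model = SecCNNSketch()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

In a federated setting, clients would train such a model locally and send only its gradients to the server for aggregation; the embedded upsampling layer is the component the paper argues makes those gradients resistant to inversion.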
Acknowledgement
This work was supported in part by the National Science and Technology Council, Taiwan, under grant NSTC-112-2223-E-002-015, and by the Ministry of Education, Taiwan, under grant MOE 112L9009.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Liu, Y.H., Shen, Y.C., Chen, H.W., Chen, M.S. (2024). Construct a Secure CNN Against Gradient Inversion Attack. In: Yang, D.N., Xie, X., Tseng, V.S., Pei, J., Huang, J.W., Lin, J.C.W. (eds.) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science, vol. 14647. Springer, Singapore. https://doi.org/10.1007/978-981-97-2259-4_19
DOI: https://doi.org/10.1007/978-981-97-2259-4_19
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-2261-7
Online ISBN: 978-981-97-2259-4