Abstract
Federated learning is a distributed machine learning paradigm that allows model training without centralizing sensitive data in a single place. However, non-independent and identically distributed (non-IID) data can degrade learning performance in federated learning. Data augmentation schemes have been proposed to address this issue, but they often require sharing clients' original data, which poses privacy risks. To address these challenges, we propose FedDDA, a data-augmentation-based federated learning architecture that uses diffusion models to generate data conforming to the global class distribution, thereby alleviating the non-IID data problem. In FedDDA, a diffusion model is trained through federated learning and then used for data augmentation, reducing the degree of data heterogeneity without disclosing clients' original data. Our experiments on non-IID settings with various configurations show that FedDDA significantly outperforms FedAvg, with up to 43.04% improvement on the CIFAR-10 dataset and up to 20.05% improvement on the Fashion-MNIST dataset. Additionally, we find that even relatively low-quality generated samples, as long as they conform to the global class distribution, still improve federated learning performance considerably.
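The two ingredients the abstract describes can be sketched in a few lines: FedAvg-style weighted parameter averaging, and a per-client augmentation plan that decides how many samples of each class the (federated) diffusion model should generate so that the client's augmented dataset matches the global class distribution. This is a minimal illustrative sketch, not the paper's implementation; the function names (`fedavg`, `augmentation_plan`) and the rounding scheme are our own assumptions.

```python
import numpy as np

def fedavg(weights_list, sizes):
    """FedAvg aggregation: average each parameter tensor across clients,
    weighted by each client's local dataset size."""
    total = sum(sizes)
    return [sum(w[i] * (n / total) for w, n in zip(weights_list, sizes))
            for i in range(len(weights_list[0]))]

def augmentation_plan(local_counts, global_dist, target_size):
    """Per-class number of samples a client should generate with the
    diffusion model so its augmented dataset (of roughly `target_size`
    samples) follows the global class distribution `global_dist`.
    Classes the client already over-represents need no generation."""
    target = np.round(np.array(global_dist) * target_size).astype(int)
    return np.maximum(target - np.array(local_counts), 0)
```

For example, a client holding 90 samples of class 0 and 10 of class 1, targeting a balanced 200-sample dataset, would generate 10 class-0 and 90 class-1 samples; only these synthetic samples (never the raw data) augment the local training set.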
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Zhao, Z., Yang, F., Liang, G. (2024). Federated Learning Based on Diffusion Model to Cope with Non-IID Data. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14433. Springer, Singapore. https://doi.org/10.1007/978-981-99-8546-3_18
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8545-6
Online ISBN: 978-981-99-8546-3