Abstract
Federated learning (FL) is an emerging machine learning paradigm that allows many clients to train a global model collaboratively while keeping their respective data private. The standard approach to federated learning averages the parameters of the client models, which works only when there are few clients whose models share the same architecture and initialization. Moreover, this strategy carries a heavy communication burden, as many communication rounds are needed to reach acceptable performance. In this work, we propose a novel model aggregation method for federated learning (FedPS), which generates pseudo samples from each client model and uses them to train the global model on the server. The proposed method supports the aggregation of more than twenty heterogeneous models simultaneously, requires only a single round of communication, and achieves better aggregation performance than state-of-the-art federated learning methods.
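The aggregation mechanism described in the abstract can be illustrated with a minimal PyTorch sketch. This is a hypothetical illustration under stated assumptions, not the paper's implementation: the function names (synthesize_pseudo_samples, train_global_model), the model-inversion synthesis objective, and the distillation loss are illustrative stand-ins. In the sketch, each client inverts its own frozen model to synthesize class-conditional pseudo samples, ships them to the server once, and the server distills the pooled pseudo sets into the global model.

import torch
import torch.nn.functional as F

# Illustrative sketch only: the actual FedPS synthesis objective and
# server-side training procedure may differ from what is shown here.

def synthesize_pseudo_samples(client_model, num_samples=64, num_classes=10,
                              steps=200, lr=0.1, shape=(1, 28, 28)):
    # Optimize random noise so the frozen client model assigns it
    # confidently to randomly drawn target classes (model inversion).
    client_model.eval()
    for p in client_model.parameters():
        p.requires_grad_(False)  # only the inputs are optimized
    x = torch.randn(num_samples, *shape, requires_grad=True)
    targets = torch.randint(0, num_classes, (num_samples,))
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(client_model(x), targets)
        loss.backward()
        opt.step()
    with torch.no_grad():  # soft labels carry the client's knowledge
        soft_labels = F.softmax(client_model(x), dim=1)
    return x.detach(), soft_labels

def train_global_model(global_model, pseudo_sets, epochs=5, lr=1e-3):
    # Distill the pooled pseudo samples from all clients into the
    # global model; only one upload per client is needed.
    data = torch.cat([x for x, _ in pseudo_sets])
    labels = torch.cat([y for _, y in pseudo_sets])
    opt = torch.optim.Adam(global_model.parameters(), lr=lr)
    global_model.train()
    for _ in range(epochs):
        opt.zero_grad()
        log_probs = F.log_softmax(global_model(data), dim=1)
        loss = F.kl_div(log_probs, labels, reduction="batchmean")
        loss.backward()
        opt.step()
    return global_model

Because the server sees only synthesized inputs and soft labels, the client architectures never need to match the global model or each other, which is what makes aggregating heterogeneous models in a single communication round possible.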