Abstract
Federated learning (FL) research attempts to alleviate the increasing difficulty of training machine learning models when the training data is generated in a massively distributed way. The key idea behind these methods is to move the training to the locations where the data is generated and to periodically collect and redistribute the model updates. We present our approach for transforming the general FL training algorithm into a peer-to-peer-like process. Our experiments on baseline image classification datasets show that omitting central coordination in FL is feasible.
This work was partially supported by the project “Application Domain Specific Highly Reliable IT Solutions” financed by the National Research, Development and Innovation Fund of Hungary (TKP2020-NKA-06).
Notes
- 1.
A simple heuristic we used here is to extend the buffer size if the exponential moving average of the model's performance on the datasets of newly visited nodes shows no improvement after \(\theta \) steps, where \(\theta \) is a hyper-parameter. A code sketch of this heuristic follows these notes.
- 2.
Following the Keras reference model for MNIST (not available anymore):
input: 784-dimensional vector (\(=28 \times 28\)) \(\rightarrow \) dropout \(\rightarrow \) dense with 128 units and sigmoid activation \(\rightarrow \) dropout \(\rightarrow \) dense with 10 units \(\rightarrow \) softmax. A Keras reconstruction of this model follows these notes.
- 3.
Following the Keras reference model for CIFAR-10 (not available anymore):
input: \(32\times 32 \times 3\) image \(\rightarrow \)
2D convolution with 32 \(3\times 3\) filters, same padding and ReLU \(\rightarrow \) 2D convolution with 32 \(3\times 3\) filters, same padding and ReLU \(\rightarrow \) \(2\times 2\) max pooling \(\rightarrow \) dropout \(\rightarrow \)
2D convolution with 64 \(3\times 3\) filters, same padding and ReLU \(\rightarrow \) 2D convolution with 64 \(3\times 3\) filters, same padding and ReLU \(\rightarrow \) \(2\times 2\) max pooling \(\rightarrow \) dropout \(\rightarrow \)
dense with 512 units and ReLU \(\rightarrow \) dropout \(\rightarrow \) dense with 10 units \(\rightarrow \) softmax. A Keras reconstruction of this model follows these notes.
- 4.
For the buffer size extension we set the improvement threshold to \(\theta =15\), that is, if the accuracy did not improve over the last 15 relocations, we extend the number of models to aggregate.
- 5.
The accuracy of the ensemble of MMs, along with its communication and computation costs, exceeds that of FedAvg. The reason is that, due to the resource intensity of CIFAR-10 training, the number of participating nodes was reduced to 50, which means fewer models were used in FedAvg, while the hyper-parameters (\(K'\) and the maximum \(\sigma \)) of sMM and MM were kept unchanged.
- 6.
Hyper-parameter search was not performed for the \(\gamma \) parameter of FedAvg; \(\gamma = 1/10\) was used following [23]. The main reason was our limited computational resources. However, since the setting of \(\gamma \) also affects sMM and MM (e.g. the buffer size or the ensemble count), the gap between the communication and computation costs might be different under a large-scale hyper-parameter search.
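Code sketch for Notes 1 and 4. The following is a minimal, hypothetical illustration of the buffer-extension heuristic described above; the class and parameter names (BufferExtender, alpha) and the EMA smoothing factor are our own assumptions, only \(\theta \) corresponds to the hyper-parameter named in the notes.

```python
# Hypothetical sketch of the buffer-extension heuristic (Notes 1 and 4).
# Names and the smoothing factor alpha are illustrative, not from the paper.

class BufferExtender:
    """Extend the aggregation buffer when the exponential moving average
    of the model's accuracy on newly visited nodes stops improving."""

    def __init__(self, initial_buffer_size, theta=15, alpha=0.1):
        self.buffer_size = initial_buffer_size
        self.theta = theta            # relocations without improvement before extending
        self.alpha = alpha            # EMA smoothing factor (assumed)
        self.ema = None               # EMA of accuracy on new nodes
        self.best_ema = float("-inf")
        self.stale_steps = 0

    def update(self, accuracy_on_new_node):
        # Update the exponential moving average of the accuracy.
        if self.ema is None:
            self.ema = accuracy_on_new_node
        else:
            self.ema = self.alpha * accuracy_on_new_node + (1 - self.alpha) * self.ema

        # Count relocations since the last improvement of the EMA.
        if self.ema > self.best_ema:
            self.best_ema = self.ema
            self.stale_steps = 0
        else:
            self.stale_steps += 1

        # No improvement for theta relocations: aggregate one more model.
        if self.stale_steps >= self.theta:
            self.buffer_size += 1
            self.stale_steps = 0
        return self.buffer_size
```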
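Code sketch for Note 2. Since the original Keras reference model is no longer available, the following reconstructs only the layer sequence stated in the note; the dropout rates are placeholders chosen by us.

```python
# Reconstruction of the MNIST model described in Note 2.
# Dropout rates are placeholders; only the layer sequence follows the note.
from tensorflow import keras
from tensorflow.keras import layers

mnist_model = keras.Sequential([
    keras.Input(shape=(784,)),                    # 28 x 28 images, flattened
    layers.Dropout(0.2),
    layers.Dense(128, activation="sigmoid"),
    layers.Dropout(0.2),
    layers.Dense(10, activation="softmax"),
])
```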
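Code sketch for Note 3. As above, this reconstructs only the stated layer sequence; the dropout rates and the flatten layer before the dense head are our assumptions.

```python
# Reconstruction of the CIFAR-10 model described in Note 3.
# Dropout rates are placeholders; only the layer sequence follows the note.
from tensorflow import keras
from tensorflow.keras import layers

cifar10_model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),                             # implied before the dense head
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])
```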
References
Aji, A.F., Heafield, K.: Sparse communication for distributed gradient descent. arXiv preprint arXiv:1704.05021 (2017)
Alistarh, D., Grubic, D., Li, J., Tomioka, R., Vojnovic, M.: QSGD: communication-efficient SGD via gradient quantization and encoding. In: Advances in Neural Information Processing Systems, pp. 1709–1720 (2017)
Bellet, A., Guerraoui, R., Taziki, M., Tommasi, M.: Personalized and private peer-to-peer machine learning. arXiv preprint arXiv:1705.08435 (2017)
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J., et al.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
Chen, J., Pan, X., Monga, R., Bengio, S., Jozefowicz, R.: Revisiting distributed synchronous SGD. arXiv preprint arXiv:1604.00981 (2016)
Chen, Y., Sun, X., Jin, Y.: Communication-efficient federated deep learning with layerwise asynchronous model update and temporally weighted aggregation. IEEE Trans. Neural Networks Learn. Syst. (2019)
Dean, J., et al.: Large scale distributed deep networks. In: Advances in Neural Information Processing Systems, pp. 1223–1231 (2012)
Dimakis, A.G., Kar, S., Moura, J.M., Rabbat, M.G., Scaglione, A.: Gossip algorithms for distributed signal processing. Proc. IEEE 98(11), 1847–1864 (2010)
Dryden, N., Moon, T., Jacobs, S.A., Van Essen, B.: Communication quantization for data-parallel training of deep neural networks. In: 2016 2nd Workshop on Machine Learning in HPC Environments (MLHPC), pp. 1–8. IEEE (2016)
Du, W., Zeng, X., Yan, M., Zhang, M.: Efficient federated learning via variational dropout (2018)
Hegedűs, I., Danner, G., Jelasity, M.: Gossip learning as a decentralized alternative to federated learning. In: Pereira, J., Ricci, L. (eds.) DAIS 2019. LNCS, vol. 11534, pp. 74–90. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-22496-7_5
Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)
Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 (2012)
Hitaj, B., Ateniese, G., Perez-Cruz, F.: Deep models under the GAN: information leakage from collaborative deep learning. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 603–618 (2017)
Hsu, T.M.H., Qi, H., Brown, M.: Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335 (2019)
Jaggi, M., et al.: Communication-efficient distributed dual coordinate ascent. In: Advances in Neural Information Processing Systems, pp. 3068–3076 (2014)
Khaled, A., Mishchenko, K., Richtárik, P.: First analysis of local GD on heterogeneous data (2019)
Kingma, D.P., Salimans, T., Welling, M.: Variational dropout and the local reparameterization trick. In: Advances in Neural Information Processing Systems, pp. 2575–2583 (2015)
Konečnỳ, J., McMahan, H.B., Ramage, D., Richtárik, P.: Federated optimization: Distributed machine learning for on-device intelligence. arXiv preprint arXiv:1610.02527 (2016)
Li, T., Sahu, A.K., Zaheer, M., Sanjabi, M., Talwalkar, A., Smith, V.: Federated optimization in heterogeneous networks (2018)
Li, X., Huang, K., Yang, W., Wang, S., Zhang, Z.: On the convergence of FedAvg on non-IID data. arXiv preprint arXiv:1907.02189 (2019)
Masters, D., Luschi, C.: Revisiting small batch training for deep neural networks. arXiv preprint arXiv:1804.07612 (2018)
McMahan, H.B., Moore, E., Ramage, D., Hampson, S., et al.: Communication-efficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629 (2016)
Nishio, T., Yonetani, R.: Client selection for federated learning with heterogeneous resources in mobile edge. CoRR abs/1804.08333 (2018)
Seide, F., Fu, H., Droppo, J., Li, G., Yu, D.: 1-bit stochastic gradient descent and application to data-parallel distributed training of speech DNNs. In: Interspeech (2014)
Stich, S.U.: Local SGD converges fast and communicates little (2018)
Strom, N.: Scalable distributed DNN training using commodity GPU cloud computing. In: Sixteenth Annual Conference of the International Speech Communication Association (2015)
Vanhaesebrouck, P., Bellet, A., Tommasi, M.: Decentralized collaborative learning of personalized models over networks (2017)
Wang, H., Kaplan, Z., Niu, D., Li, B.: Optimizing federated learning on non-IID data with reinforcement learning. In: IEEE INFOCOM 2020-IEEE Conference on Computer Communications, pp. 1698–1707. IEEE (2020)
Wang, J., Joshi, G.: Cooperative SGD: a unified framework for the design and analysis of communication-efficient SGD algorithms (2018)
Wang, S., et al.: Adaptive federated learning in resource constrained edge computing systems. In: IEEE INFOCOM 2018-IEEE Conference on Computer Communications (2018)
Wei, E., Ozdaglar, A.: On the O(1/k) convergence of asynchronous distributed alternating direction method of multipliers. In: 2013 IEEE Global Conference on Signal and Information Processing, pp. 551–554. IEEE (2013)
Woodworth, B., Wang, J., Smith, A., McMahan, B., Srebro, N.: Graph oracle models, lower bounds, and gaps for parallel stochastic optimization (2018)
Yang, H.H., Liu, Z., Quek, T.Q., Poor, H.V.: Scheduling policies for federated learning in wireless networks. IEEE Trans. Commun. (2019)
Yu, H., Yang, S., Zhu, S.: Parallel restarted SGD with faster convergence and less communication: demystifying why model averaging works for deep learning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 5693–5700 (2019)
Yurochkin, M., Agarwal, M., Ghosh, S., Greenewald, K., Hoang, T.N., Khazaeni, Y.: Bayesian nonparametric federated learning of neural networks. arXiv preprint arXiv:1905.12022 (2019)
Zhao, B., Mopuri, K.R., Bilen, H.: iDLG: improved deep leakage from gradients. arXiv preprint arXiv:2001.02610 (2020)
Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., Chandra, V.: Federated learning with non-IID data. arXiv preprint arXiv:1806.00582 (2018)
Zhou, F., Cong, G.: On the convergence properties of a k-step averaging stochastic gradient descent algorithm for nonconvex optimization. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (2018)
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Kiss, P., Horváth, T. (2021). Migrating Models: A Decentralized View on Federated Learning. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham. https://doi.org/10.1007/978-3-030-93736-2_15
DOI: https://doi.org/10.1007/978-3-030-93736-2_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-93735-5
Online ISBN: 978-3-030-93736-2