ABSTRACT
Federated learning (FL) is an increasingly popular form of distributed learning across devices such as sensors and smartphones. To amortize the effort and cost of setting up FL training in real-world systems, multiple machine learning tasks may in practice be trained during a single FL execution. However, because the tasks vary in complexity, naïve methods of allocating resource-constrained devices to each task can lead to highly variable performance across the tasks. We instead propose an α-fair allocation algorithm that dynamically assigns tasks to users during multi-model FL training, based on the prevailing loss levels of the tasks.
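To make the idea concrete, here is a minimal sketch of a loss-driven, α-fair allocation rule: tasks with higher prevailing loss receive proportionally more clients, with α controlling how aggressively the allocation favors the worst-off task. The function names and the specific weighting rule (weights proportional to lossᵅ) are illustrative assumptions, not the algorithm from the poster itself.

```python
import random


def alpha_fair_weights(losses, alpha):
    """Per-task allocation weights under an alpha-fair style rule.

    alpha = 0 ignores losses (uniform allocation); larger alpha
    shifts progressively more clients toward high-loss tasks.
    """
    raw = [loss ** alpha for loss in losses]
    total = sum(raw)
    return [r / total for r in raw]


def allocate_clients(client_ids, losses, alpha, rng=random):
    """Randomly assign each client to one task, sampled in
    proportion to the alpha-fair weights of the current losses."""
    weights = alpha_fair_weights(losses, alpha)
    tasks = range(len(losses))
    return {c: rng.choices(tasks, weights=weights)[0] for c in client_ids}
```

Under this rule, an allocation with α = 0 reduces to uniform random assignment, while larger α concentrates devices on whichever task currently lags, which is the qualitative behavior the abstract describes.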
Poster Abstract: Fair Training of Multiple Federated Learning Models on Resource Constrained Network Devices