
Fast-Convergent Wireless Federated Learning: A Voting-Based TopK Model Compression Approach


Abstract:

Federated learning (FL) has been widely adopted for training machine learning models while preserving data privacy. In particular, wireless FL enables multiple clients to collaboratively train models by sharing model updates over wireless channels without exposing raw data. The state-of-the-art wireless FL advocates efficient aggregation of model updates from multiple clients by over-the-air computing. However, a significant deficiency of over-the-air aggregation lies in the infeasibility of TopK model compression, since top model updates cannot be aggregated directly unless they are first aligned by their indices. Given that TopK compression can greatly accelerate FL, we design a novel wireless FL algorithm with voting-based TopK, namely WFL-VTopK, so that top model updates can be aggregated directly by over-the-air computing. Specifically, WFL-VTopK proceeds in two phases. In Phase 1, clients vote for their top model updates, based on which the global top model updates can be efficiently identified. In Phase 2, clients formally upload the global top model updates so that they can be directly aggregated by over-the-air computing. Furthermore, the convergence of WFL-VTopK is theoretically guaranteed under non-convex loss. Based on this convergence analysis, we optimize model utility subject to training time and energy constraints. To validate the superiority of WFL-VTopK, we conduct extensive experiments with real datasets under wireless communication. The experimental results demonstrate that WFL-VTopK can effectively aggregate models while communicating only 1%-2% of the top model updates, and hence significantly outperforms state-of-the-art baselines. By substantially reducing wireless communication traffic, our work paves the way toward training large models in wireless FL.
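
The full paper is behind the subscription wall, and the abstract leaves the vote tally and index selection unspecified. The sketch below is therefore only a minimal, non-authoritative illustration of the two-phase idea described above: each client nominates its local top-k coordinates, the server keeps the k most-voted coordinates as the global set, and clients then transmit only those aligned coordinates. All function names are hypothetical, the count-based tally is our assumption, and the plain summation merely stands in for analog over-the-air aggregation.

```python
import numpy as np

def local_topk_indices(update, k):
    # Indices of the k largest-magnitude coordinates of one client's update.
    return np.argpartition(np.abs(update), -k)[-k:]

def vote_global_topk(client_updates, k):
    # Phase 1 (assumed tally): each client votes for its local top-k indices;
    # the server keeps the k indices receiving the most votes as the global set.
    votes = np.zeros(client_updates[0].size, dtype=int)
    for u in client_updates:
        votes[local_topk_indices(u, k)] += 1
    return np.argpartition(votes, -k)[-k:]

def aggregate_on_global_topk(client_updates, global_idx):
    # Phase 2: all clients transmit values at the same agreed indices, so the
    # signals are index-aligned; the sum below stands in for the superposition
    # performed by over-the-air computing, and the server averages the result.
    aggregated = np.zeros(client_updates[0].size)
    for u in client_updates:
        aggregated[global_idx] += u[global_idx]
    return aggregated / len(client_updates)

# Toy usage: 10 clients, 1,000-dimensional updates, k = 20 (2% of the
# coordinates, matching the 1%-2% traffic figure reported in the abstract).
rng = np.random.default_rng(0)
updates = [rng.standard_normal(1000) for _ in range(10)]
global_idx = vote_global_topk(updates, k=20)
global_update = aggregate_on_global_topk(updates, global_idx)
```

Because every client transmits on the same coordinate set, the index-alignment obstacle to over-the-air aggregation noted in the abstract disappears; only the values at the k agreed positions need to be sent.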
Published in: IEEE Journal on Selected Areas in Communications (Volume: 42, Issue: 11, November 2024)
Page(s): 3048 - 3063
Date of Publication: 22 July 2024

