Abstract:
Federated learning (FL) enables effective model training across distributed devices while protecting data privacy. However, high communication costs challenge practical FL, particularly when local devices have limited computing capability. To address this, we propose a novel FL strategy in device-to-device (D2D) networks with model pruning. Dynamic clustering via the k-means algorithm enhances information exchange efficiency, while model pruning reduces local model complexity and training latency. We derive a convergence bound of $\mathcal{O}(t^{-1})$ and optimize the pruning ratio and bandwidth allocation using KKT conditions. Simulation results reveal that, without sacrificing test accuracy, the proposed FL algorithm requires only 88% of the training time of a system employing equal pruning, and 80% of the training time of a system without pruning.
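To make the pruning step concrete, the following is a minimal sketch of magnitude-based model pruning with a per-device pruning ratio. This is illustrative only: the paper optimizes the pruning ratios and bandwidth allocation jointly via KKT conditions, whereas this sketch simply applies a given ratio to a weight tensor; the function name and interface are hypothetical.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, pruning_ratio: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    Illustrative sketch: applies a fixed pruning_ratio, whereas the
    paper's scheme derives the ratio per device from KKT conditions.
    """
    flat = np.abs(weights).ravel()
    k = int(pruning_ratio * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest magnitude; keep strictly larger entries.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune half of a 2x2 weight matrix.
w = np.array([[0.5, -0.1], [0.02, 1.2]])
pruned = prune_by_magnitude(w, 0.5)  # the two smallest-magnitude entries become 0
```

A sparser local model of this kind shrinks both the computation per local update and the payload exchanged over the D2D links, which is the mechanism behind the training-time savings reported above.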
Published in: 2024 IEEE 25th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)
Date of Conference: 10-13 September 2024
Date Added to IEEE Xplore: 07 October 2024
Electronic ISSN: 1948-3252