Abstract:
The paper proposes a novel Federated Learning (FL) algorithm that uses signed gradients as feedback to reduce communication overhead. The multi-task nature of the algorithm provides each device with a customized neural network upon completion. To improve performance, a weighted average loss across devices is proposed that accounts for the similarity between their data distributions. A Probably Approximately Correct (PAC) bound on the true loss in terms of the proposed empirical loss is derived. The bound is expressed in terms of (i) the Rademacher complexity, (ii) the discrepancy, and (iii) a penalty term. A distributed algorithm is proposed to compute the discrepancy as well as the fine-tuned neural network at each node. Experiments show that the proposed method outperforms existing algorithms such as FedSGD, DITTO, FedAvg, and locally trained neural networks, with good generalization on various datasets.
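The abstract does not specify the update rule. As a rough illustration of sign-based gradient feedback of the kind described, the sketch below (hypothetical function names, synthetic data, majority-vote aggregation in the style of signSGD) shows clients sending only the sign of their local gradients to the server; it is an assumption-laden sketch, not the paper's algorithm.

```python
# Hypothetical sketch of sign-compressed gradient feedback in FL.
# Not the paper's method: function names, the majority-vote aggregation,
# and the step size are illustrative assumptions only.
import numpy as np

def local_signed_gradient(w, X, y):
    """Least-squares gradient at w, compressed to its sign (+/-1)."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return np.sign(grad)  # 1 bit per coordinate instead of a float

def server_step(w, signed_grads, lr=0.01):
    """Aggregate client sign feedback by majority vote, then update."""
    vote = np.sign(np.sum(signed_grads, axis=0))
    return w - lr * vote

rng = np.random.default_rng(0)
w = np.zeros(5)
# Three clients, each holding its own (possibly dissimilar) local data.
clients = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(3)]
for _ in range(100):
    feedback = [local_signed_gradient(w, X, y) for X, y in clients]
    w = server_step(w, feedback)
```

Each round here transmits one sign bit per model coordinate per client, which is the communication saving the abstract attributes to signed-gradient feedback.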
Published in: 2022 National Conference on Communications (NCC)
Date of Conference: 24-27 May 2022
Date Added to IEEE Xplore: 04 July 2022