Evaluating the Performance of Federated Learning Across Different Training Sample Distributions


Abstract:

This research investigates how the distribution of training samples affects the performance of federated learning. By simulating datasets that are independent and identically distributed (IID) and non-independent and identically distributed (non-IID), and by varying the number of collaborating units, we observe how differences in training sample distribution affect the effectiveness of federated learning. In particular, we examine the special case of non-intersecting class allocation under the non-IID setting. Using deep learning methods with both pretrained and trained-from-scratch models, this study comprehensively analyzes the impact of the number and distribution of units and evaluates the results of joint training in terms of Top-1 and Top-5 accuracy. Experimental results show that the initial weight setting of joint training has a critical impact: random weights lead to unstable model performance, while weights set according to the same criteria yield stable and more accurate results. Model performance also varies with the characteristics of the data distribution: a federated learning model trained on IID samples performs best, followed by the imbalanced non-IID distribution, while non-intersecting class allocation is the least favorable.
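
The abstract compares three ways of distributing training samples across collaborating units. As a minimal sketch of what such partitioning can look like, the Python/NumPy snippet below splits a labeled dataset into IID shards, an imbalanced non-IID split (approximated here with a Dirichlet prior over per-class proportions, a common simulation choice), and non-intersecting class allocations, plus a small FedAvg-style aggregation helper for joint training. The function names, the `alpha` parameter, and the Dirichlet scheme are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def partition_labels(labels, num_clients, scheme="iid", alpha=0.5, seed=0):
    """Split sample indices across clients under three schemes:
    'iid'       -- uniform random split,
    'dirichlet' -- imbalanced non-IID split (Dirichlet prior per class),
    'disjoint'  -- non-intersecting class allocation.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    clients = [[] for _ in range(num_clients)]

    if scheme == "iid":
        # Shuffle all indices, then deal them out evenly.
        idx = rng.permutation(len(labels))
        for c, part in enumerate(np.array_split(idx, num_clients)):
            clients[c] = part.tolist()
    elif scheme == "dirichlet":
        # For each class, draw client proportions from Dir(alpha);
        # small alpha -> highly imbalanced class mixtures per client.
        for k in classes:
            idx_k = rng.permutation(np.flatnonzero(labels == k))
            props = rng.dirichlet(alpha * np.ones(num_clients))
            cuts = (np.cumsum(props)[:-1] * len(idx_k)).astype(int)
            for c, part in enumerate(np.split(idx_k, cuts)):
                clients[c].extend(part.tolist())
    elif scheme == "disjoint":
        # Each client receives a disjoint subset of the classes.
        groups = np.array_split(rng.permutation(classes), num_clients)
        for c, group in enumerate(groups):
            clients[c] = np.flatnonzero(np.isin(labels, group)).tolist()
    return clients

def fedavg(client_weights, client_sizes):
    """Size-weighted average of client parameter arrays (FedAvg-style)."""
    sizes = np.asarray(client_sizes, dtype=float)
    mix = sizes / sizes.sum()
    return sum(m * np.asarray(w) for m, w in zip(mix, client_weights))

# Example: partition 50,000 stand-in labels (10 classes) across 5 units.
labels = np.random.default_rng(1).integers(0, 10, size=50_000)
for scheme in ("iid", "dirichlet", "disjoint"):
    parts = partition_labels(labels, num_clients=5, scheme=scheme)
    print(scheme, [len(p) for p in parts])
```

In such a setup, starting every client from one common set of initial weights (rather than per-client random initialization) corresponds to the "weights set according to the same criteria" condition that the abstract reports as yielding stable and more accurate results.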
Date of Conference: 03-05 January 2024
Date Added to IEEE Xplore: 12 February 2024
Conference Location: Kuala Lumpur, Malaysia
