
Personalisation of Federated Learning Models Through Knowledge Distillation on Decentralised Data


Abstract:

A prevalent issue in the field of Machine Learning (ML) remains the extensive time required to train and utilise intricate Machine and Deep Learning (DL) models and to optimise them for the various applications that require them. This problem is even more pronounced in the federated domain, where optimising a model for each individual node is a difficult challenge. Towards solving this issue, numerous techniques have emerged that aim to reduce training time and cost while maintaining performance. One proposed solution is Knowledge Distillation (KD), which transfers knowledge from a pre-trained complex model to a simpler untrained one. This approach facilitates the use of smaller models with reduced computational demands and allows the model to be personalised with additional local knowledge, enhancing the performance of existing models. The present study delves into the fundamental principles of KD and applies the technique to a selection of ML models, investigating its efficacy in both the local and the federated domain. Subsequently, a comparative analysis of the results between the original and optimised, local and federated models is presented. The analysis indicates promising results in personalising and optimising the employed models with KD.
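
To make the distillation idea concrete, the sketch below shows a standard knowledge-distillation training step in PyTorch: a frozen teacher's temperature-softened outputs supervise a smaller student alongside the hard labels. This is a minimal illustration of the general technique, not the authors' implementation; the architectures, the temperature T, the loss weighting alpha, and the random batch are illustrative assumptions.

# Minimal knowledge-distillation sketch (illustrative, not the paper's setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

# In practice the teacher would be pre-trained; here it is only a placeholder.
teacher = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))  # complex model
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))    # smaller, untrained model

T, alpha = 4.0, 0.7  # temperature and soft/hard loss weighting (hypothetical values)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(x, y):
    """One training step: combine soft teacher targets with hard labels."""
    with torch.no_grad():
        teacher_logits = teacher(x)                      # teacher stays frozen
    student_logits = student(x)
    # KL divergence between temperature-softened distributions (soft targets)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard_loss = F.cross_entropy(student_logits, y)       # standard supervised loss
    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage on a random batch, e.g. one local client's data in a federated setting.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(distillation_step(x, y))

In a federated setting, a step of this kind could be run locally on each node's private data, which is how distillation can personalise a shared model without centralising the data.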
Date of Conference: 04-08 December 2023
Date Added to IEEE Xplore: 21 March 2024
Conference Location: Kuala Lumpur, Malaysia

