
Split Learning Optimized For The Medical Field: Reducing Communication Overhead



Abstract:

Split Learning (SL) is a distributed privacy-preserving learning methodology designed to address the challenges of deploying large-scale deep neural networks on medical devices while safeguarding the privacy of medical data. However, both the forward and backward propagation of the model require communication between the medical devices and high-performance servers, resulting in significant communication overhead and high latency. In this paper, to reduce the communication overhead from the client to the server during forward propagation, we propose an autoencoder layer based on attention mechanisms and triple compression. To reduce the communication overhead from the server to the client during backward propagation, we propose an average loss threshold algorithm to decrease the frequency of client updates. Compared to the original Split Learning algorithm, incorporating the methods proposed in this paper reduced the communication overhead during forward propagation by an average of 93% and during backward propagation by an average of 96%; the total communication overhead decreased by an average of 95%. The model's accuracy loss was between 0% and 1%. In terms of communication compression, compared to the state-of-the-art SL-BSL, the overall communication overhead was reduced by an average of 92%.
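The average-loss-threshold idea above can be sketched in a few lines: the server tracks a running average of recent batch losses and only sends gradients back to the client (triggering a client-side update) when the current loss rises above that average. This is a minimal illustrative sketch, not the paper's exact formulation; the class name, the `margin` parameter, and the specific skip rule are assumptions.

```python
class AvgLossThresholdGate:
    """Sketch of an average-loss-threshold gate for split learning.

    The server calls should_update() once per batch with the current
    loss; a False result means the backward pass to the client is
    skipped, saving server-to-client communication.
    """

    def __init__(self, margin=0.0):
        self.margin = margin  # hypothetical tolerance above the average
        self.total = 0.0
        self.count = 0

    def should_update(self, loss):
        # Always update on the first batch, seeding the running average.
        if self.count == 0:
            self._record(loss)
            return True
        avg = self.total / self.count
        self._record(loss)
        # Skip the client update when the loss has fallen to or below
        # the historical average (plus an optional margin).
        return loss > avg + self.margin

    def _record(self, loss):
        self.total += loss
        self.count += 1
```

For example, with the loss sequence `[1.0, 0.9, 1.2, 0.8]` the gate fires on the first batch, skips the second (loss below the average of 1.0), fires on the third (1.2 above the average of 0.95), and skips the fourth, so only half of the backward transmissions occur.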
Date of Conference: 03-06 December 2024
Date Added to IEEE Xplore: 10 January 2025
Conference Location: Lisbon, Portugal


