Abstract:
In this work, we propose Timed Loop Gears (TLG), a distributed method for enabling fragmented learning on resource-constrained networked IoT edge devices. TLG identifies the atomic operations (gears), such as feed-forward and back-propagation, necessary for training Machine Learning (ML) models. Each gear executes on a Fog Node (FN) for one data point at a time rather than on the whole dataset. Additionally, the networked Edge Devices (EDs) offload the training data to the fog layer using the Message Queuing Telemetry Transport (MQTT) protocol: the participating FNs subscribe to incoming training data and store it by topic, which simplifies data sharing. TLG then enables each FN to transfer its partially learned weights to the next suitable FN for further training. This looping of weights repeats across FNs until training is complete. Through extensive analysis, we observe that, compared to existing distributed ML training approaches with n devices, TLG reduces the probability of disruption due to device failure by a factor of n^2. Implementation results of our fragmented learning method demonstrate that, although TLG increases the memory consumption of the IoT devices by a negligible 0.8%, it reduces CPU usage by almost 90%. The proposed method thus proves beneficial for developing and hosting ML models even on constrained IoT devices, in contrast to existing lightweight ML methods.
Published in: IEEE Transactions on Parallel and Distributed Systems (Volume 34, Issue 1, 01 January 2023)
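The workflow described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the per-sample feed-forward and back-propagation "gears" are shown as plain functions for a logistic unit, the MQTT topic mechanism is replaced by an in-memory dict mapping topic names to published samples, and the node names (`fn1`, `fn2`) and learning parameters are illustrative assumptions.

```python
# Sketch of TLG-style fragmented learning: training is split into atomic
# "gears" (feed-forward, back-propagation) that each run on one data point
# at a time, and the partially learned weights "loop" from one simulated
# Fog Node to the next. A dict of topic -> samples stands in for MQTT
# publish/subscribe (hypothetical stand-in, not the paper's transport).
import math

def feed_forward(w, b, x):
    # Gear 1: per-sample forward pass of a single logistic unit.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def back_propagate(w, b, x, y, lr=0.5):
    # Gear 2: per-sample gradient step on the same data point.
    err = feed_forward(w, b, x) - y
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b - lr * err

def tlg_round(weights, bias, fog_nodes, topics):
    # One loop of the weights across all fog nodes: each node trains
    # only on the samples "published" under its subscribed topic, then
    # hands the partially learned weights to the next node.
    for node in fog_nodes:
        for x, y in topics[node]:
            weights, bias = back_propagate(weights, bias, x, y)
    return weights, bias

# Edge devices publish AND-gate samples, split across two fog nodes.
topics = {
    "fn1": [([0.0, 0.0], 0.0), ([0.0, 1.0], 0.0)],
    "fn2": [([1.0, 0.0], 0.0), ([1.0, 1.0], 1.0)],
}
w, b = [0.0, 0.0], 0.0
for _ in range(500):  # weights loop across FNs until training converges
    w, b = tlg_round(w, b, ["fn1", "fn2"], topics)

print(feed_forward(w, b, [1.0, 1.0]) > 0.5)  # True
print(feed_forward(w, b, [0.0, 1.0]) < 0.5)  # True
```

No node ever holds the full dataset or runs a full-batch pass, which mirrors the abstract's claim of low per-device resource usage; each gear touches exactly one sample before the weights move on.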