
Reducing communication overhead in distributed learning by an order of magnitude (almost)


Abstract:

Large-scale distributed learning plays an ever-increasing role in modern computing. However, whether using a compute cluster with thousands of nodes or a single multi-GPU machine, the most significant bottleneck is communication. In this work, we explore the effects of applying quantization and encoding to the parameters of distributed models. We show that, for a neural network, this can be done without slowing down convergence or hurting the generalization of the model. In fact, in our experiments we were able to reduce the communication overhead by nearly an order of magnitude while actually improving the generalization accuracy.
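
The abstract describes quantizing and encoding model parameters before they are communicated between workers. As a rough illustration of the general idea (not the paper's actual scheme, whose bit width and encoding are not specified here), the following sketch uniformly quantizes a float32 parameter array to 8-bit codes and reconstructs an approximation on the receiving side; the function names and parameters are illustrative assumptions.

```python
import numpy as np


def quantize_uniform(params, num_bits=8):
    """Uniformly quantize a float32 array to unsigned integer codes.

    Returns the codes plus the (offset, scale) needed to decode them.
    Sketch only: the paper pairs quantization with an encoding step
    whose exact design is not reproduced here.
    """
    lo, hi = params.min(), params.max()
    levels = (1 << num_bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    # num_bits <= 8 is assumed so the codes fit in uint8.
    codes = np.round((params - lo) / scale).astype(np.uint8)
    return codes, lo, scale


def dequantize_uniform(codes, lo, scale):
    """Reconstruct approximate float32 parameters from the codes."""
    return codes.astype(np.float32) * scale + lo


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.05, size=1_000_000).astype(np.float32)

    codes, lo, scale = quantize_uniform(weights, num_bits=8)
    restored = dequantize_uniform(codes, lo, scale)

    # 1 byte per value instead of 4: a 4x reduction before any entropy
    # coding of the codes, which is how larger savings could be reached.
    print("bytes before:", weights.nbytes, "bytes after:", codes.nbytes)
    print("max abs error:", np.abs(weights - restored).max())
```

Quantization alone gives a fixed 4x saving at 8 bits; the near order-of-magnitude reduction reported in the abstract additionally relies on encoding the quantized values, which this sketch does not attempt.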
Date of Conference: 19-24 April 2015
Date Added to IEEE Xplore: 06 August 2015
Electronic ISBN: 978-1-4673-6997-8

Conference Location: South Brisbane, QLD, Australia
