
AdaGL: Adaptive Learning for Agile Distributed Training of Gigantic GNNs


Abstract:

Distributed GNN training on contemporary massive and densely connected graphs requires information aggregation from all neighboring nodes, which leads to an explosion of inter-server communication. This paper proposes AdaGL, a highly scalable end-to-end framework for rapid distributed GNN training. AdaGL's novelty lies in its adaptive-learning-based graph-allocation engine and its use of multi-resolution coarse representations of dense graphs. As a result, AdaGL achieves an unprecedented level of balanced server computation while minimizing communication overhead. Extensive proof-of-concept evaluations on billion-scale graphs show that AdaGL attains ∼30−40% faster convergence compared with prior art.
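To make the two ideas in the abstract concrete, below is a minimal, hypothetical sketch of (a) one level of graph coarsening to obtain a smaller proxy graph, and (b) a greedy balanced partitioner that assigns nodes to servers while trading off edge cut against load balance. This is an illustration of the general technique only, not AdaGL's actual algorithm; all function names and the load-penalty heuristic are assumptions.

```python
# Hypothetical illustration of coarsening + balanced partitioning for
# distributed GNN training. NOT the paper's actual AdaGL algorithm.
from collections import defaultdict

def coarsen(edges):
    """One level of heavy-edge-style coarsening: greedily merge unmatched
    endpoint pairs into super-nodes, yielding a smaller proxy graph."""
    matched = {}
    for u, v in edges:
        if u not in matched and v not in matched and u != v:
            matched[u] = u  # u becomes the super-node representative
            matched[v] = u
    nodes = {x for e in edges for x in e}
    mapping = {n: matched.get(n, n) for n in nodes}
    coarse = {(min(mapping[u], mapping[v]), max(mapping[u], mapping[v]))
              for u, v in edges if mapping[u] != mapping[v]}
    return sorted(coarse), mapping

def greedy_partition(edges, k):
    """Assign each node to the server with the most already-placed
    neighbors, minus a load penalty that keeps servers balanced."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    part, load = {}, [0] * k
    for n in sorted(nbrs, key=lambda n: -len(nbrs[n])):  # high-degree first
        best = max(range(k),
                   key=lambda s: sum(part.get(m) == s for m in nbrs[n])
                                 - load[s] / (len(part) / k + 1))
        part[n] = best
        load[best] += 1
    return part

# Two triangles joined by one bridge edge; partition across 2 servers.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
coarse_edges, mapping = coarsen(edges)
assignment = greedy_partition(edges, 2)
print(assignment)
```

On this toy graph the partitioner tends to keep each triangle on one server, cutting only the bridge edge, which is the kind of communication-minimizing placement the abstract describes at billion-node scale.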
Date of Conference: 09-13 July 2023
Date Added to IEEE Xplore: 15 September 2023
Conference Location: San Francisco, CA, USA
