Optica Publishing Group

Accelerating model synchronization for distributed machine learning in an optical wide area network


Abstract

Geo-distributed machine learning (Geo-DML) adopts a hierarchical training architecture: local model synchronization within each data center and global model synchronization (GMS) across data centers. However, scarce and heterogeneous wide area network (WAN) bandwidth can become the bottleneck of training performance. Intelligent optical devices (i.e., reconfigurable optical add-drop multiplexers) make the modern WAN topology reconfigurable, a capability that most approaches to speeding up Geo-DML training have ignored. In this paper, we therefore study scheduling algorithms that accelerate model synchronization for Geo-DML training while taking the reconfigurable optical WAN topology into account. Specifically, we build an aggregation tree for each Geo-DML training job, which reduces the communication overhead of model synchronization across the WAN, and propose two efficient algorithms to accelerate GMS for Geo-DML: MOptree, a model-based algorithm for single-job scheduling, and MMOptree for multi-job scheduling, both of which reconfigure the WAN topology and the trees by reassigning wavelengths on each fiber. Based on the current WAN topology and job information, mathematical models guide the topology reconstruction and the wavelength and bandwidth allocation for each tree edge. Simulation results show that MOptree completes the GMS stage on average up to 56.16% faster than a traditional tree without optical-layer reconfiguration, and MMOptree achieves up to 54.6% lower weighted GMS time.
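To make the idea concrete, the sketch below illustrates why optical-layer reconfiguration can shorten GMS: synchronization time over an aggregation tree is governed by the slowest edge, so reassigning spare wavelengths to the bottleneck edge first directly reduces that time. This is a hypothetical, simplified illustration, not the paper's MOptree/MMOptree formulation; the greedy rule, the 10 Gb/s per-wavelength capacity, and all names are invented for exposition.

```python
# Hypothetical sketch (not the paper's algorithm): GMS time on an aggregation
# tree is set by the bottleneck edge, so spare wavelengths are greedily
# assigned to the currently slowest edge first.

def gms_time(model_size_gb, edge_waves, wave_gbps=10):
    """Time (s) to push the model over the slowest tree edge."""
    bottleneck = min(edge_waves.values()) * wave_gbps  # Gb/s
    return model_size_gb * 8 / bottleneck

def greedy_assign(edge_waves, fiber_free, edge_fiber):
    """Give spare wavelengths on each fiber to the current bottleneck edge."""
    improved = True
    while improved:
        improved = False
        # Try to widen the globally slowest edge that still has a free
        # wavelength available on its fiber.
        for e in sorted(edge_waves, key=edge_waves.get):
            f = edge_fiber[e]
            if fiber_free.get(f, 0) > 0:
                edge_waves[e] += 1
                fiber_free[f] -= 1
                improved = True
                break
    return edge_waves

edges = {("A", "B"): 1, ("B", "C"): 2}     # wavelengths per tree edge
free = {"f1": 2, "f2": 0}                  # spare wavelengths per fiber
on_fiber = {("A", "B"): "f1", ("B", "C"): "f2"}

before = gms_time(4, dict(edges))                          # 4 GB model
after = gms_time(4, greedy_assign(edges, free, on_fiber))
print(before, after)  # reconfiguration shortens the bottleneck transfer
```

In this toy instance, edge A-B starts as the bottleneck (one wavelength) and absorbs the two spare wavelengths on its fiber, so the bottleneck shifts to B-C and the synchronization time halves. The paper's actual algorithms solve this jointly with topology reconstruction via mathematical models rather than a local greedy rule.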

© 2022 Optica Publishing Group

More Like This
Fast and scalable all-optical network architecture for distributed deep learning

Wenzhe Li, Guojun Yuan, Zhan Wang, Guangming Tan, Peiheng Zhang, and George N. Rouskas
J. Opt. Commun. Netw. 16(3) 342-357 (2024)

Flexible silicon photonic architecture for accelerating distributed deep learning

Zhenguo Wu, Liang Yuan Dai, Yuyang Wang, Songli Wang, and Keren Bergman
J. Opt. Commun. Netw. 16(2) A157-A168 (2024)

Topology configuration scheme for accelerating coflows in a hyper-FleX-LION

Hao Yang and Zuqing Zhu
J. Opt. Commun. Netw. 14(10) 805-814 (2022)


Figures (13)
Tables (6)
Equations (29)
