Optica Publishing Group

Fast and scalable all-optical network architecture for distributed deep learning


Abstract

With the ever-increasing size of training models and datasets, network communication has emerged as a major bottleneck in distributed deep learning training. To address this challenge, we propose an optical distributed deep learning (ODDL) architecture. ODDL utilizes a fast yet scalable all-optical network architecture to accelerate distributed training. One of the key features of the architecture is its flow-based transmit scheduling with fast reconfiguration, which allows ODDL to dynamically allocate a dedicated optical path to each traffic stream, resulting in low network latency and high network utilization. Additionally, ODDL provides physically isolated and tailored network resources for training tasks by reconfiguring the optical switch using liquid-crystal-on-silicon wavelength-selective switch (LCoS-WSS) technology. The ODDL topology also uses tunable transceivers to adapt to time-varying traffic patterns. To achieve accurate and fine-grained scheduling of optical circuits, we propose an efficient distributed control scheme that incurs minimal delay overhead. Our evaluation on real-world traces showcases ODDL's strong performance: with 1024 nodes and 100 Gbps bandwidth, ODDL accelerates VGG19 training by 1.6× and 1.7× compared to conventional fat-tree electrical networks and photonic SiP-Ring architectures, respectively. We further build a four-node testbed, and our experiments show that ODDL achieves training time comparable to that of an ideal electrical switching network.
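The flow-based scheduling idea in the abstract — dynamically allocating a dedicated optical path to each traffic stream — can be caricatured as a greedy wavelength-assignment loop. The sketch below is purely illustrative: the function name, the single-hop link model, and the defer-until-reconfiguration policy are assumptions for exposition, not the paper's actual control scheme.

```python
# Toy sketch of per-flow optical circuit allocation (illustrative only).
# Each flow between a (src, dst) node pair gets a dedicated wavelength on
# its directed link if one is free; otherwise it is deferred to the next
# reconfiguration cycle.
from collections import defaultdict

def schedule_flows(flows, wavelengths_per_link):
    """Greedily assign wavelengths to flows.

    flows: list of (src, dst) pairs (one entry per flow; duplicates allowed)
    wavelengths_per_link: wavelengths available on each directed link
    Returns (assignments, deferred): flow index -> (link, wavelength),
    plus the indices of flows that must wait for reconfiguration.
    """
    used = defaultdict(set)          # directed link -> wavelengths in use
    assignments, deferred = {}, []
    for i, (src, dst) in enumerate(flows):
        link = (src, dst)            # single-hop model: one link per pair
        free = set(range(wavelengths_per_link)) - used[link]
        if free:
            w = min(free)            # lowest free wavelength first
            used[link].add(w)
            assignments[i] = (link, w)
        else:
            deferred.append(i)       # no capacity: wait for next cycle
    return assignments, deferred

assignments, deferred = schedule_flows(
    [(0, 1), (0, 1), (2, 3), (0, 1)], wavelengths_per_link=2)
print(assignments)  # → {0: ((0, 1), 0), 1: ((0, 1), 1), 2: ((2, 3), 0)}
print(deferred)     # → [3]
```

In a real LCoS-WSS fabric the "wavelength" would map to a switch port/passband configuration and path setup would span multiple hops; the point here is only the dedicated-circuit-per-flow discipline.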

© 2024 Optica Publishing Group



Figures (16)


Tables (4)


Equations (1)

