
Scalable Reinforcement Learning for Dynamic Overlay Selection in SD-WANs


Abstract:

SD-WAN promises distributed enterprises the ability to satisfy their dynamic communication requirements over the public Internet at a substantially lower cost and with better performance than dedicated lines. It interconnects users or applications in remote sites by exploiting all available transport connections (e.g., Internet, MPLS, …), but how to combine them to enhance communication performance is still an open challenge. Previous work investigated the use of Reinforcement Learning in the SD-WAN control logic to solve this problem, but it considered only simple scenarios consisting of two sites connected by two paths. In this paper we take a step forward and ask whether such a promising approach can scale to WANs spanning multiple distributed sites connected through several paths. We first conduct an analytical study of the complexity of Reinforcement Learning that considers the growth of the action and state spaces as the number of sites and paths increases. We then propose a solution based on Multi-Agent Reinforcement Learning (MARL) that reduces the overall complexity by deploying an agent at each site. Finally, we demonstrate the effectiveness of our solution with real experiments in an emulated environment, showing that it is not only viable but also reduces network policy violations, latency, and transit costs in a multi-site scenario.
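As a rough illustration of the scalability argument in the abstract (not the paper's exact formulation), suppose a centralized agent must pick one of P candidate paths for every ordered pair of N sites: its action space grows as P^(N(N-1)), whereas a per-site agent only faces P^(N-1) combinations. The following minimal Python sketch compares the two counts; the function names and the assumption that path choices are independent per site pair are ours, introduced purely for illustration.

# Back-of-the-envelope comparison of action-space sizes for a single
# centralized RL agent versus one MARL agent per site. Illustrative
# assumption: each decision picks one of `paths` overlay paths toward
# each other site; the paper's exact state/action model may differ.

def centralized_action_space(sites: int, paths: int) -> int:
    # One joint decision: a path choice for every ordered site pair.
    return paths ** (sites * (sites - 1))

def per_agent_action_space(sites: int, paths: int) -> int:
    # Each site's agent chooses paths only toward the other sites.
    return paths ** (sites - 1)

if __name__ == "__main__":
    p = 3  # hypothetical number of candidate paths per site pair
    for n in (2, 4, 8):
        print(f"{n} sites, {p} paths: "
              f"centralized={centralized_action_space(n, p):,} vs. "
              f"per-agent={per_agent_action_space(n, p):,}")

For example, with 8 sites and 3 paths the centralized joint action space already exceeds 10^26 combinations, while each per-site agent handles only 3^7 = 2,187, which is the intuition behind using one agent per site.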
Date of Conference: 12-15 June 2023
Date Added to IEEE Xplore: 24 July 2023
Electronic ISSN: 1861-2288
Conference Location: Barcelona, Spain

