
On Distributed Model-Free Reinforcement Learning Control With Stability Guarantee



Abstract:

Distributed learning can enable scalable and effective decision making in numerous complex cyber-physical systems such as smart transportation, robot swarms, power systems, etc. However, stability of the system is usually not guaranteed in most existing learning paradigms, and this limitation can hinder the wide deployment of machine learning in decision making for safety-critical systems. This letter presents a stability-guaranteed distributed reinforcement learning (SGDRL) framework for interconnected linear subsystems, without knowing the subsystem models. While the learning process requires data from a peer-to-peer (p2p) communication architecture, the control implementation of each subsystem is based only on its local states. The stability of the interconnected subsystems is ensured by a diagonally dominant eigenvalue condition, which is then used in a model-free RL algorithm to learn the stabilizing control gains. The RL algorithm follows an off-policy iterative framework, with interleaved policy evaluation and policy update steps. We numerically validate our theoretical results by performing simulations on four interconnected subsystems.
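To make the off-policy iterative structure concrete, below is a minimal single-subsystem sketch of off-policy policy iteration for a discrete-time LQR problem, in the spirit of the interleaved policy-evaluation/policy-update loop the abstract describes. This is an illustrative analogue, not the letter's distributed continuous-time algorithm: the dimensions, matrices A and B, costs, and batch size are all assumed toy values, and A, B are used only to simulate data (the learner never touches them, keeping the loop model-free).

```python
# Hypothetical sketch: off-policy policy iteration for one discrete-time
# LQR subsystem. A single batch of data from an exploratory behavior
# policy is reused to evaluate and improve the target gain K (off-policy).
# A and B are used only to generate data; the learner is model-free.
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 1                              # state / input dimensions (toy values)
A = np.array([[0.9, 0.2], [0.0, 0.8]])   # unknown to the learner
B = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(n), np.eye(m)            # stage cost x'Qc x + u'Rc u

def phi(x, u):
    """Quadratic feature vector for z = [x; u]: upper triangle of z z'."""
    z = np.concatenate([x, u])
    zz = np.outer(z, z)
    iu = np.triu_indices(n + m)
    scale = np.where(iu[0] == iu[1], 1.0, 2.0)   # off-diagonal terms appear twice
    return scale * zz[iu]

def unpack(theta):
    """Rebuild the symmetric Q-function matrix H from its parameter vector."""
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = theta
    return H + np.triu(H, 1).T

# Collect one batch of exploratory data (behavior policy = pure noise).
T = 400
X, U = np.zeros((T + 1, n)), np.zeros((T, m))
X[0] = rng.normal(size=n)
for t in range(T):
    U[t] = 0.3 * rng.normal(size=m)
    X[t + 1] = A @ X[t] + B @ U[t]

K = np.zeros((m, n))                     # initial stabilizing target gain
for it in range(10):
    # Policy evaluation: least-squares Bellman fit, off-policy w.r.t. K.
    Phi, c = [], []
    for t in range(T):
        u_next = -K @ X[t + 1]           # target policy's action at x_{t+1}
        Phi.append(phi(X[t], U[t]) - phi(X[t + 1], u_next))
        c.append(X[t] @ Qc @ X[t] + U[t] @ Rc @ U[t])
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    H = unpack(theta)
    # Policy update: greedy gain from the Q-function blocks, K = H_uu^{-1} H_ux.
    K = np.linalg.solve(H[n:, n:], H[n:, :n])

print("learned gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

The two steps inside the loop mirror the interleaved structure from the abstract: the least-squares fit is policy evaluation for the current target gain using off-policy data, and the greedy gain extraction is the policy update. In the letter's distributed setting, the analogous evaluation step would additionally use data exchanged over the p2p communication architecture, while each learned gain still acts only on local states.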
Published in: IEEE Control Systems Letters (Volume: 5, Issue: 5, November 2021)
Page(s): 1615 - 1620
Date of Publication: 30 November 2020
Electronic ISSN: 2475-1456
