Reinforcement Learning of Structured Stabilizing Control for Linear Systems With Unknown State Matrix
This paper addresses the design of feedback control gains for a continuous-time linear quadratic regulator (LQR) problem in which the gain is constrained to a predefined structure and the state matrix is unknown. We combine ideas from reinforcement learning (RL) with sufficient stability and performance guarantees to design these structured gains from trajectory measurements of states and controls. We first formulate a model-based framework using dynamic programming (DP) to embed the structural constraint into the LQR gain computation in the continuous-time setting, and then formulate a policy iteration RL algorithm that removes the requirement of a known state matrix while maintaining the feedback gain structure. The design enables distributed learning control, which is necessary for many large-scale cyber-physical systems. Theoretical guarantees are provided for the stability and convergence of the structured reinforcement learning (SRL) algorithm. We validate our theoretical results with numerical simulations on a multi-agent networked linear time-invariant (LTI) dynamic system.
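The model-based starting point described above can be illustrated with a minimal sketch. This is not the paper's SRL algorithm: the paper works model-free from trajectory data and develops conditions under which the structured updates retain stability and convergence guarantees, whereas the sketch below assumes the dynamics (A, B) are known and enforces the gain structure by a naive projection (elementwise mask) inside standard Kleinman-style policy iteration, which by itself carries no such guarantees.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def structured_policy_iteration(A, B, Q, R, K, mask, iters=50):
    """Illustrative model-based policy iteration for continuous-time LQR
    with an elementwise sparsity mask on the gain K.

    Assumptions (not from the paper): A, B are known; the initial K is
    stabilizing and already satisfies the structure; the structure is
    imposed by projection (K <- mask * K) after each update.
    """
    for _ in range(iters):
        Ac = A - B @ K
        # Policy evaluation: solve Ac^T P + P Ac + Q + K^T R K = 0.
        # solve_continuous_lyapunov(a, q) solves a X + X a^T = q,
        # so pass a = Ac^T and q = -(Q + K^T R K).
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        # Policy improvement, then project onto the sparsity structure.
        K = mask * np.linalg.solve(R, B.T @ P)
    return K, P
```

For a decoupled multi-agent system with a block-diagonal (here, diagonal) structure, the projection is lossless and the iteration recovers the structured optimal gain; for coupled dynamics the projection step is where the paper's analysis is needed.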
- Research Organization:
- Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
- Sponsoring Organization:
- USDOE
- Grant/Contract Number:
- AC05-76RL01830
- OSTI ID:
- 1971347
- Report Number(s):
- PNNL-SA-156272
- Journal Information:
- IEEE Transactions on Automatic Control, Vol. 68, Issue 3; ISSN 0018-9286
- Publisher:
- IEEE
- Country of Publication:
- United States
- Language:
- English