
Safe Transfer-Reinforcement-Learning-Based Optimal Control of Nonlinear Systems


Abstract:

Traditional reinforcement learning (RL) methods for optimal control of nonlinear processes often face challenges such as high computational demands, long training times, and difficulty ensuring the safety of closed-loop systems during training. To address these issues, this work proposes a safe transfer RL (TRL) framework. The TRL algorithm leverages knowledge from pretrained source tasks to accelerate learning in a new, related target task, significantly reducing both the learning time and the computational resources required for optimizing control policies. To ensure safety during knowledge transfer and training, data collection and optimization of the control policy are performed within a control invariant set (CIS) throughout the learning process. Furthermore, we theoretically analyze the errors between the approximate and optimal control policies by accounting for the differences between source and target tasks. Finally, the proposed TRL method is applied to case studies of chemical processes to demonstrate its effectiveness in solving the optimal control problem with improved computational efficiency and guaranteed safety.
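The two key ideas in the abstract, warm-starting the target task from a pretrained source task and restricting exploration to a control invariant set, can be illustrated with a minimal tabular sketch. All names here (the dynamics `f`, the interval CIS, the stage costs) are illustrative assumptions, not the paper's actual systems or algorithm:

```python
import numpy as np

# Hypothetical scalar nonlinear system x_{k+1} = f(x_k, u_k); chosen so that
# the interval [-1, 1] is control invariant (an assumption, not from the paper).
def f(x, u):
    return x + 0.1 * (-x**3 + u)

CIS = (-1.0, 1.0)  # assumed control invariant set

def in_cis(x):
    lo, hi = CIS
    return lo <= x <= hi

def safe_actions(x, candidates):
    # keep only actions whose one-step prediction stays inside the CIS
    return [u for u in candidates if in_cis(f(x, u))]

actions = np.linspace(-1.0, 1.0, 11)
states = np.linspace(-1.0, 1.0, 21)

def nearest(val, grid):
    return round(grid[int(np.argmin(np.abs(grid - val)))], 2)

# Transfer step: the target-task value table is warm-started from a
# "pretrained" source task (here a made-up quadratic value surrogate).
source_Q = {(round(s, 2), round(u, 2)): -(s**2 + 0.1 * u**2)
            for s in states for u in actions}
Q = dict(source_Q)  # target task starts from source knowledge

# A few episodes of safe Q-learning on the target task, whose stage cost
# differs from the source task's; exploration never leaves the CIS.
alpha, gamma, eps = 0.5, 0.95, 0.2
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.uniform(*CIS)
    for _ in range(20):
        s = nearest(x, states)
        cand = safe_actions(x, actions)  # CIS-constrained action set
        if not cand:
            break
        if rng.random() < eps:
            u = cand[rng.integers(len(cand))]
        else:
            u = max(cand, key=lambda a: Q[(s, nearest(a, actions))])
        x_next = f(x, u)
        r = -(x**2 + 0.5 * u**2)  # target-task stage reward (differs from source)
        cand2 = safe_actions(x_next, actions)
        best = max(Q[(nearest(x_next, states), nearest(a, actions))]
                   for a in cand2) if cand2 else 0.0
        key = (s, nearest(u, actions))
        Q[key] += alpha * (r + gamma * best - Q[key])
        x = x_next
```

Because every executed action is drawn from `safe_actions`, the closed-loop state never leaves the assumed CIS during training, while the source-initialized `Q` table gives the target task a head start over learning from scratch.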
Published in: IEEE Transactions on Cybernetics ( Volume: 54, Issue: 12, December 2024)
Page(s): 7272 - 7284
Date of Publication: 04 November 2024

PubMed ID: 39495686
