Abstract
The H∞ control method is an effective approach for attenuating the effect of disturbances on practical systems, but the H∞ controller is difficult to obtain because it requires solving the nonlinear Hamilton-Jacobi-Isaacs equation, even for linear systems. This study deals with the design of an H∞ controller for linear discrete-time systems. To solve the associated game algebraic Riccati equation (GARE), a novel model-free minimax Q-learning method is developed on the basis of an offline policy iteration algorithm, which is shown to be Newton's method for solving the GARE. The proposed minimax Q-learning method employs off-policy reinforcement learning and learns the optimal control policies for the controller and the disturbance online, using only the state samples generated by the implemented behavior policies. Unlike existing Q-learning methods, it uses a novel gradient-based policy improvement scheme. We prove that the minimax Q-learning method converges to the saddle-point solution under initially admissible control policies and an appropriate positive learning rate, provided that certain persistence of excitation (PE) conditions are satisfied. Moreover, the PE conditions can easily be met by choosing appropriate behavior policies containing certain excitation noises, without causing any excitation noise bias. In the simulation study, we apply the proposed minimax Q-learning method to design an H∞ load-frequency controller for an electrical power system generator that suffers from load disturbance, and the simulation results indicate that the obtained H∞ load-frequency controller has good disturbance rejection performance.
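As context for the abstract, the sketch below illustrates the kind of model-based policy iteration that the offline algorithm mentioned above performs on the GARE of the two-player zero-sum linear-quadratic game. It is not the paper's model-free minimax Q-learning method, which replaces the model-based evaluation and improvement steps with Q-function learning from state samples. The system x_{k+1} = A x_k + B u_k + E w_k, the stage cost x_k^T Q x_k + u_k^T R u_k - gamma^2 w_k^T w_k, the function name gare_policy_iteration, the zero initial gains (admissible only if A is stable), and the fixed-point Lyapunov sweep are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gare_policy_iteration(A, B, E, Q, R, gamma, max_iter=50, tol=1e-9):
    """Model-based policy iteration for the GARE of the zero-sum LQ game
        x_{k+1} = A x_k + B u_k + E w_k,
        cost    = sum_k ( x_k^T Q x_k + u_k^T R u_k - gamma^2 w_k^T w_k ),
    with controller u_k = -K x_k and disturbance w_k = -L x_k.
    Each outer step evaluates the current gains (a Lyapunov fixed point)
    and then improves both players' policies from the evaluated kernel P.
    """
    n, m = B.shape
    q = E.shape[1]
    K = np.zeros((m, n))          # initial control gain (assumes A is stable;
    L = np.zeros((q, n))          # otherwise start from an admissible K)
    P = np.zeros((n, n))
    for _ in range(max_iter):
        Acl = A - B @ K - E @ L
        Qcl = Q + K.T @ R @ K - gamma**2 * (L.T @ L)
        # Policy evaluation: solve P_eval = Qcl + Acl^T P_eval Acl by fixed-point sweeps.
        P_eval = Qcl.copy()
        for _ in range(5000):
            P_next = Qcl + Acl.T @ P_eval @ Acl
            done = np.max(np.abs(P_next - P_eval)) < tol
            P_eval = P_next
            if done:
                break
        # Policy improvement: stack both players' first-order optimality conditions.
        Lam = np.block([
            [R + B.T @ P_eval @ B, B.T @ P_eval @ E],
            [E.T @ P_eval @ B,     E.T @ P_eval @ E - gamma**2 * np.eye(q)],
        ])
        rhs = np.vstack([B.T @ P_eval @ A, E.T @ P_eval @ A])
        KL = np.linalg.solve(Lam, rhs)    # stacked gains [K; L] for u = -Kx, w = -Lx
        K, L = KL[:m, :], KL[m:, :]
        if np.max(np.abs(P_eval - P)) < tol:
            break
        P = P_eval
    return P, K, L
```

In practice the inner fixed-point sweep could be replaced by a direct discrete Lyapunov solver such as scipy.linalg.solve_discrete_lyapunov(Acl.T, Qcl), and the outer loop mirrors the Newton-iteration interpretation of policy iteration noted in the abstract.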
Author information
Contributions
Xinxing LI and Lele XI designed the research, conducted the investigation, and drafted the paper. Wenzhong ZHA and Zhihong PENG supervised the research, helped organize the paper, and revised and finalized the paper.
Ethics declarations
Xinxing LI, Lele XI, Wenzhong ZHA, and Zhihong PENG declare that they have no conflict of interest.
Additional information
Project supported by the National Natural Science Foundation of China (No. U1613225)
Cite this article
Li, X., Xi, L., Zha, W. et al. Minimax Q-learning design for H∞ control of linear discrete-time systems. Front Inform Technol Electron Eng 23, 438–451 (2022). https://doi.org/10.1631/FITEE.2000446
Key words
- H∞ control
- Zero-sum dynamic game
- Reinforcement learning
- Adaptive dynamic programming
- Minimax Q-learning
- Policy iteration