
Policy Approximation in Policy Iteration Approximate Dynamic Programming for Discrete-Time Nonlinear Systems



Abstract:

Policy iteration approximate dynamic programming (DP) is an important algorithm for solving optimal decision and control problems. In this paper, we focus on the problem associated with policy approximation in policy iteration approximate DP for discrete-time nonlinear systems using infinite-horizon undiscounted value functions. Taking policy approximation error into account, we demonstrate asymptotic stability of the control policy under our problem setting, show boundedness of the value function during each policy iteration step, and introduce a new sufficient condition for the value function to converge to a bounded neighborhood of the optimal value function. Aiming for practical implementation of an approximate policy, we consider using Volterra series, which has been extensively covered in controls literature for its good theoretical properties and for its success in practical applications. We illustrate the effectiveness of the main ideas developed in this paper using several examples including a practical problem of excitation control of a hydrogenerator.
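
Since only the abstract is available here, the following is a minimal Python sketch of the abstract's two ingredients: policy iteration approximate DP for a discrete-time nonlinear system with an undiscounted value function, where the improved policy is projected onto a truncated Volterra series (for static state feedback this reduces to a polynomial in the state). The plant f, the quadratic stage cost, the grids, and the cubic truncation are all hypothetical choices made for illustration, not the paper's algorithm, conditions, or examples.

# Minimal sketch: policy iteration ADP with the improved policy projected
# onto a truncated Volterra series in the state (a cubic polynomial here).
# The plant, cost, grids, and truncation order are hypothetical choices.
import numpy as np

f = lambda x, u: 0.8 * np.sin(x) + u      # hypothetical nonlinear plant
cost = lambda x, u: x**2 + u**2           # undiscounted quadratic stage cost

X = np.linspace(-2.0, 2.0, 201)           # state grid for value evaluation
U = np.linspace(-2.0, 2.0, 401)           # candidate controls for improvement

def evaluate(policy, sweeps=200):
    """Policy evaluation: iterate V(x) = cost(x, pi(x)) + V(f(x, pi(x)))
    on the grid (np.interp clamps off-grid successor states)."""
    V = np.zeros_like(X)
    for _ in range(sweeps):
        u = policy(X)
        V = cost(X, u) + np.interp(f(X, u), X, V)
    return V

def improve(V):
    """Greedy improvement, then least-squares projection of the greedy
    policy onto a cubic polynomial -- the policy approximation step."""
    Q = cost(X[:, None], U[None, :]) + np.interp(f(X[:, None], U[None, :]), X, V)
    u_greedy = U[np.argmin(Q, axis=1)]
    coeffs = np.polyfit(X, u_greedy, 3)
    return (lambda x: np.polyval(coeffs, x)), coeffs

policy = lambda x: -0.5 * x               # stabilizing initial policy
for k in range(5):
    V = evaluate(policy)
    policy, coeffs = improve(V)
    print(f"iter {k}: V(1.0) ~ {np.interp(1.0, X, V):.4f}")

The np.polyfit projection is where policy approximation error enters: the greedy policy is replaced by its least-squares polynomial fit, which is the kind of error whose effect on stability and value-function convergence the paper analyzes.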
Page(s): 2794 - 2807
Date of Publication: 06 June 2017

PubMed ID: 28600262

