
Nonlinear two-player zero-sum game approximate solution using a Policy Iteration algorithm



Abstract:

An approximate online solution is developed for a two-player zero-sum game subject to continuous-time nonlinear uncertain dynamics and an infinite-horizon quadratic cost. A novel actor-critic-identifier (ACI) structure is used to implement the Policy Iteration (PI) algorithm, wherein a robust dynamic neural network (DNN) is used to asymptotically identify the uncertain system, and a critic NN is used to approximate the value function. The weight update laws for the critic NN are generated using a gradient-descent method based on a modified temporal difference error, which is independent of the system dynamics. The method yields approximations of the optimal value function and the saddle-point feedback control policies. These policies are computed using the critic NN and the identifier DNN, and they guarantee uniformly ultimately bounded (UUB) stability of the closed-loop system. The actor, critic, and identifier structures are implemented in real time, continuously and simultaneously.
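The interplay the abstract describes, alternating a critic fit (gradient descent on a TD-style Bellman residual) with saddle-point policy updates, can be illustrated on a toy problem. The sketch below is not the paper's algorithm: it uses a hypothetical scalar linear system dx/dt = a·x + b·u + k·w with cost r = q·x² + u² − γ²·w², a single quadratic critic feature V(x) = W·x² in place of a neural network, and known dynamics in place of the identifier DNN. All parameter values are illustrative assumptions.

```python
import numpy as np

# Hypothetical scalar zero-sum game (illustrative, not from the paper):
#   dx/dt = a*x + b*u + k*w,   r = q*x^2 + u^2 - gamma^2 * w^2
# u is the minimizing player, w the maximizing (disturbance) player.
# Critic: V(x) = W * x^2; saddle-point policies u = -b*W*x, w = (k/gamma^2)*W*x.
a, b, k, q, gamma = -1.0, 1.0, 1.0, 1.0, 2.0
beta = b**2 - k**2 / gamma**2      # net control-vs-disturbance effectiveness

xs = np.linspace(-1.0, 1.0, 101)   # state samples used to fit the critic

W = 0.0                            # critic weight, initialized at zero
for _ in range(20):                # policy iteration: evaluate, then improve
    A_cl = a - beta * W            # closed-loop drift under current policies
    c = q + beta * W**2            # running cost along trajectories: r = c*x^2
    W_new = W
    for _ in range(300):           # critic step: gradient descent on TD residual
        # continuous-time Bellman residual: delta = (dV/dx)*xdot + r
        delta = 2.0 * W_new * A_cl * xs**2 + c * xs**2
        grad = np.mean(delta * 2.0 * A_cl * xs**2)   # d(delta^2/2)/dW, averaged
        W_new -= 0.5 * grad
    W = W_new                      # improved policies are induced by the new W

# Analytic saddle point from the scalar game Riccati equation
# beta*P^2 - 2*a*P - q = 0, i.e. 0.75*P^2 + 2*P - 1 = 0 here:
P_exact = (np.sqrt(7.0) - 2.0) / 1.5
print(W, P_exact)
```

For this linear-quadratic special case the fixed point of the iteration matches the positive root of the game algebraic Riccati equation, so the critic weight W converges to P_exact; the nonlinear, uncertain setting of the paper replaces the quadratic feature with a critic NN and the known drift A_cl with the identifier DNN's estimate.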
Date of Conference: 12-15 December 2011
Date Added to IEEE Xplore: 01 March 2012
Conference Location: Orlando, FL, USA

