
Stochastic Two-Player Zero-Sum Learning Differential Games


Abstract:

The two-player zero-sum differential game has been extensively studied, partially because its solution implies H∞ optimality. Existing studies on zero-sum differential games either assume deterministic dynamics or dynamics corrupted only by additive noise. In realistic environments, high-dimensional environmental uncertainties often modulate system dynamics in a more complicated fashion. In this paper, we study the stochastic two-player zero-sum differential game governed by more general uncertain linear dynamics. We show that the optimal control policies for this game can be found by solving the Hamilton-Jacobi-Bellman (HJB) equation. We prove that with the derived optimal control policies, the system is asymptotically stable in the mean and reaches the Nash equilibrium. To solve the stochastic two-player zero-sum game online, we design a new policy iteration (PI) algorithm that integrates integral reinforcement learning (IRL) with an efficient uncertainty evaluation method, the multivariate probabilistic collocation method (MPCM). This algorithm provides a fast online solution for the stochastic two-player zero-sum differential game subject to multiple uncertainties in the system dynamics.
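To illustrate the saddle-point structure that policy iteration exploits in this setting, the sketch below shows a plain model-based PI loop for a deterministic linear-quadratic two-player zero-sum game. It is not the paper's IRL/MPCM algorithm (which learns online and averages over dynamics uncertainties); the system matrices, gains, and attenuation level gamma are assumed purely for illustration. Each iteration evaluates the current policy pair through a Lyapunov equation and then improves both the minimizing and maximizing policies, converging to the Nash (saddle-point) solution of the game Riccati equation.

```python
# Minimal sketch (assumed example, not the paper's method): policy iteration for
# the LQ zero-sum game  dx = (A x + B1 u + B2 w) dt,
# cost J = integral of (x'Qx + u'Ru - gamma^2 w'w) dt,
# where u minimizes and w maximizes J.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example system matrices (hypothetical, chosen so the initial policy is stabilizing).
A  = np.array([[0.0, 1.0], [-1.0, -2.0]])   # open-loop stable drift
B1 = np.array([[0.0], [1.0]])               # minimizing player's input channel
B2 = np.array([[0.0], [0.5]])               # maximizing player's (disturbance) channel
Q  = np.eye(2)
R  = np.array([[1.0]])
gamma = 5.0                                  # disturbance attenuation level

K = np.zeros((1, 2))       # initial policy u = -K x
L = np.zeros((1, 2))       # initial policy w =  L x
P_prev = np.zeros((2, 2))

for it in range(100):
    # Policy evaluation: solve A_cl' P + P A_cl + Q + K'RK - gamma^2 L'L = 0.
    A_cl = A - B1 @ K + B2 @ L
    Qbar = Q + K.T @ R @ K - gamma**2 * (L.T @ L)
    P = solve_continuous_lyapunov(A_cl.T, -Qbar)

    # Policy improvement for both players.
    K = np.linalg.solve(R, B1.T @ P)          # u = -R^{-1} B1' P x
    L = (1.0 / gamma**2) * (B2.T @ P)         # w = gamma^{-2} B2' P x

    if np.linalg.norm(P - P_prev) < 1e-10:    # stop once the value matrix settles
        break
    P_prev = P

print("Game value matrix P:\n", P)
print("Minimizer gain K:", K, " Maximizer gain L:", L)
```

In the paper's setting, the policy-evaluation step would instead be carried out online via integral reinforcement learning, with the MPCM used to evaluate the expectation over the uncertain dynamics rather than assuming the model matrices are known.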
Date of Conference: 16-19 July 2019
Date Added to IEEE Xplore: 14 November 2019
Conference Location: Edinburgh, UK

