
Online policy iteration solution for dynamic graphical games


Abstract:

The dynamic graphical game is a special class of the standard dynamic game that explicitly captures the structure of a communication graph, where the information flow between the agents is governed by the graph topology. A novel online adaptive learning (policy iteration) solution for the graphical game is given in terms of the solution to a set of coupled graphical game Hamiltonian and Bellman equations. The policy iteration solution is developed to learn the Nash solution of the dynamic graphical game online in real time. A convergence proof for the policy iteration on the dynamic graphical game is given under a mild condition on the graph interconnectivity properties. Critic neural network structures are used to implement the online policy iteration solution. Only partial knowledge of the dynamics is required, and the tuning is done in a distributed fashion using the local information available to each agent.
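To make the alternation between policy evaluation (the coupled Bellman equations) and policy improvement concrete, the sketch below runs a model-based, offline policy iteration for a small linear-quadratic graphical game. It is not the paper's algorithm: the paper's method is online, uses critic neural networks, needs only partial model knowledge, and tunes each agent from local neighborhood information, whereas this sketch assumes full model knowledge, stacked-state feedback, and Lyapunov solves. The graph weights, pinning gains, system matrices, and cost weights are all illustrative assumptions.

```python
"""Minimal offline sketch of policy iteration for a linear-quadratic
dynamic graphical game.  Illustrative only: the paper's method is an
online, partially model-free, critic-network solution tuned from local
information; here everything is model-based and uses the stacked state."""

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# --- communication graph (3 agents), assumed edge weights E[i, j] --------
N = 3
E = np.array([[0.0, 1.0, 0.0],           # agent 0 hears agent 1
              [1.0, 0.0, 1.0],           # agent 1 hears agents 0 and 2
              [0.0, 1.0, 0.0]])          # agent 2 hears agent 1
g = np.array([1.0, 0.0, 0.0])            # pinning gain to the leader
d = E.sum(axis=1)                         # in-degrees

# --- local error dynamics (identical, Schur stable, illustrative) --------
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
n, m = A.shape[0], B.shape[1]

# Stacked dynamics: delta(k+1) = A_glob delta(k) + sum_i Bg[i] u_i(k).
# Agent i's input enters its own block with gain (d_i + g_i) and each
# neighbour's block with gain -E[j, i] (the graphical-game coupling).
A_glob = np.kron(np.eye(N), A)
Bg = []
for i in range(N):
    Bi = np.zeros((N * n, m))
    Bi[i * n:(i + 1) * n, :] = (d[i] + g[i]) * B
    for j in range(N):
        if E[j, i] > 0:
            Bi[j * n:(j + 1) * n, :] -= E[j, i] * B
    Bg.append(Bi)

# Quadratic weights: each agent penalises its own error block and input.
Q = [np.kron(np.diag(np.eye(N)[i]), np.eye(n)) for i in range(N)]
R = [np.eye(m) for _ in range(N)]

# Initial admissible policies u_i = -K_i * delta (stacked); K_i = 0 is
# admissible here because A_glob is already Schur stable.
K = [np.zeros((m, N * n)) for _ in range(N)]

for sweep in range(50):
    K_prev = [Ki.copy() for Ki in K]
    for i in range(N):
        # Policy evaluation: with all policies held fixed, the value
        # V_i = delta' P_i delta solves agent i's Bellman (Lyapunov) equation
        #   P_i = Q_i + K_i' R_i K_i + A_cl' P_i A_cl.
        A_cl = A_glob - sum(Bg[j] @ K[j] for j in range(N))
        P_i = solve_discrete_lyapunov(A_cl.T, Q[i] + K[i].T @ R[i] @ K[i])

        # Policy improvement: best response of agent i, neighbours fixed.
        A_others = A_glob - sum(Bg[j] @ K[j] for j in range(N) if j != i)
        K[i] = np.linalg.solve(R[i] + Bg[i].T @ P_i @ Bg[i],
                               Bg[i].T @ P_i @ A_others)
    change = max(np.linalg.norm(K[i] - K_prev[i]) for i in range(N))
    if change < 1e-9:
        break

print(f"stopped after {sweep + 1} sweeps, last gain change {change:.2e}")
for i in range(N):
    print(f"K_{i} =\n{np.round(K[i], 4)}")
```

In this toy setting the round-robin evaluation/improvement sweeps settle to a set of mutually best-response gains; the paper replaces the model-based Lyapunov solve with critic neural networks whose weights are tuned online from each agent's local neighborhood signals.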
Date of Conference: 21-24 March 2016
Date Added to IEEE Xplore: 19 May 2016
Conference Location: Leipzig, Germany

