Abstract:
Approximate dynamic programming platforms are employed to solve dynamic graphical games, where agents interact with one another over communication graphs to achieve synchronization. Although action-dependent dual heuristic dynamic programming schemes provide fast solution platforms for several control problems, their capabilities degrade for systems with unknown or uncertain dynamical models. An online model-free adaptive learning solution based on action-dependent dual heuristic dynamic programming is proposed to solve dynamic graphical games. It employs distributed actor-critic neural networks to approximate the optimal value function and the associated model-free control strategy for each agent. This is accomplished through a policy iteration process that avoids the extensive computational effort traditionally required. The duality between the model-free coupled Bellman optimality equation and the underlying coupled Riccati equation is highlighted, followed by a graph simulation scenario that tests the usefulness of the proposed policy iteration process.
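The paper's multi-agent algorithm is not reproduced on this page. As a rough, hypothetical illustration of the kind of action-dependent (Q-function based) policy iteration the abstract describes, the following single-agent Python sketch learns a feedback gain without using the system model inside the learning update: the critic fits a quadratic action-dependent value function from sampled transitions, and the actor is extracted by minimizing that function over the control. All matrices, gains, and sample sizes below are illustrative assumptions, not the paper's setup.

import numpy as np

np.random.seed(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # assumed discrete-time dynamics (simulator only)
B = np.array([[0.0], [1.0]])
Qc, Rc = np.eye(2), np.eye(1)            # assumed stage-cost weights
n, m = 2, 1

def features(x, u):
    # Quadratic basis for z = [x; u]: upper-triangular products z_i z_j.
    z = np.concatenate([x, u])
    return np.array([z[i] * z[j] for i in range(n + m) for j in range(i, n + m)])

K = np.zeros((m, n))                      # initial admissible policy (A is stable)
for it in range(20):
    # Policy evaluation: least squares on the model-free Bellman equation
    # Q(x, u) = x'Qc x + u'Rc u + Q(x', -K x'), using sampled transitions only.
    Phi, y = [], []
    for _ in range(200):
        x = np.random.randn(n)
        u = -K @ x + 0.1 * np.random.randn(m)   # exploratory input
        xn = A @ x + B @ u                       # measured next state
        un = -K @ xn
        Phi.append(features(x, u) - features(xn, un))
        y.append(x @ Qc @ x + u @ Rc @ u)
    w = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)[0]

    # Reassemble the symmetric kernel H of Q(x, u) = [x; u]' H [x; u].
    H = np.zeros((n + m, n + m))
    k = 0
    for i in range(n + m):
        for j in range(i, n + m):
            H[i, j] = H[j, i] = w[k] / (1 if i == j else 2)
            k += 1

    # Policy improvement: minimize the learned Q over u, giving u = -K_new x.
    Hxu, Huu = H[:n, n:], H[n:, n:]
    K_new = np.linalg.solve(Huu, Hxu.T)
    if np.linalg.norm(K_new - K) < 1e-6:
        break
    K = K_new

print("Learned feedback gain K:\n", K)

Note that the dynamics A and B appear only in the simulated environment; the evaluation and improvement steps use measured state-input-cost samples, which is the sense in which such schemes are model-free. The graphical-game setting in the paper couples one such learner per agent through the communication graph.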
Date of Conference: 20-24 May 2019
Date Added to IEEE Xplore: 12 August 2019