
Convergence of multiagent Q-learning: Multi action replay process approach


Abstract:

In this paper, we first suggest a new type of Markov model that extends Watkins' action replay process. The new Markov model is called the multi-action replay process (MARP), a process designed for multiagent coordination on the basis of reward values, state transition probabilities, and an equilibrium strategy that takes joint actions among agents into account. Using this model, a multiagent Q-learning algorithm is then constructed as a cooperative reinforcement learning algorithm for completely connected agents. Finally, we prove that the multiagent Q-learning values converge to the optimal values. Simulation results are reported to illustrate the validity of the proposed multiagent Q-learning algorithm.
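The abstract describes a cooperative, joint-action Q-learning scheme built on the MARP model. As a rough illustration of the general joint-action idea only (not the paper's MARP construction), the sketch below maintains a shared Q-table indexed by state and both agents' actions; the state/action sizes, learning parameters, and the placeholder step() environment are hypothetical.

import numpy as np

# Hypothetical sizes: 2 agents, 3 states, 2 actions each (illustrative only).
n_states, n_actions = 3, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

# Joint-action Q-table: Q[s, a1, a2] estimates the value of joint action (a1, a2) in state s.
Q = np.zeros((n_states, n_actions, n_actions))

def joint_greedy(s):
    # Pick the joint action maximizing the shared Q-value (a simple stand-in for an equilibrium strategy).
    flat = np.argmax(Q[s])
    return np.unravel_index(flat, (n_actions, n_actions))

def step(s, a1, a2):
    # Placeholder environment: random reward and next state (stands in for the true transition model).
    return rng.normal(), rng.integers(n_states)

s = 0
for _ in range(10_000):
    # Epsilon-greedy exploration over joint actions.
    if rng.random() < epsilon:
        a1, a2 = rng.integers(n_actions), rng.integers(n_actions)
    else:
        a1, a2 = joint_greedy(s)
    r, s_next = step(s, a1, a2)
    # Q-learning update applied to the joint-action value.
    target = r + gamma * Q[s_next].max()
    Q[s, a1, a2] += alpha * (target - Q[s, a1, a2])
    s = s_next

Under standard stochastic-approximation conditions (decaying step sizes, sufficient exploration), such joint-action Q-values converge in the single-table setting; the paper's contribution is proving convergence for its MARP-based multiagent formulation.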
Date of Conference: 08-10 September 2010
Date Added to IEEE Xplore: 28 October 2010
Conference Location: Yokohama, Japan
