CHQ: A Multi-Agent Reinforcement Learning Scheme for Partially Observable Markov Decision Processes

Hiroshi OSADA
Satoshi FUJITA

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E88-D, No.5, pp.1004-1011
Publication Date: 2005/05/01
DOI: 10.1093/ietisy/e88-d.5.1004
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Artificial Intelligence and Cognitive Science
Keyword: multi-agent system, reinforcement learning, partially observable MDP, Q-learning




Summary: 
In this paper, we propose a new reinforcement learning scheme called CHQ that can efficiently acquire appropriate policies under partially observable Markov decision processes (POMDPs) with probabilistic state transitions, which frequently arise in multi-agent systems where each agent independently takes a probabilistic action based on a partial observation of the underlying environment. The key idea of CHQ is to extend the HQ-learning scheme proposed by Wiering et al. so that it learns the activation order of the MDP subtasks as well as an appropriate policy for each subtask. The effectiveness of the proposed scheme is evaluated experimentally. The results show that CHQ can acquire a deterministic policy with a sufficiently high success rate, even when the given task is a POMDP with probabilistic state transitions.
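
The abstract describes the architecture of CHQ only at a high level, so the following Python sketch should be read as an illustration of the HQ-learning structure it extends, not as the paper's algorithm: one Q-table per subtask module for its reactive policy, plus an HQ-table whose values determine which observation each module treats as its subgoal, and hence the order in which subtasks are activated. The toy corridor environment, all constants, and the simplified HQ update rule are assumptions made for this sketch.

```python
import random
from collections import defaultdict

# Illustrative constants; none of these values come from the paper.
N_OBS, N_ACTIONS, N_SUBTASKS = 8, 4, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# One Q-table per subtask module: Q[m][(obs, action)] -> value.
Q = [defaultdict(float) for _ in range(N_SUBTASKS)]
# HQ-table: HQ[m][obs] -> value of choosing obs as the subgoal that
# terminates module m. Learning these values is what lets the agent
# acquire the activation order of the MDP subtasks.
HQ = [defaultdict(float) for _ in range(N_SUBTASKS)]

def eps_greedy(table, keys):
    """Pick a key epsilon-greedily by its value in the given table."""
    if random.random() < EPSILON:
        return random.choice(keys)
    return max(keys, key=lambda k: table[k])

class ToyCorridorEnv:
    """Hypothetical corridor POMDP (not from the paper): the agent sees
    only its position modulo N_OBS, and moves fail with probability 0.1."""
    LENGTH = 20
    def reset(self):
        self.pos = 0
        return self.pos % N_OBS
    def step(self, action):
        move = 1 if action % 2 == 0 else -1
        if random.random() < 0.1:   # probabilistic state transition
            move = -move
        self.pos = max(0, min(self.LENGTH, self.pos + move))
        done = self.pos == self.LENGTH
        return self.pos % N_OBS, (1.0 if done else -0.01), done

def run_episode(env):
    obs, module, total, done = env.reset(), 0, 0.0, False
    # The active module commits to a subgoal observation; reaching it
    # hands control to the next module.
    subgoal = eps_greedy(HQ[module], list(range(N_OBS)))
    while not done:
        action = eps_greedy(Q[module], [(obs, a) for a in range(N_ACTIONS)])[1]
        next_obs, reward, done = env.step(action)
        total += reward
        # Ordinary Q-learning update inside the active subtask module.
        best_next = max(Q[module][(next_obs, a)] for a in range(N_ACTIONS))
        td = reward + (0.0 if done else GAMMA * best_next) - Q[module][(obs, action)]
        Q[module][(obs, action)] += ALPHA * td
        if next_obs == subgoal and module + 1 < N_SUBTASKS:
            # Subgoal reached: credit the subgoal choice with the return
            # so far (a crude stand-in for the actual HQ update rule),
            # then activate the next module.
            HQ[module][subgoal] += ALPHA * (total - HQ[module][subgoal])
            module += 1
            subgoal = eps_greedy(HQ[module], list(range(N_OBS)))
        obs = next_obs
    return total

env = ToyCorridorEnv()
for episode in range(2000):
    run_episode(env)
```

After a few thousand episodes, each module tends to specialize on one segment of the corridor. In CHQ proper, the HQ update rule differs from this single-agent simplification, and each agent of the multi-agent system would run its own such learner while interacting with a shared environment.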

