Abstract:
We develop extensions of the simulated annealing with multiplicative weights (SAMW) algorithm, which was proposed as a method for solving finite-horizon Markov decision processes (FH-MDPs). The extensions are in three directions: a) use of the dynamic programming principle in the policy-update step of SAMW; b) a two-timescale actor-critic algorithm that uses simulated transitions alone; and c) extension of the algorithm to the infinite-horizon discounted-reward setting. In particular, a) reduces the storage required from exponential to linear in the number of actions per stage-state pair. On the faster timescale, a 'critic' recursion performs policy evaluation, while on the slower timescale an 'actor' recursion performs policy improvement using SAMW. We outline a proof of convergence w.p. 1 and present experimental results in two settings: semiconductor fabrication and flow control in communication networks.
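To make the multiplicative-weights idea behind SAMW concrete, the sketch below shows a generic multiplicative-weights update over a probability distribution on candidate actions, with an annealed parameter driven toward 1. This is a minimal illustrative sketch, not the paper's algorithm: the function name `mw_update`, the reward model, and the annealing schedule `beta = 1 + 1/sqrt(t)` are all hypothetical assumptions, and the actual SAMW update, step sizes, and convergence conditions are specified in the paper.

```python
import numpy as np

def mw_update(weights, rewards, beta):
    """One multiplicative-weights step (illustrative): scale each weight
    by beta raised to its estimated reward, then renormalize so the
    weights remain a probability distribution."""
    new_w = weights * np.power(beta, rewards)
    return new_w / new_w.sum()

# Hypothetical usage: 4 candidate actions, random simulated rewards.
rng = np.random.default_rng(0)
w = np.ones(4) / 4.0  # start from the uniform distribution
for t in range(1, 51):
    rewards = rng.random(4)          # stand-in for simulated value estimates
    beta = 1.0 + 1.0 / np.sqrt(t)    # assumed schedule: beta annealed toward 1
    w = mw_update(w, rewards, beta)
```

The renormalization after each step keeps the iterate a valid distribution, which is the property the actor recursion relies on when using SAMW for policy improvement.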
Published in: 2007 American Control Conference
Date of Conference: 09-13 July 2007
Date Added to IEEE Xplore: 30 July 2007