Abstract:
General state space valued optimal stochastic control problems are often computationally intractable. On the other hand, for finite state-action models, there exist powerful computational and simulation tools for computing optimal strategies. With this motivation, we consider finite state and action space approximations of discrete time Markov decision processes with discounted and average costs and compact state and action spaces. Stationary policies obtained from finite state approximations of the original model are shown to approximate the optimal stationary policy with arbitrary precision under mild technical conditions. These results complement recent work that studied the finite action approximation of discrete time Markov decision processes with discounted and average costs.
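To illustrate the kind of finite-model computation the abstract alludes to, the sketch below quantizes a compact state and action space into finite grids and solves the resulting finite MDP by value iteration under a discounted cost. The dynamics and one-stage cost here are hypothetical placeholders, not the paper's model; they are chosen only to make the example self-contained and runnable.

```python
import numpy as np

def solve_quantized_mdp(n_states=20, n_actions=10, beta=0.9, tol=1e-8):
    """Value iteration on a finite quantization of a compact-space MDP.

    Illustrative assumptions (not from the paper): state space [0, 1]
    quantized into n_states grid points, action space [0, 1] into
    n_actions points, one-stage cost c(x, u) = (x - u)^2, and
    deterministic dynamics that move to the grid point nearest to u.
    """
    states = np.linspace(0.0, 1.0, n_states)
    actions = np.linspace(0.0, 1.0, n_actions)

    # One-stage cost matrix, shape (n_states, n_actions).
    cost = (states[:, None] - actions[None, :]) ** 2
    # Index of the next state reached by each action.
    next_idx = np.array([np.abs(states - u).argmin() for u in actions])

    V = np.zeros(n_states)
    while True:
        # Bellman operator for the discounted-cost finite MDP.
        Q = cost + beta * V[next_idx][None, :]
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    # Greedy stationary policy induced by the finite model.
    policy = actions[Q.argmin(axis=1)]
    return V, policy

V, policy = solve_quantized_mdp()
```

As the grids are refined (larger `n_states` and `n_actions`), the policy computed on the finite model is, under the conditions studied in the paper, a near-optimal stationary policy for the original compact-space problem.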
Published in: 2015 American Control Conference (ACC)
Date of Conference: 01-03 July 2015
Date Added to IEEE Xplore: 30 July 2015