Abstract:
The theory of (standard) stochastic optimal control is based on the assumption that the probability distribution of uncertain variables is fully known. In practice, however, obtaining an accurate distribution is often challenging. To resolve this issue, we study a distributionally robust stochastic control problem that minimizes a cost function of interest given that the distribution of uncertain variables is not known but lies in a so-called ambiguity set. We first investigate a dynamic programming approach and identify conditions for the existence and optimality of non-randomized Markov policies. We then propose a duality-based reformulation method for an associated Bellman equation in cases with conic confidence sets. This reformulation alleviates the computational issues inherent in the infinite-dimensional minimax optimization problem in the Bellman equation without sacrificing optimality. The effectiveness of the proposed method is demonstrated through an application to a stochastic inventory control problem.
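For intuition, the minimax Bellman recursion described in the abstract can be sketched on a toy inventory problem. The sketch below is illustrative only and is not taken from the paper: it replaces the paper's conic confidence sets with a small finite ambiguity set of candidate demand distributions, and every parameter (horizon, cost coefficients, demand support) and name (e.g. stage_cost) is an assumption made for the example.

```python
import numpy as np

# Illustrative finite-horizon inventory problem (all values are assumptions).
# The paper works with conic confidence sets; as a simplification, the
# ambiguity set here is a finite family of candidate demand distributions.

T = 5                      # planning horizon
MAX_INV = 10               # maximum inventory level
demands = np.arange(0, 6)  # possible demand values 0..5

# Finite ambiguity set: two candidate demand distributions over `demands`.
ambiguity_set = [
    np.array([0.10, 0.20, 0.40, 0.20, 0.05, 0.05]),
    np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05]),
]

h, p, c = 1.0, 4.0, 2.0    # holding, shortage, and ordering cost coefficients

def stage_cost(x, u, d):
    """Ordering cost plus holding/shortage cost after demand d is realized."""
    y = x + u - d
    return c * u + h * max(y, 0) + p * max(-y, 0)

# Value function V[t, x]; terminal cost is taken to be zero.
V = np.zeros((T + 1, MAX_INV + 1))

for t in range(T - 1, -1, -1):
    for x in range(MAX_INV + 1):
        best = np.inf
        for u in range(MAX_INV + 1 - x):        # feasible order quantities
            # Inner sup of the minimax Bellman equation: worst-case
            # expected cost-to-go over the ambiguity set.
            worst = -np.inf
            for q in ambiguity_set:
                exp_cost = sum(
                    q[i] * (stage_cost(x, u, d)
                            + V[t + 1, max(min(x + u - d, MAX_INV), 0)])
                    for i, d in enumerate(demands)
                )
                worst = max(worst, exp_cost)
            best = min(best, worst)             # outer min over controls
        V[t, x] = best

print("Distributionally robust value at t=0:", V[0])
```

With a finite ambiguity set the inner maximization reduces to enumeration; the duality-based reformulation proposed in the paper is what makes the analogous inner problem tractable when the ambiguity set is an infinite-dimensional conic confidence set.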
Date of Conference: 12-15 December 2017
Date Added to IEEE Xplore: 22 January 2018