Abstract:
Recent work has led to a novel theory of linearly solvable optimal control, in which the Bellman equation characterizing the optimal value function reduces to a linear equation. This work has already shown promising results in planning and control of nonlinear systems in high-dimensional state spaces. In this paper, we extend the class of linearly solvable problems to include certain kinds of 2-player Markov games. In terms of modeling power, the new framework is more general than previous work and can apply to any noisy dynamical system. We also obtain analytical solutions for the optimal value function of continuous-state control problems with linear dynamics and a very flexible class of cost functions. The linearity leads to many other useful properties: the ability to compose solutions to simple control problems into solutions to new problems, a convex-optimization formulation of inverse optimal control, and more. We demonstrate the usefulness of the framework through examples of forward and inverse optimal control problems in both continuous and discrete state spaces.
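To make the abstract's key idea concrete, below is a minimal sketch of the linear Bellman equation in the discrete single-player linearly-solvable MDP setting that this paper builds on (Todorov's formulation, where the desirability z(x) = exp(-v(x)) satisfies a linear fixed-point equation). The toy chain problem, all variable names, and the iteration scheme are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Sketch of the linear Bellman equation for a discrete linearly-solvable
# MDP (the single-player setting this paper generalizes). With
# desirability z(x) = exp(-v(x)), state cost q(x), and passive dynamics
# p(x'|x), the Bellman equation becomes linear:
#     z = exp(-q) * (P @ z)
# For a first-exit problem, z is fixed at terminal states and the
# interior values follow from simple fixed-point iteration (or a
# direct linear solve). The chain problem below is a made-up example.

n = 5                                     # states 0..4; state 4 is terminal
q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])   # state costs (zero terminal cost)
terminal = np.array([False, False, False, False, True])

# Passive (uncontrolled) random-walk dynamics on a chain.
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, max(i - 1, 0)] += 0.5
    P[i, i + 1] += 0.5
P[n - 1, n - 1] = 1.0                     # terminal state is absorbing

z = np.ones(n)
z[terminal] = np.exp(-q[terminal])        # boundary condition on z
for _ in range(1000):
    z_new = np.exp(-q) * (P @ z)          # linear Bellman update
    z_new[terminal] = np.exp(-q[terminal])
    if np.max(np.abs(z_new - z)) < 1e-12:
        z = z_new
        break
    z = z_new

v = -np.log(z)                            # optimal value function
print("optimal value function:", np.round(v, 3))

# The optimal controlled transition probabilities follow in closed form:
#     u*(x'|x) = p(x'|x) * z(x') / (P @ z)(x)
u_star = P * z[None, :] / (P @ z)[:, None]
```

Because the update is linear in z, solutions compose (a weighted sum of desirability functions solves a correspondingly mixed problem), which is the property the abstract exploits for composition and for the convex inverse-optimal-control formulation.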
Published in: 2012 American Control Conference (ACC)
Date of Conference: 27-29 June 2012
Date Added to IEEE Xplore: 01 October 2012