Abstract:
The problem of minimizing cost in nonlinear control systems with uncertainties or disturbances remains a major challenge. Model predictive control (MPC), and in particular sampling-based MPC, has recently shown great success in complex domains such as aggressive driving with highly nonlinear dynamics. Sampling-based methods rely on a prior distribution to generate samples in the first place, and the choice of this distribution strongly influences the efficiency of the controller. Existing approaches, such as sampling around the control trajectory of the previous time step, perform suboptimally, especially in multi-modal or highly dynamic settings. In this work, we therefore propose to learn models that generate samples in low-cost areas of the state space, conditioned on the environment and on contextual information about the task to solve. By using generative models as an informed sampling distribution, our approach exploits guidance from the learned models while maintaining the robustness properties of MPC. We use Conditional Variational Autoencoders (CVAEs) to learn distributions that imitate samples from a training dataset of optimized controls. An extensive evaluation in the autonomous navigation domain suggests that replacing previous sampling schemes with our learned models considerably improves performance in terms of path quality and planning efficiency.
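To illustrate where such a learned sampler would plug in, the following is a minimal sketch (not from the paper) of one step of a sampling-based MPC controller with MPPI-style exponential weighting, where the usual Gaussian perturbation of the previous control sequence is replaced by draws from a learned conditional generative model. The callables cvae_sampler, dynamics, and cost, as well as all parameter values, are hypothetical placeholders.

```python
import numpy as np

def sampling_mpc_step(x0, context, cvae_sampler, dynamics, cost,
                      n_samples=256, horizon=20, temperature=1.0):
    """One control step of a sampling-based MPC loop (MPPI-style weighting).

    Candidate control sequences are drawn from a learned conditional
    sampler (e.g. a CVAE decoder) conditioned on the current state and
    task context, rather than from Gaussian noise around the previous
    solution. All interfaces here are illustrative assumptions.
    """
    # Draw candidate control sequences: shape (n_samples, horizon, control_dim)
    U = cvae_sampler(x0, context, n_samples, horizon)

    # Roll out each candidate through the (approximate) dynamics model
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0
        for t in range(horizon):
            x = dynamics(x, U[k, t])
            costs[k] += cost(x, U[k, t])

    # Exponentially weighted average of the sampled sequences
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    u_star = np.tensordot(w, U, axes=1)  # weighted control sequence, shape (horizon, control_dim)

    # Apply only the first control; the loop repeats at the next time step
    return u_star[0]
```

In this sketch, robustness comes from the MPC loop itself (rollouts, cost evaluation, and receding-horizon replanning), while the learned sampler only biases where candidates are generated, which matches the division of roles described in the abstract.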
Date of Conference: 20-24 May 2019
Date Added to IEEE Xplore: 12 August 2019