Abstract
Among the mathematical frameworks used in the control and robotics communities to handle uncertainty, stochastic variants of optimal control are appealing, in particular because efficient computational tools exist to solve them, such as dynamic programming. However, because these problems are formulated as classical optimization problems, it may be difficult to anticipate what solutions a given choice of the objective function to minimize will produce. In this paper, we perform an in-depth analysis of the behavior of the policies obtained by solving stochastic Linear Quadratic Gaussian problems, with robot motion planning applications particularly in mind. For this analysis, we consider simplified linear systems perturbed by Gaussian noise with state-dependent and control-dependent components, and objective functions that sum control-related and state-related costs. We provide (1) useful bounds for understanding the effect of the objective function parameters, (2) insights into what the expected paths of the system should be, and (3) results on the optimal choice of the planning horizon.
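The setting described above can be illustrated with a minimal sketch: a scalar linear system driven by an LQR feedback policy, where the Gaussian process noise has a state-dependent and a control-dependent standard-deviation component. All parameters (`a`, `b`, `q`, `r`, `sigma_x`, `sigma_u`) are hypothetical, and for simplicity the feedback gains below are the standard noiseless finite-horizon LQR gains; the policies analyzed in the paper account for the multiplicative noise and differ from this simplification.

```python
import numpy as np

# Hypothetical scalar system x_{k+1} = a*x_k + b*u_k + w_k, where w_k is
# zero-mean Gaussian with std sigma_x*|x_k| + sigma_u*|u_k| (state- and
# control-dependent noise components), under the quadratic cost
# sum_k q*x_k^2 + r*u_k^2 plus a terminal cost q*x_N^2.

def lqr_gains(a, b, q, r, N):
    """Backward Riccati recursion for the finite-horizon scalar LQR gains
    of the noiseless system (a simplification; see lead-in)."""
    p = q  # terminal cost weight
    gains = []
    for _ in range(N):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * (a - b * k)
        gains.append(k)
    return gains[::-1]  # gains[k] is applied at time step k

def simulate(a, b, q, r, N, x0, sigma_x, sigma_u, rng):
    """Roll out one noisy trajectory and return its accumulated cost."""
    gains = lqr_gains(a, b, q, r, N)
    x, cost = x0, 0.0
    for k in range(N):
        u = -gains[k] * x
        cost += q * x ** 2 + r * u ** 2
        std = sigma_x * abs(x) + sigma_u * abs(u)  # multiplicative noise
        x = a * x + b * u + rng.normal(0.0, std)
    return cost + q * x ** 2  # add terminal state cost

rng = np.random.default_rng(0)
costs = [simulate(a=1.1, b=1.0, q=1.0, r=0.5, N=20, x0=5.0,
                  sigma_x=0.1, sigma_u=0.1, rng=rng) for _ in range(500)]
print(f"mean cost over 500 rollouts: {np.mean(costs):.2f}")
```

Varying `q`, `r`, the noise magnitudes, or the horizon `N` in such a rollout gives an empirical feel for the questions the paper addresses analytically: how the objective parameters shape the policy, what the expected paths look like, and how the planning horizon should be chosen.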
About this article
Cite this article
Carlos, H., Hayet, JB. & Murrieta-Cid, R. An Analysis of Policies from Stochastic Linear Quadratic Gaussian in Robotics Problems with State- and Control-Dependent Noise. J Intell Robot Syst 92, 85–106 (2018). https://doi.org/10.1007/s10846-017-0736-x