Abstract
We present a sample-based replanning strategy for driving partially observable, high-dimensional robotic systems to a desired goal. At each time step, it uses forward simulation of randomly sampled open-loop controls to construct a belief-space search tree rooted at the current belief state. It then executes the action at the root that leads to the best node in the tree. As a node-quality metric, we use Monte Carlo simulation to estimate the likelihood of success under the QMDP belief-space feedback policy, which encourages the robot to take information-gathering actions as needed to reach the goal. The technique is demonstrated on target-finding and localization examples in up to 5D state spaces.
© 2010 Springer-Verlag Berlin Heidelberg
Cite this chapter
Hauser, K. (2010). Randomized Belief-Space Replanning in Partially-Observable Continuous Spaces. In: Hsu, D., Isler, V., Latombe, JC., Lin, M.C. (eds) Algorithmic Foundations of Robotics IX. Springer Tracts in Advanced Robotics, vol 68. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17452-0_12
Print ISBN: 978-3-642-17451-3
Online ISBN: 978-3-642-17452-0
eBook Packages: Engineering (R0)