Abstract:
Although most robots use probabilistic algorithms to solve state estimation problems, path planning is often performed without considering the uncertainty about the robot's position. Uncertainty, however, matters in planning, but considering it often leads to computationally expensive algorithms. In this paper, we investigate the problem of path planning considering the uncertainty in the robot's belief about the world, in its perceptions, and in its action execution. We propose the use of an uncertainty-augmented Markov Decision Process to approximate the underlying Partially Observable Markov Decision Process, and we employ a localization prior to estimate how the belief about the robot's position propagates through the environment. This yields a planning approach that generates navigation policies able to make decisions according to the degree of uncertainty while being computationally tractable. We implemented our approach and thoroughly evaluated it on different navigation problems. Our experiments suggest that we are able to compute policies that are more effective than approaches that ignore the uncertainty, and that also outperform policies that always take the safest actions.
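To illustrate the general idea of an uncertainty-augmented MDP, here is a minimal sketch, not the authors' actual formulation: each state pairs a grid cell with a discretized uncertainty level, and a hypothetical "localize" action trades time for a sharper belief. The toy corridor, the cost values, and all function names are assumptions made for this illustration only.

```python
# Hypothetical sketch of value iteration on an uncertainty-augmented MDP.
# States pair a 1-D corridor cell with a discretized uncertainty level;
# "move" advances but blurs the belief, "localize" sharpens it at a cost.
import itertools

CELLS = [0, 1, 2, 3]          # 1-D corridor; cell 3 is the goal
U_LEVELS = [0, 1, 2]          # 0 = well localized, 2 = very uncertain
ACTIONS = ["move", "localize"]
GAMMA = 0.95                  # discount factor

def step(cell, u, action):
    """Deterministic toy transition: returns (next_cell, next_u, reward)."""
    if action == "move":
        nxt = min(cell + 1, CELLS[-1])
        nu = min(u + 1, U_LEVELS[-1])      # moving blurs the belief
        # reaching the goal only pays off when uncertainty is low
        reward = 10.0 if (nxt == CELLS[-1] and nu <= 1) else -1.0
        return nxt, nu, reward
    else:  # localize: stay put, reduce uncertainty, pay a time cost
        return cell, max(u - 1, 0), -0.5

def value_iteration(iters=200):
    """Standard value iteration over the augmented state space."""
    V = {s: 0.0 for s in itertools.product(CELLS, U_LEVELS)}
    for _ in range(iters):
        for (c, u) in V:
            V[(c, u)] = max(
                r + GAMMA * V[(nc, nu)]
                for a in ACTIONS
                for (nc, nu, r) in [step(c, u, a)]
            )
    return V

def greedy_policy(V):
    """Extract the action maximizing one-step lookahead value."""
    pi = {}
    for (c, u) in V:
        best, best_q = None, float("-inf")
        for a in ACTIONS:
            nc, nu, r = step(c, u, a)
            q = r + GAMMA * V[(nc, nu)]
            if q > best_q:
                best, best_q = a, q
        pi[(c, u)] = best
    return pi
```

In this sketch the resulting policy re-localizes when the uncertainty level is high near the goal and moves when the belief is sharp, mirroring the paper's claim that policies should choose actions according to the degree of uncertainty rather than always taking the safest action.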
Date of Conference: 20-24 May 2019
Date Added to IEEE Xplore: 12 August 2019