Abstract:
Next-best-view algorithms are commonly used for covering known scenes, for example in search, maintenance, and mapping tasks. In this paper, we consider the problem of planning a strategy for covering articulated environments where the robot also has to manipulate objects to inspect obstructed areas. This problem is particularly challenging due to the many degrees of freedom resulting from the articulation. We propose to exploit the graphics processing units present in many embedded devices to parallelize the computations of a greedy next-best-view approach. We implemented algorithms for costmap computation, path planning, and the simulation and evaluation of viewpoint candidates in OpenGL for Embedded Systems (OpenGL ES), and benchmarked the implementations on multiple device classes ranging from smartphones to multi-GPU servers. We introduce a heuristic for estimating a utility map from images rendered with strategically placed spherical cameras and show in simulation experiments that robots can successfully explore complex articulated scenes with our system.
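As a rough illustration of the greedy next-best-view loop the abstract describes, the sketch below scores candidate viewpoints by newly covered surface minus travel cost and picks the best one at each step. All names here (visible_surfels, travel_cost, min_gain) are hypothetical stand-ins, and the sequential scoring shown is only for clarity; the paper evaluates candidates in parallel on the GPU via OpenGL ES.

    # Minimal sketch of a greedy next-best-view (NBV) loop, under the
    # assumptions stated above. visible_surfels(v) is assumed to return
    # the set of surface elements seen from viewpoint v; travel_cost(v)
    # the cost of reaching it.
    def greedy_nbv(candidates, visible_surfels, travel_cost, min_gain=1.0):
        """Pick viewpoints one at a time, maximizing newly covered
        surface minus travel cost, until no candidate adds coverage."""
        plan, covered = [], set()

        def score(v):
            # Utility = surface newly seen from v, discounted by path cost.
            return len(visible_surfels(v) - covered) - travel_cost(v)

        while candidates:
            best = max(candidates, key=score)
            newly_seen = visible_surfels(best) - covered
            if len(newly_seen) < min_gain:  # stop once the gain is negligible
                break
            covered |= newly_seen
            plan.append(best)
            candidates.remove(best)
        return plan

In the paper's setting, the per-candidate scoring step is the expensive part, which is why it is offloaded to the GPU; the greedy selection structure itself stays the same.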
Date of Conference: 01-05 October 2018
Date Added to IEEE Xplore: 06 January 2019