Abstract:
In this paper, we propose distributed algorithms that use 2-D image measurements to estimate the absolute 3-D poses of the nodes in a camera network, with the purpose of enabling higher-level tasks such as tracking and recognition. We assume that pairs of cameras with overlapping fields of view can estimate their relative 3-D pose (rotation and translation direction) using standard computer vision techniques. The solution we propose combines these local, noisy estimates into a single consistent localization. We derive our algorithms from optimization problems on the manifold of poses. We provide theoretical results on the convergence of the algorithms (choice of the step size, initialization) and on the properties of their solutions (sensitivity, uniqueness), as well as experiments on synthetic and real data. Interestingly, our algorithm for estimating the rotation part of the poses shows some degree of robustness to outliers.
Published in: IEEE Transactions on Automatic Control (Volume 59, Issue 12, December 2014)
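To make the rotation-localization idea in the abstract concrete, the sketch below performs a generic distributed rotation averaging on SO(3): each camera node repeatedly nudges its absolute-rotation estimate toward agreement with its neighbors, using only the noisy pairwise relative rotations. This is a minimal Riemannian-consensus sketch under assumptions of my own (the function names consensus_step, log_so3, exp_so3, the step size, and the synthetic four-camera ring graph are all illustrative), not the authors' exact algorithm from the paper.

import numpy as np
from scipy.spatial.transform import Rotation


def log_so3(R):
    # Rotation matrix -> rotation vector (tangent-space coordinates).
    return Rotation.from_matrix(R).as_rotvec()


def exp_so3(w):
    # Rotation vector -> rotation matrix.
    return Rotation.from_rotvec(w).as_matrix()


def consensus_step(R_abs, edges, R_rel, step=0.2):
    # One synchronous round: every node i nudges its estimate toward
    # R_j @ R_ij.T for each neighbor j, where R_ij ~ R_i.T @ R_j is the
    # noisy relative rotation measured on edge (i, j).
    updates = [np.zeros(3) for _ in R_abs]
    for (i, j), R_ij in zip(edges, R_rel):
        updates[i] += log_so3(R_abs[i].T @ R_abs[j] @ R_ij.T)
        updates[j] += log_so3(R_abs[j].T @ R_abs[i] @ R_ij)
    return [R @ exp_so3(step * w) for R, w in zip(R_abs, updates)]


# Tiny synthetic example (hypothetical setup): four cameras on a ring graph.
rng = np.random.default_rng(0)
n = 4
R_true = [exp_so3(rng.standard_normal(3)) for _ in range(n)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
# Noisy relative measurements R_ij = R_i^T R_j perturbed by small rotations.
R_rel = [R_true[i].T @ R_true[j] @ exp_so3(0.02 * rng.standard_normal(3))
         for i, j in edges]

R_est = [np.eye(3) for _ in range(n)]  # arbitrary initialization
for _ in range(200):
    R_est = consensus_step(R_est, edges, R_rel)

# Absolute rotations are recoverable only up to a global rotation, so report
# the residual of the estimates against the pairwise measurements instead.
err = max(np.linalg.norm(log_so3(R_est[i].T @ R_est[j] @ R_ij.T))
          for (i, j), R_ij in zip(edges, R_rel))
print(f"max edge residual after consensus: {err:.4f} rad")

Each node's update uses only information from its graph neighbors, which is the "distributed" aspect the abstract refers to; the paper's actual algorithms, step-size choices, and convergence guarantees are developed on the manifold of full poses (rotation and translation) rather than in this simplified rotation-only form.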