Abstract:
With proper calibration of its color and depth cameras, the Kinect can capture detailed color point clouds at up to 30 frames per second, a capability that positions it as a low-cost navigation sensor for robotics. This paper therefore presents techniques for efficiently calibrating the Kinect depth camera and for modifying its optics to make it better suited to imaging short-range obstacles. To perform depth calibration, a calibration rig and software were developed that automatically map raw depth values to object depths. The rig consists of a traditional chessboard calibration target with easily locatable depth features at its exterior corners, from which the software extracts corresponding object depths and raw depth values. To modify the Kinect's optics for improved short-range imaging, the Nyko Zoom adapter was used because of its simplicity and low cost. Although effective at reducing the Kinect's minimum range, these optics introduce pronounced depth distortion. A method based on capturing depth images of planar targets at several known depths produced an empirical distortion model for correcting this distortion in software. Together, the modified optics and the empirical depth undistortion procedure improved the Kinect's resolution and decreased its minimum range by approximately 30%.
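The abstract does not give the functional form of the raw-to-depth mapping or of the distortion model, so the sketch below is only illustrative: it fits a commonly used inverse-linear relation between the Kinect's raw depth reading and metric depth from rig-style measurements, and builds a simple per-pixel correction map from planar captures. All function names, the multiplicative correction parameterization, and the sample values are hypothetical, not taken from the paper.

```python
# Illustrative sketch only -- not the authors' published code. It assumes an
# inverse-linear relation between the Kinect's raw depth value and metric
# depth, a common empirical model; the paper's actual fit may differ.
import numpy as np


def fit_raw_to_depth(raw_values, depths_m):
    """Fit 1/z = a*raw + b by linear least squares.

    raw_values -- raw depth readings taken at the rig's corner features
    depths_m   -- corresponding measured object depths in metres
    """
    raw = np.asarray(raw_values, dtype=float)
    inv_z = 1.0 / np.asarray(depths_m, dtype=float)
    A = np.column_stack([raw, np.ones_like(raw)])
    (a, b), *_ = np.linalg.lstsq(A, inv_z, rcond=None)
    return a, b


def raw_to_depth(raw_image, a, b):
    """Convert a raw depth image to metric depth with the fitted model."""
    raw = np.asarray(raw_image, dtype=float)
    with np.errstate(divide="ignore"):
        z = 1.0 / (a * raw + b)
    z[~np.isfinite(z)] = 0.0          # flag invalid pixels as zero depth
    return z


def build_undistortion_map(depth_stack, plane_depths):
    """Per-pixel correction factors from depth images of a flat target.

    depth_stack  -- (N, H, W) measured depths of the planar target
    plane_depths -- length-N reference depths of the target (metres)
    """
    meas = np.asarray(depth_stack, dtype=float)
    ref = np.asarray(plane_depths, dtype=float)[:, None, None]
    ratio = np.where(meas > 0, ref / meas, np.nan)   # skip invalid pixels
    return np.nanmean(ratio, axis=0)                 # average over stand-offs


# Hypothetical rig measurements: (raw depth value, measured depth in metres).
samples = [(870, 0.60), (820, 0.80), (780, 1.00), (700, 1.50), (620, 2.20)]
a, b = fit_raw_to_depth(*zip(*samples))
print(raw_to_depth(np.array([[870, 700]]), a, b))
```

A multiplicative per-pixel correction averaged over stand-off distances is only one possible parameterization; the paper derives its own empirical distortion model from the planar captures.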
Published in: 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
Date of Conference: 13-15 September 2012
Date Added to IEEE Xplore: 10 November 2012