
Kinect Depth Recovery Using a Color-Guided, Region-Adaptive, and Depth-Selective Framework

Published: 31 March 2015

Abstract

Existing depth recovery approaches each have different limitations when applied to Kinect depth data. In this article, we therefore propose to integrate their effective features, namely adaptive support-region selection, reliable depth selection, and color guidance, under a single optimization framework for Kinect depth recovery. In particular, we formulate depth recovery as an energy minimization problem that performs depth hole filling and denoising simultaneously. The energy function consists of a fidelity term and a regularization term, both designed according to the characteristics of Kinect data. Our framework inherits and improves on the idea of guided filtering by incorporating structure information and prior knowledge of the Kinect noise model. By analyzing the solution of the optimization framework, we also derive a local filtering version that provides an efficient and effective way of improving existing filtering techniques. Quantitative evaluations on our synthesized dataset and experiments on real Kinect data show that the proposed method achieves superior performance in terms of recovery accuracy and visual quality.
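To make the fidelity-plus-regularization formulation concrete, the following is a generic sketch of an energy of the kind described above; the notation (D for the recovered depth map, D_0 for the raw Kinect depth, Omega for the set of reliably measured pixels, N(p) for an adaptive support region, and w_{pq} for color/structure-guided weights) is ours and only illustrates the general form, not the paper's exact terms:

    E(D) = \sum_{p \in \Omega} (D(p) - D_0(p))^2 + \lambda \sum_{p} \sum_{q \in N(p)} w_{pq} (D(p) - D(q))^2

The fidelity sum keeps the recovered depth close to reliable raw measurements, while the regularization sum propagates depth into holes and smooths noise under guidance from the registered color image. Minimizing such a quadratic energy amounts to solving a sparse linear system, and a single weighted-average update per pixel already behaves like a local filter, which is the spirit of the local filtering version the abstract mentions. Below is a minimal Python sketch of a color-guided, validity-aware local filter in that spirit; the function name and every parameter are our assumptions for illustration, not the authors' implementation:

    import numpy as np

    def guided_depth_filter(depth, color, radius=5, sigma_s=3.0, sigma_c=10.0):
        # Illustrative joint-bilateral-style filter (our sketch, not the
        # paper's exact method): each output depth is a weighted average of
        # valid neighboring depths, weighted by spatial distance and color
        # similarity to the center pixel.
        # depth: (H, W) array; 0 marks holes (the Kinect convention).
        # color: (H, W, 3) color image registered to the depth map.
        h, w = depth.shape
        out = depth.astype(np.float64).copy()
        valid = depth > 0
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))  # spatial kernel
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                # Color-similarity weights against the center pixel.
                c_diff = color[y0:y1, x0:x1].astype(np.float64) - color[y, x]
                w_c = np.exp(-(c_diff**2).sum(axis=-1) / (2.0 * sigma_c**2))
                w_s = spatial[y0 - y + radius:y1 - y + radius,
                              x0 - x + radius:x1 - x + radius]
                wgt = w_s * w_c * valid[y0:y1, x0:x1]  # only valid depths vote
                total = wgt.sum()
                if total > 0:  # pixels with no valid neighbors stay unchanged
                    out[y, x] = (wgt * depth[y0:y1, x0:x1]).sum() / total
        return out

Holes are filled whenever the window contains at least one valid depth, and valid pixels are simultaneously denoised, mirroring the simultaneous hole filling and denoising described above.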

Published In

ACM Transactions on Intelligent Systems and Technology, Volume 6, Issue 2
Special Section on Visual Understanding with RGB-D Sensors
May 2015, 381 pages
ISSN: 2157-6904
EISSN: 2157-6912
DOI: 10.1145/2753829
Editor: Huan Liu
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 31 March 2015
Accepted: 01 October 2014
Revised: 01 July 2014
Received: 01 April 2014
Published in TIST Volume 6, Issue 2


Author Tags

1. Depth recovery
2. Kinect
3. Variational framework

Qualifiers

• Research-article
• Research
• Refereed

Funding Sources

• NSF of China
• Major State Basic Research Development Program of China
• 973 Program
• MoE AcRF Tier-1
• 111 Project


Cited By

• (2023) Self-Supervised Learning for RGB-Guided Depth Enhancement by Exploiting the Dependency Between RGB and Depth. IEEE Transactions on Image Processing, 32, 159-174. https://doi.org/10.1109/TIP.2022.3226419
• (2023) Depth Recovery With Large-Area Data Loss Guided by Polarization Cues for Time-of-Flight Imaging. IEEE Access, 11, 38840-38849. https://doi.org/10.1109/ACCESS.2023.3267814
• (2023) A single defocused image depth recovery with superpixel segmentation. Pattern Analysis & Applications, 26(3), 1113-1123. https://doi.org/10.1007/s10044-023-01133-3
• (2019) Multi-Scale Guided Mask Refinement for Coarse-to-Fine RGB-D Perception. IEEE Signal Processing Letters, 26(2), 217-221. https://doi.org/10.1109/LSP.2018.2886407
• (2019) Dealing with Missing Depth: Recent Advances in Depth Image Completion and Estimation. In RGB-D Image Analysis and Processing, 15-50. https://doi.org/10.1007/978-3-030-28603-3_2
• (2018) A cloud-based system for dynamically capturing appliance usage relations. International Journal of Web and Grid Services, 12(3), 257-272. https://doi.org/10.1504/IJWGS.2016.079161
• (2018) Embedding Temporally Consistent Depth Recovery for Real-time Dense Mapping in Visual-inertial Odometry. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 693-698. https://doi.org/10.1109/IROS.2018.8593917
• (2018) Closed-Form Solution of Simultaneous Denoising and Hole Filling of Depth Image. In 2018 25th IEEE International Conference on Image Processing (ICIP), 968-972. https://doi.org/10.1109/ICIP.2018.8451850
• (2018) A comparative review of plausible hole filling strategies in the context of scene depth image completion. Computers & Graphics, 72, 39-58. https://doi.org/10.1016/j.cag.2018.02.001
• (2018) Nighttime image Dehazing with modified models of color transfer and guided image filter. Multimedia Tools and Applications, 77(3), 3125-3141. https://doi.org/10.1007/s11042-017-4954-9
