Guided Depth Completion Using Active Infrared Images in Time of Flight Systems

  • Conference paper
  • First Online:
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14062)
  • Included in the following conference series: Pattern Recognition and Image Analysis (IbPRIA 2023)

Abstract

Depth information has been used successfully in many computer vision applications, but depth imaging sensors frequently produce missing values, mainly around object boundaries. These invalid values and image gaps cause serious problems in some applications. In order to estimate missing depth values and fill gaps in depth images (D), we propose a new algorithm for depth completion based on belief propagation. The rationale of the proposed technique is that missing values must be estimated by taking object boundaries into account, mainly those related to depth discontinuities. Time of Flight (ToF) cameras provide depth information together with additional data, such as active infrared (IR) brightness images. Object boundary information in areas with missing depth can therefore be reconstructed from the auxiliary IR image, or from RGB images in RGB-D systems. These auxiliary images are used as guidance for depth completion, also known as depth inpainting. Experimental results show that our algorithm is simple to implement, fast, and produces better results than other more complex, and usually slower, existing methods.
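The paper's belief-propagation formulation is not reproduced on this page, but the guided-completion idea described in the abstract can be sketched in a few lines of Python/NumPy. The sketch below is illustrative only: the function name guided_depth_fill, the 4-connected iterative fill, and the Gaussian IR-similarity weight controlled by sigma_ir are assumptions, not the authors' algorithm. It merely shows how an auxiliary IR image can act as a guide so that filled depth values do not bleed across object boundaries.

```python
import numpy as np

def guided_depth_fill(depth, ir, sigma_ir=10.0, max_iters=200):
    """Iteratively fill invalid depth pixels (value 0) with an
    IR-guided neighbourhood average.

    Minimal illustration of guidance-based completion, NOT the
    belief-propagation method of the paper: neighbours whose IR
    brightness is similar to the target pixel get a larger weight,
    so filling tends not to cross object boundaries.
    """
    d = depth.astype(np.float64).copy()
    g = ir.astype(np.float64)
    h, w = d.shape

    for _ in range(max_iters):
        missing = np.argwhere(d == 0)
        if missing.size == 0:
            break
        filled_any = False
        for y, x in missing:
            num, den = 0.0, 0.0
            # 4-connected neighbours that already hold a valid depth
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and d[ny, nx] > 0:
                    # IR-similarity weight: discourages averaging across
                    # brightness (and hence likely depth) edges.
                    wgt = np.exp(-((g[ny, nx] - g[y, x]) ** 2) / (2 * sigma_ir ** 2))
                    num += wgt * d[ny, nx]
                    den += wgt
            if den > 1e-6:
                d[y, x] = num / den
                filled_any = True
        if not filled_any:
            break
    return d


if __name__ == "__main__":
    # Toy example: a step-edge depth map with a hole straddling the boundary.
    depth = np.full((8, 8), 1000.0)
    depth[:, 4:] = 2000.0
    depth[3:5, 3:6] = 0.0            # invalid region (e.g. ToF dropout)
    ir = np.full((8, 8), 50.0)
    ir[:, 4:] = 200.0                 # IR brightness follows the object edge
    print(guided_depth_fill(depth, ir))
```

In this toy example the hole straddles a depth step that coincides with an IR brightness edge, so the left part of the hole is filled from the near surface and the right part from the far one, rather than being blurred across the boundary.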



Acknowledgments

This work was partially supported by Analog Devices, Inc. and by the Agencia Valenciana de la Innovacion of the Generalitat Valenciana under the program “Plan GEnT. Doctorados Industriales. Innodocto”.

Author information

Corresponding author

Correspondence to Amina Achaibou.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Achaibou, A., Sanmartín-Vich, N., Pla, F., Calpe, J. (2023). Guided Depth Completion Using Active Infrared Images in Time of Flight Systems. In: Pertusa, A., Gallego, A.J., Sánchez, J.A., Domingues, I. (eds) Pattern Recognition and Image Analysis. IbPRIA 2023. Lecture Notes in Computer Science, vol 14062. Springer, Cham. https://doi.org/10.1007/978-3-031-36616-1_26


  • DOI: https://doi.org/10.1007/978-3-031-36616-1_26

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-36615-4

  • Online ISBN: 978-3-031-36616-1

  • eBook Packages: Computer Science, Computer Science (R0)
