Quantitative Analysis of Surface Reconstruction Accuracy Achievable with the TSDF Representation

  • Conference paper
  • Published in: Computer Vision Systems (ICVS 2015)
  • Part of the book series: Lecture Notes in Computer Science, volume 9163

Abstract

In recent years, KinectFusion and related algorithms have enabled significant advances in real-time simultaneous localization and mapping (SLAM) with depth-sensing cameras. Nearly all of these algorithms represent the observed scene with a truncated signed distance function (TSDF). The reconstruction accuracy achievable with this representation is crucial for camera pose estimation and object reconstruction. We therefore evaluate this reconstruction accuracy in an optimal setting, i.e., assuming error-free camera pose estimation and depth measurements. For this purpose, we use a synthetic dataset of depth image sequences with corresponding ground-truth camera poses and compare the reconstructed point clouds against the ground-truth meshes. We investigate several influencing factors, in particular the TSDF resolution, and show that the TSDF is a very powerful representation even at low resolutions.
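The TSDF representation evaluated here fuses depth images into a voxel grid by storing, per voxel, a truncated signed distance to the nearest observed surface and a fusion weight. The sketch below shows a minimal, vectorized integration step in the spirit of Curless and Levoy's volumetric fusion; the function name, the simple weight-of-one update rule, and the intrinsics layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def integrate(tsdf, weight, depth, K, T_wc, origin, voxel_size, trunc):
    """Fuse one metric depth image into a TSDF volume of shape (N, N, N)."""
    n = tsdf.shape[0]
    ijk = np.stack(np.meshgrid(*(np.arange(n),) * 3, indexing="ij"), -1).reshape(-1, 3)
    pts_w = origin + (ijk + 0.5) * voxel_size            # voxel centers, world frame
    T_cw = np.linalg.inv(T_wc)                           # world -> camera
    pts_c = (T_cw[:3, :3] @ pts_w.T + T_cw[:3, 3:4]).T
    z = pts_c[:, 2]
    zs = np.where(z > 1e-6, z, 1e-6)                     # avoid division by zero
    u = np.round(K[0, 0] * pts_c[:, 0] / zs + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_c[:, 1] / zs + K[1, 2]).astype(int)
    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth[v.clip(0, h - 1), u.clip(0, w - 1)], 0.0)
    sdf = d - z                                          # signed distance along the optical axis
    valid &= (d > 0) & (sdf > -trunc)                    # drop voxels far behind the surface
    f = np.clip(sdf / trunc, -1.0, 1.0)                  # truncate and normalize
    t, wgt = tsdf.reshape(-1), weight.reshape(-1)        # flat views into the volume
    i = np.where(valid)[0]
    t[i] = (wgt[i] * t[i] + f[i]) / (wgt[i] + 1.0)       # running weighted average
    wgt[i] += 1.0
```

A surface is then extracted at the zero crossing of the fused field, e.g. via marching cubes or ray casting; shrinking `voxel_size` (i.e. increasing `n` for a fixed volume) corresponds to the TSDF resolution factor the paper investigates.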

Notes

  1. Voxels are considered currently seen if they lie within the camera's field of view, regardless of whether they are occluded by a surface.
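The visibility criterion of this footnote reduces to a pure field-of-view test: a voxel center is projected with the pinhole model and checked against the image bounds, with no depth comparison, so occluded voxels still count as seen. A minimal sketch, assuming a standard 3x3 intrinsic matrix `K` (function name is illustrative):

```python
import numpy as np

def in_view(p_cam, K, width, height):
    """True if a 3D point in camera coordinates projects inside the image.
    Deliberately ignores occlusion, matching the footnote's definition."""
    x, y, z = p_cam
    if z <= 0:                        # behind the camera
        return False
    u = K[0, 0] * x / z + K[0, 2]     # pinhole projection
    v = K[1, 1] * y / z + K[1, 2]
    return 0 <= u < width and 0 <= v < height
```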


Acknowledgments

This work was supported by Transregional Collaborative Research Centre SFB/TRR 62 (Companion-Technology for Cognitive Technical Systems) funded by the German Research Foundation (DFG).

Author information

Correspondence to Diana Werner.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Werner, D., Werner, P., Al-Hamadi, A. (2015). Quantitative Analysis of Surface Reconstruction Accuracy Achievable with the TSDF Representation. In: Nalpantidis, L., Krüger, V., Eklundh, J.-O., Gasteratos, A. (eds.) Computer Vision Systems. ICVS 2015. Lecture Notes in Computer Science, vol. 9163. Springer, Cham. https://doi.org/10.1007/978-3-319-20904-3_16

  • DOI: https://doi.org/10.1007/978-3-319-20904-3_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-20903-6

  • Online ISBN: 978-3-319-20904-3

  • eBook Packages: Computer Science
