
An accurate method for 3D object reconstruction from unordered sparse views

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

3D object reconstruction from depth scans is a long-investigated problem, yet complete solutions are still missing. Recent advances in consumer depth sensors have led to a number of excellent methods for 3D object reconstruction, with the most impressive results obtained by KinectFusion and Visual Odometry. Both methods reconstruct an object from a dense set of RGB-D video frames, with the important restrictions that the object has to be static and the camera motion has to be smooth during the reconstruction. As an alternative, we propose a method which is more accurate, while being able to reconstruct the object from a sparse set of unordered depth scans. Our method starts with pairwise view matching based on robust point pair features. It is followed by an efficient graph-based view consistency check that produces an initial 3D model reconstruction. To improve the accuracy, we then simultaneously optimize the camera poses and the object model using depth and color information. A quantitative evaluation demonstrates that we can reconstruct objects with equal or better accuracy than KinectFusion and Visual Odometry, while using a significantly smaller number of unordered input scans.
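
For illustration only: the pairwise view matching stage relies on point pair features, i.e., descriptors computed from two oriented surface points. The Python sketch below shows the standard four-component feature in the style of Drost et al., (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)), together with a simple quantization into hash keys. The function names, bin sizes, and toy input values are illustrative assumptions and do not reproduce the paper's implementation.

    import numpy as np

    def unit(v):
        """Normalize a vector to unit length."""
        return v / np.linalg.norm(v)

    def angle_between(a, b):
        """Unsigned angle between two vectors, in [0, pi]."""
        return float(np.arccos(np.clip(np.dot(unit(a), unit(b)), -1.0, 1.0)))

    def point_pair_feature(p1, n1, p2, n2):
        """Four-component feature for an oriented point pair, in the style of
        Drost et al.: (||d||, angle(n1, d), angle(n2, d), angle(n1, n2))."""
        d = p2 - p1
        dist = float(np.linalg.norm(d))
        if dist < 1e-9:  # degenerate pair: coincident points
            return np.array([0.0, 0.0, 0.0, angle_between(n1, n2)])
        return np.array([dist,
                         angle_between(n1, d),
                         angle_between(n2, d),
                         angle_between(n1, n2)])

    def quantize_ppf(f, dist_step=0.01, angle_step=np.deg2rad(12.0)):
        """Quantize a feature into integer bins so that similar pairs map to
        the same hash key (dist_step is in the point cloud's length unit;
        both step sizes here are arbitrary placeholders)."""
        return (int(f[0] / dist_step),
                int(f[1] / angle_step),
                int(f[2] / angle_step),
                int(f[3] / angle_step))

    if __name__ == "__main__":
        # Two oriented points 5 cm apart whose normals differ by 30 degrees.
        p1, n1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
        p2 = np.array([0.05, 0.0, 0.0])
        n2 = unit(np.array([0.0, np.sin(np.pi / 6), np.cos(np.pi / 6)]))
        f = point_pair_feature(p1, n1, p2, n2)
        print("PPF:", f)            # approximately [0.05, 1.571, 1.571, 0.524]
        print("key:", quantize_ppf(f))

In a full matching pipeline, quantized features like these are typically stored in a hash table that maps each key to the model point pairs producing it, so that point pairs from another view can vote for candidate relative poses between the two views.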




Author information


Corresponding author

Correspondence to Danilo Djordjevic.


About this article


Cite this article

Djordjevic, D., Cvetković, S. & Nikolić, S.V. An accurate method for 3D object reconstruction from unordered sparse views. SIViP 11, 1147–1154 (2017). https://doi.org/10.1007/s11760-017-1069-8

