
3D Scene Reconstruction Using Colorimetric and Geometric Constraints on Iterative Closest Point Method

Published in: Multimedia Tools and Applications

Abstract

The advent of 3D scene reconstruction frameworks using RGB-D cameras has enabled users to easily reconstruct their indoor environments in a virtual space and to experience immersive augmented reality with the reconstructed 3D scene. Technically, early RGB-D scene reconstruction frameworks are based on frame-to-model registration, which iteratively estimates the camera transformation parameters between the incoming depth frame and the previously reconstructed 3D model. However, by the nature of frame-to-model registration, the conventional framework suffers from an inherent drift problem caused by accumulated alignment error. In this paper, we propose a new 3D scene reconstruction framework with improved camera tracking to reduce this drift. The framework introduces two types of constraints: a colorimetric constraint and a geometric constraint. For the colorimetric constraint, we assign larger weights to reliable feature correspondences obtained from the color image frames. For the geometric constraint, we compute consistently oriented surface normal vectors for the noisy point cloud data. Experimental results show that the proposed framework reduces the absolute trajectory error, which quantifies the amount of drift, and yields a more consistent trajectory than the conventional framework.
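The abstract describes two modifications to ICP-based frame-to-model registration: weighting correspondences by the reliability of their color-feature matches, and estimating consistently oriented surface normals from noisy points. The following is a minimal sketch of these two general ideas, assuming NumPy point clouds and hypothetical helper names (estimate_normals, weighted_point_to_plane_error); it illustrates the underlying technique only and is not the authors' implementation.

import numpy as np

def estimate_normals(points, k=8, viewpoint=np.zeros(3)):
    # PCA normal per point, flipped so every normal faces the viewpoint
    # (a consistent-orientation step in the spirit of the geometric constraint).
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        # k nearest neighbours by brute force (adequate for a small sketch)
        nbrs = points[np.argsort(np.linalg.norm(points - p, axis=1))[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        n = np.linalg.eigh(cov)[1][:, 0]   # eigenvector of the smallest eigenvalue
        if np.dot(n, viewpoint - p) < 0:   # orient toward the camera position
            n = -n
        normals[i] = n
    return normals

def weighted_point_to_plane_error(src, dst, normals, weights):
    # Weighted point-to-plane residual minimized in one ICP iteration;
    # larger weights would be given to more reliable color-feature matches.
    r = np.einsum('ij,ij->i', src - dst, normals)  # signed distance to tangent plane
    return np.sum(weights * r ** 2)

# Toy usage: identical clouds with uniform weights give zero error.
pts = np.random.rand(100, 3)
nrm = estimate_normals(pts)
w = np.ones(len(pts))
print(weighted_point_to_plane_error(pts, pts, nrm, w))

In such a scheme, the weights could, for example, be derived from the descriptor-distance ratio of the color-feature matches, so that ambiguous correspondences contribute less to the alignment than reliable ones.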



Acknowledgements

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean Government (MSIP) (No. 2011-0030079).

Author information


Corresponding author

Correspondence to Yo-Sung Ho.

Appendix

Comparison table of the absolute trajectory error



About this article


Cite this article

Shin, D.W., Ho, Y.S. 3D Scene Reconstruction Using Colorimetric and Geometric Constraints on Iterative Closest Point Method. Multimed Tools Appl 77, 14381–14406 (2018). https://doi.org/10.1007/s11042-017-5034-x


  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11042-017-5034-x
