Abstract
Stratified 3D reconstruction, a layer-by-layer 3D reconstruction that upgrades from a projective to an affine and finally to a metric reconstruction, is a well-known 3D reconstruction method in computer vision. It is also a key supporting technology for various well-known applications, such as street view, Smart3D, and oblique photogrammetry. Generally speaking, the existing computer vision methods in the literature can be roughly classified into either geometry-based approaches for spatial vision or learning-based approaches for object vision. Although deep learning has demonstrated tremendous success in object vision in recent years, learning 3D scene reconstruction from multiple images remains rare, if not non-existent, apart from work on depth learning from single images. This study explores the feasibility of learning stratified 3D reconstruction from putative point correspondences across images, and assesses whether such learning can be as robust to matching outliers as traditional geometry-based methods are. For this purpose, a special parsimonious neural network is designed. Our results show that it is indeed possible to learn a stratified 3D reconstruction from noisy image point correspondences, and the learnt reconstructions appear satisfactory, although they are still not on a par with the state of the art in the structure-from-motion community, largely due to the lack of an explicit robust outlier detector such as random sample consensus (RANSAC). To the best of our knowledge, our study is the first attempt in the literature to learn 3D scene reconstruction from multiple images. Our results also show that how to implicitly or explicitly integrate an outlier detector into learning methods is a key problem to solve before learnt 3D scene structures can match those produced by the current geometry-based state of the art.
Otherwise, any significant advance in learning 3D structures from multiple images seems difficult, if not impossible. We even speculate that deep learning might be inherently unsuitable for learning 3D structure from multiple images or, more generally, for solving spatial vision problems.
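The robustness gap the abstract attributes to the missing outlier detector can be made concrete with a minimal RANSAC sketch. The example below is illustrative only (it robustly fits a 2D line rather than a fundamental matrix, and all names are our own), but the principle is the same one geometry-based pipelines apply to putative point correspondences: repeatedly fit a minimal model to a random sample and keep the model with the largest consensus set.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_thresh=0.05, seed=None):
    """Robustly fit a line y = a*x + b to 2D points; return (a, b) and inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (0.0, 0.0)
    for _ in range(n_iters):
        # Minimal sample: two points define a candidate line.
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-12:
            continue  # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Consensus set: points whose vertical residual is below the threshold.
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers

# Synthetic data: 80 noisy points on y = 2x + 1 plus 20 gross outliers,
# mimicking a correspondence set contaminated by mismatches.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 80)
inlier_pts = np.stack([x, 2.0 * x + 1.0 + rng.normal(0.0, 0.01, 80)], axis=1)
outlier_pts = rng.uniform(-5.0, 5.0, (20, 2))
pts = np.vstack([inlier_pts, outlier_pts])

(a, b), mask = ransac_line(pts, seed=1)
```

A least-squares fit to the same contaminated data would be pulled far off the true line by the 20 outliers; the consensus-based fit ignores them, which is exactly the behaviour a learning method would need to reproduce, implicitly or explicitly, to match geometry-based pipelines.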
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant Nos. 61333015, 61375042, 61421004, 61573359, 61772444).
Cite this article
Dong, Q., Shu, M., Cui, H. et al. Learning stratified 3D reconstruction. Sci. China Inf. Sci. 61, 023101 (2018). https://doi.org/10.1007/s11432-017-9234-7