Authors:
Matthew Moynihan 1, Rafael Pagés 2 and Aljosa Smolic 1
Affiliations:
1 V-SENSE, School of Computer Science and Statistics, Trinity College Dublin, Ireland
2 V-SENSE, School of Computer Science and Statistics, Trinity College Dublin, Ireland; Volograms Ltd, Dublin, Ireland
Keyword(s):
Point Clouds, Upsampling, Temporal Coherence, Free Viewpoint Video, Multiview Video.
Related Ontology Subjects/Areas/Topics:
Applications; Computer Vision, Visualization and Computer Graphics; Geometry and Modeling; Image-Based Modeling; Motion, Tracking and Stereo Vision; Optical Flow and Motion Analyses; Pattern Recognition; Software Engineering; Stereo Vision and Structure from Motion
Abstract:
This paper presents an approach to upsampling point cloud sequences captured with a wide-baseline camera setup in a spatio-temporally consistent manner. The system uses edge-aware scene flow to track the movement of 3D points across a free-viewpoint video scene and impose temporal consistency. In addition to geometric upsampling, a Hausdorff distance quality metric is used to filter noise and further improve the density of each point cloud. Results show that the system produces temporally consistent point clouds, not only reducing errors and noise but also recovering details that were lost in frame-by-frame dense point cloud reconstruction. The system has been successfully tested on sequences captured with both static and handheld cameras.
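As a rough illustration of the kind of distance-based filtering the abstract mentions, the following is a minimal sketch (not the authors' implementation) of rejecting upsampled points that lie too far from a reference cloud, using a one-sided nearest-neighbour distance in the spirit of a Hausdorff-distance quality check. The function name, threshold value, and use of SciPy's KD-tree are illustrative assumptions.

```python
# Hypothetical sketch: filter an upsampled point cloud by its distance to a
# reference cloud, in the spirit of a Hausdorff-distance quality metric.
import numpy as np
from scipy.spatial import cKDTree

def filter_by_distance(upsampled, reference, threshold):
    """Keep only upsampled points whose nearest-neighbour distance to the
    reference cloud is below `threshold` (both clouds are Nx3 arrays)."""
    tree = cKDTree(reference)
    dists, _ = tree.query(upsampled, k=1)  # one-sided point-to-set distances
    return upsampled[dists < threshold]

# Example usage with random stand-in data (threshold chosen arbitrarily)
reference = np.random.rand(1000, 3)
upsampled = np.random.rand(5000, 3)
filtered = filter_by_distance(upsampled, reference, threshold=0.05)
```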