Reference consistent reconstruction of 3D cloth surface

https://doi.org/10.1016/j.cviu.2012.04.001

Abstract

We propose a multiview method for reconstructing a folded cloth surface on which regularly-textured color patches are printed. These patches provide not only easy pixel correspondence across multiple views but also the following two new functions. (1) Error recovery: errors in 3D surface reconstruction (e.g. errors in occlusion boundaries and shaded regions) can be recovered based on the spatio-temporal consistency of the patches. (2) Single-view hole filling: patches that are visible only from a single view can be extrapolated from the reconstructed ones based on the regularity of the patches. Using these functions to improve the 3D reconstruction also produces the patch configuration on the reconstructed surface, showing how the cloth is deformed from its reference shape. Experimental results demonstrate the above improvements and the accurate patch configurations produced by our method.

Highlights

  • Multiview method for reconstructing a folded cloth surface with color patches.

  • Errors in 3D surface reconstruction can be recovered with the consistency of the patches.

  • Single-view patches can be extrapolated with the regularity of the patches.

  • 3D surface deformation is obtained from the patches on the reconstructed surface.

Introduction

Modeling the motion of non-rigid clothing is one of the important topics in Computer Vision and Graphics: for surface reconstruction [1], body estimation under clothing [2], [3], and physical cloth simulation [4], [5]. Several studies have proposed ways to obtain the cloth model/parameters from the surface points of a cloth (see [6], [7], for example). Sample data of cloth motion are also required for data-driven approaches that do not use a physical cloth model (e.g. free cloth motion [8] and cloth motion driven by human motion [9]). Therefore, cloth surface reconstruction is a fundamental technology for all of the above applications.

We developed a 3D reconstruction method with the following properties that are crucial for cloth modeling:

  • Correctness. Reconstruction error should be small.

  • High spatial density. Spatially dense points are necessary because a cloth is completely non-rigid and its shape changes significantly even within a small area.

  • High temporal density. Quick motion should be captured with a high frame-rate.

  • Completeness. The surface of a cloth should be reconstructed as completely as possible to capture the whole motion of a cloth.

  • Configuration. To capture the instantaneous motion of a cloth as well as its temporal deformation, each point on the reconstructed surface must correspond to its respective point on the reference surface (i.e. flat cloth with no tension). We call this correspondence a configuration. The configuration includes the orientation of each patch as well as its location. The configuration also enables time-coherent texture mapping (i.e. mapping any texture onto a deforming 3D surface).

These properties fall into two groups: shape reconstruction (the first four) and configuration acquisition. In our reference-configuration-consistent reconstruction, the inseparable relationship between the two is exploited to improve the accuracy and robustness of both.
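To make the notion of a configuration concrete, the following is a minimal sketch, assuming a regular grid of printed patches, of the record one might keep per reconstructed patch. The names PatchConfiguration, to_reference_uv, and patch_size_mm are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Tuple

# Illustrative sketch only: one record per color patch recovered on the
# reconstructed surface. The reference surface is the flat, tension-free
# cloth, addressed by integer patch indices (row, col).
@dataclass
class PatchConfiguration:
    position_3d: Tuple[float, float, float]   # patch center on the reconstructed surface
    reference_index: Tuple[int, int]          # (row, col) of the patch on the flat cloth
    orientation: float                        # in-plane rotation of the patch, in radians

    def to_reference_uv(self, patch_size_mm: float) -> Tuple[float, float]:
        """Map the patch to metric (u, v) coordinates on the reference cloth.

        Knowing this mapping for every patch is what enables time-coherent
        texture mapping: any texture defined on the flat cloth can be
        transferred onto the deforming 3D surface frame by frame.
        """
        row, col = self.reference_index
        return col * patch_size_mm, row * patch_size_mm
```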

Section snippets

Related work

General 3D reconstruction algorithms can be used for cloth surface reconstruction (e.g. dense and accurate reconstruction [21] and a method for textureless objects [23]). Recently, bundle adjustment [25], [26] and Graph-cut [15], [27] have been widely used to obtain optimal solutions. These algorithms can obtain 3D points from multiview images, although some incorrect points are included and the complete shape cannot be captured due to occlusion and image-processing errors such as multiview point…

White–Crane–Forsyth method

Many methods have been proposed for reconstructing a 3D surface and its motion. Among them, the method proposed by White et al. [10] is the state-of-the-art multiview method, using color patches printed on a cloth for reliability and precision. In their method, the motion of a cloth with regularly-textured patches is observed from multiple views. Although the method requires printed patches, it is useful for acquiring accurate cloth surfaces and parameters (e.g. tension and spring parameters) for Vision and Graphics…

Detailed analysis of the problems and their solutions

This section analyzes what caused the problems in the White–Crane–Forsyth method [10], giving us insight into how to resolve them. The detailed implementation of the solutions is described in Section 5.

Detailed implementation

Based on the discussion in Section 4, our reconstruction method is designed as shown in Figs. 9 and 10. Compared with the White–Crane–Forsyth method, occlusion and ambiguity handling is added, and neighborhood matching, pruning, and hole filling are augmented.
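To give a concrete flavor of one augmented step, here is a minimal sketch of pruning by neighborhood consistency: a reconstructed patch center is kept only if its distances to already-reconstructed grid neighbors stay close to the printed patch spacing. The function name, data layout, and threshold are assumptions for illustration, not the authors' implementation (which additionally exploits spatio-temporal consistency).

```python
import numpy as np

def prune_by_neighborhood_consistency(points, grid_index, patch_spacing, tol=0.5):
    """Reject reconstructed patch centers inconsistent with their grid neighbors.

    points        : dict mapping (row, col) -> np.array([x, y, z])
    grid_index    : iterable of (row, col) keys to test
    patch_spacing : known physical distance between adjacent patch centers
    tol           : allowed relative deviation (cloth can stretch and fold,
                    so the threshold is deliberately loose)

    Hypothetical sketch only; the spatio-temporal consistency used in the
    paper is not reproduced here.
    """
    kept = {}
    for (r, c) in grid_index:
        p = points.get((r, c))
        if p is None:
            continue
        neighbors = [points.get(k) for k in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))]
        dists = [np.linalg.norm(p - q) for q in neighbors if q is not None]
        # Keep isolated points; otherwise require all adjacent centers to lie
        # within (1 +/- tol) of the printed patch spacing.
        if not dists or all(abs(d - patch_spacing) <= tol * patch_spacing for d in dists):
            kept[(r, c)] = p
    return kept
```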

Reconstruction from sequences of a moving cloth

A moving cloth, the same one used in the experiments described above, was captured as image sequences by a pair of synchronized cameras.

Fig. 19 shows the results obtained from the two-view image sequences. The images in the sixth and seventh columns of Fig. 19 were generated by projecting texture images, (a) the colors detected in the observed images and (b) a new texture, onto the reconstructed 3D surface. The textures were mapped from triangles in the 2D texture image to the corresponding triangles on the 3D surface.
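As a reading aid for this texture-transfer step, below is a minimal sketch, under assumed interfaces, of mapping a point inside a 2D texture triangle onto the corresponding 3D surface triangle via barycentric coordinates. The names barycentric_coords and map_texture_point are illustrative, not the authors' code.

```python
import numpy as np

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def map_texture_point(p_uv, tri_uv, tri_3d):
    """Map a 2D texture-image point onto the reconstructed 3D triangle.

    p_uv   : 2D point inside the texture triangle
    tri_uv : 3x2 array, texture-triangle vertices
    tri_3d : 3x3 array, corresponding 3D surface-triangle vertices
    """
    u, v, w = barycentric_coords(p_uv, *tri_uv)
    return u * tri_3d[0] + v * tri_3d[1] + w * tri_3d[2]

# Usage: transfer the centroid of a texture triangle onto the surface.
tri_uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tri_3d = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.02], [0.0, 0.1, 0.05]])
print(map_texture_point(np.array([1 / 3, 1 / 3]), tri_uv, tri_3d))
```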

Concluding remarks

We have proposed a method for reconstructing the 3D surface of a folded cloth with cameras. Regularly-textured color patches printed on the cloth surface are employed to (1) provide explicit occlusion and ambiguity handling in a single view and (2) acquire the patch configuration on the reconstructed surface. The patch configuration is acquired by Graph-cut so that each patch on the reconstructed surface is consistent with its neighboring patches and its projection patches in all observed images. With…
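The snippet above describes the configuration step only qualitatively. As a reminder of the standard form such a Graph-cut labeling problem usually takes (a generic pairwise MRF energy, not necessarily the authors' exact formulation), the configuration step can be read as minimizing:

```latex
% Generic pairwise MRF energy minimized by Graph-cut (assumed form):
% f assigns a reference-grid label f_p to each reconstructed patch p.
E(f) = \sum_{p \in \mathcal{P}} D_p(f_p)
     + \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(f_p, f_q)
```

Here D_p would measure how well label f_p agrees with the patch's projections in all observed images, V_pq would penalize neighboring patches whose labels are not adjacent on the reference grid, and λ balances the two terms; the paper's exact terms may differ.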

References (37)

  • M. Salzmann et al., Surface deformation models for nonrigid 3D shape recovery, PAMI, 2007.
  • B. Rosenhahn et al., A system for articulated tracking incorporating a cloth model, Mach. Vision Appl., 2007.
  • A.O. Balan, M.J. Black, The Naked Truth: Estimating Body Shape Under Clothing, ECCV, ...
  • D. Baraff, A.P. Witkin, Large Steps in Cloth Simulation, SIGGRAPH, ...
  • R. Bridson, R. Fedkiw, J. Anderson, Robust Treatment of Collisions, Contact and Friction for Cloth Animation, SIGGRAPH, ...
  • K.S. Bhat, C.D. Twigg, J.K. Hodgins, P.K. Khosla, Z. Popovic, S.M. Seitz, Estimating Cloth Simulation Parameters from ...
  • N. Jojic, T.S. Huang, Estimating cloth draping parameters from range data, in: International Workshop on ...
  • M. Salzmann, R. Urtasun, P. Fua, Local Deformation Models for Monocular 3D Shape Recovery, CVPR, ...
  • F. Cordier et al., A data-driven approach for real-time clothes simulation, Comput. Graph. Forum, 2005.
  • R. White, K. Crane, D. Forsyth, Capturing and Animating Occluded Cloth, SIGGRAPH, ...
  • R.W. Sumner, M. Zwicker, C. Gotsman, J. Popovic, Mesh-Based Inverse Kinematics, SIGGRAPH, ...
  • Z. Zhang, A flexible new technique for camera calibration, PAMI, 2000.
  • V. Kolmogorov, R. Zabih, Computing Visual Correspondence with Occlusions via Graph Cuts, ICCV, ...
  • C. Tomasi, T. Kanade, Detection and Tracking of Point Features, Carnegie Mellon University Technical Report, ...
  • Y. Boykov et al., Fast approximate energy minimization via graph cuts, PAMI, 2001.
  • Y. Boykov, O. Veksler, R. Zabih, Markov Random Fields with Efficient Approximations, CVPR, ...
  • V. Kolmogorov et al., What energy functions can be minimized via graph cuts?, PAMI, 2004.
  • P.L. Hammer et al., Roof duality, complementation and persistency in quadratic 0–1 optimization, Math. Program., 1984.

This paper has been recommended for acceptance by Siome Klein Goldenstein.
