Elsevier

Computers & Graphics

Volume 38, February 2014, Pages 10-17
Special Section on Computer Graphics in Brazil: A selection of papers from SIBGRAPI 2012
Representing mesh-based character animations

https://doi.org/10.1016/j.cag.2013.07.007

Highlights

  • Our approach represents mesh animation preserving its details.

  • We decompose a mesh animation into coarse and fine components.

  • Our technique represents and manipulates cloth simulation data.

  • It can be used for real-time rendering and editing.

Abstract

We propose a new approach to represent and manipulate a mesh-based character animation while preserving its time-varying details. Our method first decomposes the input mesh animation into coarse and fine deformation components. A model for the coarse deformations is constructed from an underlying kinematic skeleton structure and blending skinning weights. Thereafter, a non-linear probabilistic model is used to encode the fine time-varying details of the input animation. The user can manipulate the corresponding skeleton-based component of the input in any standard animation package, and the final result is generated with its important time-varying details included. By converting an input sample animation into our new hybrid representation, we maintain the flexibility of mesh-based methods during animation creation while allowing for practical manipulations using the standard skeleton-based paradigm. We demonstrate the performance of our method by converting and manipulating several mesh animations generated by different performance capture approaches and apply it to represent and manipulate cloth simulation data.

Introduction

Recently, a variety of mesh-based approaches have been developed to enable the generation of computer animations without relying on the classical skeleton-based paradigm [1], [2]. The advantage of a deformable model representation is also demonstrated by new performance capture approaches [3], [4], where both motion and surface deformations can be captured from input video streams for arbitrary subjects. This shows the great flexibility of a mesh-based representation over the classical one during animation creation.

Although it bypasses many drawbacks of the conventional animation pipeline, a mesh-based representation for character animation is still complex to edit or manipulate. A few solutions have been presented in the literature [5], [6], [7], [8], [9], but in general it is still hard to integrate these methods into the conventional pipeline. Other approaches try to convert or represent mesh animations using a skeleton-based representation to simplify rendering [10] or editing tasks [11], [3]. However, these editing methods are not able to preserve fine time-varying details during the manipulation process, such as the waving of the clothes of a performing subject.

For editing mesh-based character animations, an underlying representation (i.e. a skeleton) is desirable since it simplifies the overall process. At the same time, the time-varying details should be preserved during manipulation. These two constraints guide the design of our new hybrid representation for mesh-based character animation. Our method decomposes the input mesh animation into coarse and fine deformation components. A model for the coarse deformation is constructed automatically using the conventional skeleton-based paradigm (i.e. kinematic skeleton, joint parameters and blending skinning weights). Thereafter, a model to encode the time-varying details is built by learning the fine deformations of the input over time using a pair of linked Gaussian process latent variable models (GPLVM [12]). Our probabilistic non-linear formulation allows us to represent the time-varying details as a function of the underlying skeletal motion and to generalize to new configurations, so that we can reconstruct details for edited poses that were not used during training. By combining both models, we simplify the editing process: animators can work directly with the underlying skeleton, and the corresponding time-varying details are reconstructed in the final edited animation.
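The coarse/fine split described above can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' implementation: it assumes the coarse skinned reconstruction has already been computed, and the function name `decompose_animation` is our own.

```python
import numpy as np

def decompose_animation(anim_verts, skinned_verts):
    """Split an input mesh animation into a coarse skinned component and
    per-frame fine residuals (simplified sketch; the paper's pipeline
    first fits the skeleton and skinning weights automatically).

    anim_verts:    (n_frames, n_verts, 3) captured vertex positions
    skinned_verts: (n_frames, n_verts, 3) positions reproduced by the
                   fitted skeleton + blend skinning
    """
    # The fine, time-varying details are the per-vertex residuals that
    # the skinned model cannot explain; a learned model encodes them.
    residuals = anim_verts - skinned_verts
    return skinned_verts, residuals
```

By construction, adding the residuals back onto the skinned component reproduces the input animation exactly, which is what makes the representation lossless before any editing is applied.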

We demonstrate the performance of our approach by performing a variety of edits to mesh animations generated from different performance capture methods. Additionally, we extend the original approach [13] to represent and manipulate cloth simulation data. As a result, our technique is also able to convert cloth animation into a new hybrid representation that is more flexible for editing purposes and can be easily integrated into the conventional animation pipeline (Section 7).

The paper is structured as follows: Section 2 reviews the most relevant related work and Section 3 briefly describes our overall approach. Thereafter, Section 4 details the method to convert a mesh-based character animation into the skeleton-based format and Section 5 describes how the time-varying details are learnt using a non-linear probabilistic technique. Experiments with mesh-based character animations are shown in Section 6, applications of the extended technique to cloth simulation are presented in Section 7, and the paper concludes with a discussion of the approach in Section 8.

Section snippets

Related work

Creating animations for human subjects is a time-consuming and expensive task. In the traditional framework, the character animation is represented by a surface mesh and an underlying skeleton. The surface geometry can be hand-crafted or scanned from a real subject and the underlying skeleton is manually created, inferred from marker trajectories [14] or inferred from the input geometry [15], [16]. The skeleton model is animated by assigning motion parameters to the joints and the geometry and

Overview

An overview of our approach is shown in Fig. 2. The input to our method is an animated mesh sequence comprising N_FR frames. The mesh-based character animation, or the cloth simulation data, MCA = [M, p_t], is represented by a sequence of triangle mesh models M = (V, T), with V the set of vertices and T the triangulation, and position data p_t(v_i) = (x_i, y_i, z_i)_t for each vertex v_i ∈ V at all time steps t.
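The MCA = [M, p_t] representation could be held in a simple container like the following. This is a hypothetical illustration, assuming a fixed triangulation shared across frames; the `MeshAnimation` class and its field names are our own, not the paper's code.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class MeshAnimation:
    """MCA = [M, p_t]: a static triangulation T plus per-frame vertex
    positions p_t(v_i). Illustrative container, not the authors' data
    structure."""
    triangles: np.ndarray  # (n_tris, 3) vertex indices, fixed over time
    positions: np.ndarray  # (n_frames, n_verts, 3), p_t(v_i) = (x, y, z)_t

    @property
    def n_frames(self) -> int:
        # N_FR in the paper's notation
        return self.positions.shape[0]

    @property
    def n_verts(self) -> int:
        # |V| in the paper's notation
        return self.positions.shape[1]
```

Keeping connectivity separate from the per-frame positions reflects the assumption, also made in the paper, that only vertex positions change over time.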

Our framework is inspired by Botsch and Kobbelt [43], where a new representation for mesh editing is proposed using a multiresolution

Skeleton-based representation

Given an input mesh-based character animation MCA, a skinned model MCA_C is created to reproduce the coarse deformation component of the input animation. This is done by automatically fitting a kinematic skeleton to the input mesh model (i.e. the triangle mesh at the first frame of the animation) and by calculating the joint parameters θ and blending skinning weights such that MCA_C approximately reproduces MCA.
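The coarse component relies on standard linear blend skinning, where each deformed vertex is a weighted combination of its positions under the joint transforms, v_i' = Σ_j w_ij (T_j v_i). Below is a generic NumPy sketch of that blending step only; the skeleton fitting and weight estimation from the paper are not shown, and the function name is our own.

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, joint_transforms):
    """Evaluate the coarse skinned pose via linear blend skinning:
    v_i' = sum_j w_ij * (T_j @ v_i), with T_j a 4x4 joint transform.
    (Generic LBS sketch; the paper fits the skeleton, joint parameters
    theta, and blending weights automatically.)

    rest_verts:       (n_verts, 3) rest-pose positions
    weights:          (n_verts, n_joints) skinning weights, rows sum to 1
    joint_transforms: (n_joints, 4, 4) current joint transforms
    """
    n_verts = rest_verts.shape[0]
    # Homogeneous coordinates so the 4x4 transforms can translate.
    homo = np.concatenate([rest_verts, np.ones((n_verts, 1))], axis=1)
    # Per-joint transformed positions: (n_joints, n_verts, 4)
    per_joint = np.einsum('jab,nb->jna', joint_transforms, homo)
    # Blend by the skinning weights: (n_verts, 4)
    blended = np.einsum('nj,jna->na', weights, per_joint)
    return blended[:, :3]
```

With identity joint transforms, the result is simply the rest pose; translating one joint drags along exactly the vertices weighted to it, which is the behavior the fitted weights must approximate for the input animation.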

Our goal is to deal with human-like characters. Therefore, we include prior knowledge in

Learning time-varying surface details

We use a non-linear probabilistic technique to efficiently learn the time-varying surface details of the input, which are inherently non-linear, from a small number of examples. This is achieved by learning the difference between the input mesh animation and its corresponding skinned model representation. This algorithm design is important because it makes our representation more stable (i.e. by using the coarse skinned animation) and it enables a more detailed and accurate reproduction of the

Experiments with mesh-based character animations

Our approach has been tested on several mesh-based animation sequences generated from performance capture methods that are publicly available [46], [4]. The animations contain walking, marching and fighting sequences. After decimating the original sequences, the input meshes in our system have a resolution of around N_VERT = 1000–5000 vertices and the animation sequences range from N_FR = 70 to 400 frames long. We evaluate the performance of different algorithmic alternatives with two experiments.

In

Editing cloth animation

In order to verify the performance of our system with other types of mesh animations, we extended the technique presented in [13] to represent and manipulate the mesh models resulting from cloth simulation. We started by collecting a training set of 6 atomic motion sequences, using a Vicon motion capture system, consisting of walking and running at three different speeds. For each sequence, we manually add a 15-frame stop to the character motion in order to capture the dynamics of the

Discussion

The running time of our algorithm is dominated by the training phase of the GPLVM-based technique (around 20 min for 100 frames with N_VERT = 1000). This step is done only once at the beginning for each sequence and, thereafter, the rendering or editing operations run in real-time. Our timings were obtained with an Intel Core Duo laptop at 2.4 GHz.

Despite our method's ability to reproduce and manipulate the input animation, there are a few limitations to be considered. Our current framework is

References (47)

  • M. Botsch et al. On linear variational surface deformation methods. IEEE Transactions on Visualization and Computer Graphics (2008).
  • Casas D, Tejera M, Guillemaut JY, Hilton A. 4D parametric motion graphs for interactive animation. In: Proceedings of...
  • E. de Aguiar et al. Automatic conversion of mesh animations into skeleton-based animations. Computer Graphics Forum (2008).
  • D. Vlasic et al. Articulated mesh animation from multi-view silhouettes. ACM Transactions on Graphics (2008).
  • Kircher S, Garland M. Editing arbitrarily deforming surface animations. In: SIGGRAPH '06, 2006. p....
  • W. Xu et al. Gradient domain editing of deforming mesh sequences. ACM Transactions on Graphics (2007).
  • Sumner RW, Schmid J, Pauly M. Embedded deformation for shape manipulation. In: SIGGRAPH '07. ACM; 2007. p....
  • S. Kircher et al. Free-form motion processing. ACM Transactions on Graphics (2008).
  • I. Baran et al. Semantic deformation transfer. ACM Transactions on Graphics (2009).
  • D. James et al. Skinning mesh animations. ACM Transactions on Graphics (2005).
  • Schaefer S, Yuksel C. Example-based skeleton extraction. In: SGP '07, 2007. p....
  • N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research (2005).
  • de Aguiar E, Ukita N. Representing and manipulating mesh-based character animations. In: Proceedings of the 25th...
  • Kirk AG, O'Brien JF, Forsyth DA. Skeletal parameter estimation from optical motion capture data. In: Proceedings of...
  • I. Baran et al. Automatic rigging and animation of 3D characters. ACM Transactions on Graphics (2007).
  • G. Bharaj et al. Automatically rigging multi-component characters. Computer Graphics Forum (Proceedings of Eurographics 2012).
  • L. Kavan et al. Geometric skinning with approximate dual quaternion blending. ACM Transactions on Graphics (2008).
  • Sumner RW, Popović J. Deformation transfer for triangle meshes. In: Proceedings of SIGGRAPH '04. ACM; 2004. p....
  • Ben-Chen M, Weber O, Gotsman C. Spatial deformation transfer. In: Proceedings of SCA. ACM; 2009. p....
  • J. Starck et al. Surface capture for performance based animation. IEEE Computer Graphics and Applications (2007).
  • Huang P, Hilton A, Starck J. Human motion synthesis from 3D video. In: Proceedings of CVPR '09. IEEE;...
  • K. Mamou et al. A skinning approach for dynamic 3D mesh compression. Computer Animation and Virtual Worlds (2006).
  • M. Alexa et al. Representing animations by principal components. Computer Graphics Forum (2000).