Special Section on Computer Graphics in Brazil: A selection of papers from SIBGRAPI 2012

Representing mesh-based character animations
Introduction
Recently, a variety of mesh-based approaches have been developed to generate computer animations without relying on the classical skeleton-based paradigm [1], [2]. The advantage of a deformable model representation is also demonstrated by recent performance capture approaches [3], [4], where both motion and surface deformations are captured from input video streams for arbitrary subjects. This illustrates the greater flexibility of a mesh-based representation over the classical one during animation creation.
Although it bypasses many drawbacks of the conventional animation pipeline, a mesh-based representation for character animation remains complex to edit and manipulate. Few solutions have been presented in the literature [5], [6], [7], [8], [9], and in general these methods are still hard to integrate into the conventional pipeline. Other approaches convert or represent mesh animations using a skeleton-based representation in order to simplify rendering [10] or editing [11], [3]. However, these editing methods are unable to preserve fine time-varying details during manipulation, such as the waving of the clothes of a performing subject.
For editing mesh-based character animations, an underlying representation (i.e. a skeleton) is desirable, since it simplifies the overall process. At the same time, the time-varying details should be preserved during manipulation. These two constraints guide the design of our new hybrid representation for mesh-based character animation. Our method decomposes the input mesh animation into coarse and fine deformation components. A model for the coarse deformation is constructed automatically using the conventional skeleton-based paradigm (i.e. kinematic skeleton, joint parameters and blending skinning weights). Thereafter, a model encoding the time-varying details is built by learning the fine deformations of the input over time with a pair of linked Gaussian process latent variable models (GPLVMs [12]). Our probabilistic non-linear formulation lets us represent the time-varying details as a function of the underlying skeletal motion, and it generalizes to different configurations, so that we can reconstruct details for edited poses that were not used during training. By combining both models, we simplify the editing process: animators work directly with the underlying skeleton, and the corresponding time-varying details are reconstructed in the final edited animation.
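The coarse/fine split described above can be sketched in a few lines; the array shapes and function names below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def decompose(mesh_anim, skinned_anim):
    """Split the input animation into coarse + fine components.

    mesh_anim, skinned_anim: (NFR, NV, 3) vertex positions of the
    input animation and of its skinned (coarse) approximation.
    Returns the per-frame fine-detail displacements (NFR, NV, 3),
    i.e. the time-varying details that the probabilistic model
    later learns as a function of the skeletal pose.
    """
    return np.asarray(mesh_anim) - np.asarray(skinned_anim)

def recompose(skinned_edit, predicted_details):
    """Final edited animation: the coarse skinned result of the
    edited skeleton plus the details predicted for the new poses."""
    return np.asarray(skinned_edit) + np.asarray(predicted_details)
```

By construction, re-adding the stored details to the unedited skinned animation recovers the input exactly; the interesting case is recombining edited skinned poses with details predicted by the learned model.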
We demonstrate the performance of our approach by applying a variety of edits to mesh animations generated by different performance capture methods. Additionally, we extend the original approach [13] to represent and manipulate cloth simulation data. As a result, our technique can also convert a cloth animation into a new hybrid representation that is more flexible for editing purposes and can be easily integrated into the conventional animation pipeline (Section 7).
The paper is structured as follows: Section 2 reviews the most relevant related work and Section 3 briefly describes our overall approach. Thereafter, Section 4 details the method that converts a mesh-based character animation into the skeleton-based format, and Section 5 describes how the time-varying details are learnt using a non-linear probabilistic technique. Experiments with mesh-based character animations are shown in Section 6, applications of the extended technique to cloth simulation are presented in Section 7, and the paper concludes with a discussion of the approach in Section 8.
Related work
Creating animations for human subjects is a time-consuming and expensive task. In the traditional framework, a character animation is represented by a surface mesh and an underlying skeleton. The surface geometry can be hand-crafted or scanned from a real subject, and the underlying skeleton is created manually, inferred from marker trajectories [14], or inferred from the input geometry [15], [16]. The skeleton model is animated by assigning motion parameters to the joints, and the geometry and …
Overview
An overview of our approach is shown in Fig. 2. The input to our method is an animated mesh sequence comprising NFR frames. The mesh-based character animation MCA (or the cloth simulation data) is represented by a sequence of triangle mesh models and position data for each vertex at all time steps t.
Our framework is inspired by Botsch and Kobbelt [43], where a new representation for mesh editing is proposed using a multiresolution …
Skeleton-based representation
Given an input mesh-based character animation MCA, a skinned model MCAC is created to reproduce the coarse deformation component of the input animation. This is done by automatically fitting a kinematic skeleton to the input mesh model (i.e. the triangle mesh at the first frame of the animation) and by computing the joint parameters and blending skinning weights such that MCAC approximately reproduces MCA.
Our goal is to deal with human-like characters. Therefore, we include prior knowledge in …
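Evaluating the skinned model MCAC for one frame amounts to blending per-joint transformations by the skinning weights. The sketch below uses standard linear blend skinning as an illustrative stand-in; the paper's exact skinning formulation, and all shapes and names here, are assumptions:

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_transforms):
    """Deform rest-pose vertices by a blend of joint transforms.

    rest_verts:      (NV, 3) vertices at the rest pose (first frame).
    weights:         (NV, NJ) blending skinning weights, rows sum to 1.
    bone_transforms: (NJ, 4, 4) homogeneous per-joint transforms
                     for one frame of the animation.
    Returns the deformed vertex positions (NV, 3).
    """
    nv = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((nv, 1))])          # (NV, 4)
    # Each joint's transform applied to every vertex: (NJ, NV, 4)
    per_joint = np.einsum('jab,vb->jva', bone_transforms, homo)
    # Blend the per-joint results with the skinning weights: (NV, 4)
    blended = np.einsum('vj,jva->va', weights, per_joint)
    return blended[:, :3]
```

With identity transforms the rest pose is reproduced exactly, which is a convenient sanity check when fitting the joint parameters frame by frame.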
Learning time-varying surface details
We use a non-linear probabilistic technique to efficiently learn the time-varying surface details of the input, which are inherently non-linear, from a small number of examples. This is achieved by learning the difference between the input mesh animation and its corresponding skinned model representation. This design is important because it makes our representation more stable (i.e. by relying on the coarse skinned animation) and enables a more detailed and accurate reproduction of the …
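The idea of predicting detail displacements from skeletal pose can be illustrated with a plain RBF-kernel Gaussian process regression; this is a simplified stand-in for the paper's pair of linked GPLVMs, and all shapes, names, and hyperparameters are illustrative assumptions:

```python
import numpy as np

def gp_fit_predict(X_train, Y_train, X_test, ell=1.0, noise=1e-4):
    """Predict fine-detail coefficients for new (edited) poses.

    X_train: (N, D) pose features of the training frames.
    Y_train: (N, K) detail coefficients per frame (e.g. a low-
             dimensional encoding of the per-vertex residuals).
    X_test:  (M, D) pose features of the edited frames.
    Returns the GP posterior mean (M, K).
    """
    def rbf(A, B):
        # Squared-exponential kernel with length scale `ell`.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell ** 2)

    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    return Ks @ np.linalg.solve(K, Y_train)
```

Querying the model at a training pose returns (approximately) the stored details, while nearby edited poses receive smoothly interpolated details, which is the behaviour the editing stage relies on.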
Experiments with mesh-based character animations
Our approach has been tested on several mesh-based animation sequences generated from performance capture methods that are publicly available [46], [4]. The animations contain walking, marching and fighting sequences. After decimating the original sequences, the input meshes in our system have a resolution of around vertices and the animation sequences range from to 400 frames long. We evaluate the performance of different algorithmic alternatives with two experiments.
In …
Editing cloth animation
To verify the performance of our system on other types of mesh animations, we extended the technique presented in [13] to represent and manipulate the mesh models resulting from cloth simulation. We started by collecting a training set of 6 atomic motion sequences, captured with a Vicon motion capture system, consisting of walking and running at three different speeds. For each sequence, we manually add a 15-frame stop to the character motion in order to capture the dynamics of the …
Discussion
The running time of our algorithm is dominated by the training phase of the GPLVM-based technique (around 20 min for 100 frames). This step is performed only once per sequence; thereafter, rendering and editing operations run in real time. Our timings were obtained on an Intel Core Duo laptop at 2.4 GHz.
Despite our method's ability to reproduce and manipulate the input animation, there are a few limitations to be considered. Our current framework is …
References (47)
- et al. On linear variational surface deformation methods. IEEE Transactions on Visualization and Computer Graphics (2008)
- Casas D, Tejera M, Guillemaut JY, Hilton A. 4D parametric motion graphs for interactive animation. In: Proceedings of…
- et al. Automatic conversion of mesh animations into skeleton-based animations. Computer Graphics Forum (2008)
- et al. Articulated mesh animation from multi-view silhouettes. ACM Transactions on Graphics (2008)
- Kircher S, Garland M. Editing arbitrarily deforming surface animations. In: SIGGRAPH '06, 2006. p. …
- et al. Gradient domain editing of deforming mesh sequences. ACM Transactions on Graphics (2007)
- Sumner RW, Schmid J, Pauly M. Embedded deformation for shape manipulation. In: SIGGRAPH '07. ACM; 2007. p. …
- et al. Free-form motion processing. ACM Transactions on Graphics (2008)
- et al. Semantic deformation transfer. ACM Transactions on Graphics (2009)
- et al. Skinning mesh animations. ACM Transactions on Graphics (2005)