Abstract
Clothing deformation simulation for virtual characters is widely used in digital filmmaking, 3D gaming, animation, and metaverse construction to generate realistic garment deformations and animations from human body shapes and poses. Compared with physically based methods, data-driven methods are easier to control, faster, and allow data reuse, and they have become increasingly mainstream. However, the clothing deformation datasets they depend on are time-consuming and expensive to produce, making them unsuitable for the rapid iteration that dressing animation requires. We present an unsupervised clothing deformation prediction model that handles a variety of body shapes and poses. By converting physical constraints into optimization objectives, our method trains the network without any clothing deformation dataset. A variational autoencoder with an encoder-decoder structure maps body parameters (pose and shape) to clothing deformation: the reparameterization module learns a conditional probability distribution in latent space from body features to clothing deformation, and a feature-deformation transformation space then converts the encoded vectors of different body features into the corresponding sets of deformed clothing vertices. Experimental results show that our model can be trained quickly without a clothing deformation dataset, even on a CPU, and can rapidly synthesize realistic clothing animation from given body parameters, excelling in prediction speed and low penetration loss.
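As a rough illustration of the approach described in the abstract, the following PyTorch sketch shows a conditional encoder-decoder VAE that maps pose and shape parameters to per-vertex garment offsets via the reparameterization trick, with physics-style objectives (edge-length preservation and a body-penetration penalty) standing in for a clothing deformation dataset. The network sizes, loss terms, and the nearest-point penetration proxy are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch, not the paper's implementation: a conditional VAE that maps
# body pose/shape parameters to per-vertex garment offsets and is trained
# without a deformation dataset by turning physical constraints into losses.
import torch
import torch.nn as nn


class GarmentVAE(nn.Module):
    def __init__(self, pose_dim=72, shape_dim=10, latent_dim=32, n_verts=4000):
        super().__init__()
        cond_dim = pose_dim + shape_dim
        self.encoder = nn.Sequential(
            nn.Linear(cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        # Decoder maps the latent code plus the body condition to offsets.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_verts * 3),
        )
        self.n_verts = n_verts

    def forward(self, pose, shape):
        cond = torch.cat([pose, shape], dim=-1)
        h = self.encoder(cond)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        offsets = self.decoder(torch.cat([z, cond], dim=-1))
        return offsets.view(-1, self.n_verts, 3), mu, logvar


def physics_losses(verts, rest_verts, edges, body_verts, eps=0.004):
    """Unsupervised objectives standing in for a simulation dataset.

    verts:      (B, V, 3) deformed garment vertices (template + offsets)
    rest_verts: (V, 3)    garment template in the rest pose
    edges:      (E, 2)    long tensor of garment mesh edge indices
    body_verts: (B, M, 3) posed body surface points
    """
    # Stretch: deformed edge lengths should stay close to rest lengths.
    rest_len = (rest_verts[edges[:, 0]] - rest_verts[edges[:, 1]]).norm(dim=-1)
    cur_len = (verts[:, edges[:, 0]] - verts[:, edges[:, 1]]).norm(dim=-1)
    stretch = ((cur_len - rest_len) ** 2).mean()
    # Penetration: keep garment vertices at least `eps` away from the body,
    # approximated here by the distance to the nearest body point (a full
    # formulation would use signed distances or surface normals).
    d = torch.cdist(verts, body_verts).min(dim=-1).values  # (B, V)
    penetration = torch.relu(eps - d).mean()
    return stretch, penetration


if __name__ == "__main__":
    model = GarmentVAE()
    pose, shape = torch.randn(2, 72), torch.randn(2, 10)
    offsets, mu, logvar = model(pose, shape)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    print(offsets.shape, kl.item())
```

The total training objective would combine the physics terms with the KL regularizer; because no ground-truth deformations are needed, such a model can be optimized on modest hardware, consistent with the CPU-training claim in the abstract.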
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zhuo, X., Shi, M., Zhu, D., Han, G., Li, Z. (2025). Unsupervised Real-Time Garment Deformation Prediction Driven by Human Body Pose and Shape. In: Magnenat-Thalmann, N., Kim, J., Sheng, B., Deng, Z., Thalmann, D., Li, P. (eds) Advances in Computer Graphics. CGI 2024. Lecture Notes in Computer Science, vol 15339. Springer, Cham. https://doi.org/10.1007/978-3-031-82021-2_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-82020-5
Online ISBN: 978-3-031-82021-2
eBook Packages: Computer Science, Computer Science (R0)