
Customizing blendshapes to capture facial details

Published in The Journal of Supercomputing

Abstract

The blendshape technique is an effective tool in computer facial animation. In the visual effects industry, every character requires its own unique blendshapes to cover numerous facial expressions. Despite outstanding advances in this area, existing techniques still rely on a professional artist's intuition and complex hardware. In this paper, we propose a framework for customizing blendshapes to capture facial details. The proposed method consists of two stages: blendshape generation and blendshape augmentation. In the first stage, localized blendshapes are automatically generated from real-time captured faces with two methods: linear regression and an autoencoder [15]. In our experiments, face reconstruction with the former outperforms that with the latter. However, the generated blendshapes slightly miss features of the source performance, especially mouth movements. To overcome this, in the second stage, we extend [15] by adding blendshapes incrementally to minimize erroneous expression transfer.
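The two-stage pipeline described in the abstract can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the array shapes, the least-squares fit used for blendshape generation, and the residual-based rule for adding a supplemental blendshape are assumptions for illustration only.

```python
import numpy as np

def fit_blendshapes(frames, weights, neutral):
    """Stage 1 (sketch): recover character-specific blendshape deltas by
    linear regression.
        frames  : (N, 3V) captured face meshes, flattened per frame
        weights : (N, K)  tracked blendshape activation weights
        neutral : (3V,)   rest pose
    Solves frames - neutral ~= weights @ B for B in the least-squares sense."""
    deltas = frames - neutral                             # (N, 3V)
    B, *_ = np.linalg.lstsq(weights, deltas, rcond=None)  # (K, 3V)
    return B

def add_supplemental_blendshape(frames, weights, neutral, B):
    """Stage 2 (sketch): find the worst-reconstructed frame and add its
    residual as one supplemental blendshape, so that frame is afterwards
    reproduced exactly."""
    residual = frames - (neutral + weights @ B)           # (N, 3V)
    worst = int(np.argmax(np.linalg.norm(residual, axis=1)))
    B_aug = np.vstack([B, residual[worst]])               # (K+1, 3V)
    # the new shape is fully active only on the worst frame
    w_new = np.zeros((weights.shape[0], 1))
    w_new[worst] = 1.0
    return B_aug, np.hstack([weights, w_new])
```

Repeating the second step adds one blendshape per iteration, each time driving the largest remaining reconstruction error to zero, which mirrors the incremental-augmentation idea in the abstract.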




References

  1. Anjyo K, Todo H, Lewis JP (2012) A practical approach to direct manipulation blendshapes. J Graph Tools 16(3):160–176


  2. Berson E, Soladie C, Barrielle V, Stoiber N (2019) A robust interactive facial animation editing system. In: Motion, Interaction and Games, pp 1–10

  3. Bouaziz S, Wang Y, Pauly M (2013) Online modeling for realtime facial animation. ACM Trans Graph (ToG) 32(4):1–10


  4. Cao C, Weng Y, Lin S, Zhou K (2013) 3d shape regression for real-time facial animation. ACM Trans Graphics (TOG) 32(4):1–10


  5. Casas D, Feng A, Alexander O, Fyffe G, Debevec P, Ichikari R, Li H, Olszewski K, Suma E, Shapiro A (2016) Rapid photorealistic blendshape modeling from rgb-d sensors. In: Proceedings of the 29th International Conference on Computer Animation and Social Agents, pp 121–129

  6. Cetinaslan O, Orvalho V (2020) Sketching manipulators for localized blendshape editing. Graph Mod 108:101059


  7. Cetinaslan O, Orvalho V (2020) Stabilized blendshape editing using localized Jacobian transpose descent. Graph Mod 112:101091


  8. Chaudhuri B, Vesdapunt N, Shapiro L, Wang B (2020) Personalized face modeling for improved face reconstruction and motion retargeting. In: European Conference on Computer Vision, pp 142–160. Springer

  9. Chen K, Zheng J, Cai J, Zhang J (2020) Modeling caricature expressions by 3d blendshape and dynamic texture. arXiv preprint arXiv:2008.05714

  10. Chheang V, Jeong S, Lee G, Ha JS, Yoo KH (2020) Natural embedding of live actors and entities into 360 virtual reality scenes. J Supercomput 76(7):5655–5677


  11. Cong M, Fedkiw R (2019) Muscle-based facial retargeting with anatomical constraints. In: ACM SIGGRAPH 2019 Talks, pp 1–2

  12. Costigan T, Gerdelan A, Carrigan E, McDonnell R (2016) Improving blendshape performance for crowds with gpu and gpgpu techniques. In: Proceedings of the 9th International Conference on Motion in Games, pp 73–78

  13. Costigan T, Prasad M, McDonnell R (2014) Facial retargeting using neural networks. In: Proceedings of the Seventh International Conference on Motion in Games, pp 31–38

  14. Ding C, Tao D (2015) Robust face recognition via multimodal deep face representation. IEEE Trans Multimedia 17(11):2049–2058


  15. Han JH, Kim JI, Kim H, Suh JW (2021) Generate individually optimized blendshapes. In: 2021 IEEE International Conference on Big Data and Smart Computing (BigComp), pp 114–120. IEEE

  16. Hui Q (2019) Motion video tracking technology in sports training based on mean-shift algorithm. J Supercomput 75(9):6021–6037


  17. Joshi P, Tien WC, Desbrun M, Pighin F (2006) Learning controls for blend shape based realistic facial animation. In: ACM Siggraph 2006 Courses, pp 17–es

  18. Kan M, Shan S, Chang H, Chen X (2014) Stacked progressive auto-encoders (spae) for face recognition across poses. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1883–1890

  19. Kang J, Lee S (2020) A greedy pursuit approach for fitting 3d facial expression models. IEEE Access 8:192682–192692


  20. Kim PH, Seol Y, Song J, Noh J (2011) Facial retargeting by adding supplemental blendshapes. In: PG (Short Papers)

  21. Kingma DP, Welling M (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114

  22. Kommineni J, Mandala S, Sunar MS, Chakravarthy PM (2021) Accurate computing of facial expression recognition using a hybrid feature extraction technique. J Supercomput 77(5):5019–5044


  23. Lewis JP, Anjyo K, Rhee T, Zhang M, Pighin FH, Deng Z (2014) Practice and theory of blendshape facial models. Eurographics (State of the Art Reports) 1(8):2


  24. Lewis JP, Anjyo KI (2010) Direct manipulation blendshapes. IEEE Comput Graph Appl 30(4):42–50

  25. Li J, Kuang Z, Zhao Y, He M, Bladin K, Li H (2020) Dynamic facial asset and rig generation from a single scan. ACM Trans. Graph. 39(6):215–1


  26. Li Q, Deng Z (2008) Orthogonal-blendshape-based editing system for facial motion capture data. IEEE Comput Graph Appl 28(6):76–82


  27. Lombardi S, Saragih J, Simon T, Sheikh Y (2018) Deep appearance models for face rendering. ACM Transactions on Graphics (TOG) 37(4):1–13


  28. Orvalho V, Bastos P, Parke FI, Oliveira B, Alvarez X (2012) A facial rigging survey. Eurographics (State of the Art Reports) pp 183–204

  29. Parent R (2012) Computer animation: algorithms and techniques. Newnes

  30. Parke FI (1972) Computer generated animation of faces. In: Proceedings of the ACM annual conference-Volume 1, pp 451–457

  31. Parke FI (1974) A parametric model for human faces. The University of Utah

  32. Parke FI, Waters K (2008) Computer facial animation. CRC press

  33. Pighin F, Hecker J, Lischinski D, Szeliski R, Salesin DH (2006) Synthesizing realistic facial expressions from photographs. In: ACM SIGGRAPH 2006 Courses, pp 19–es

  34. Pighin F, Lewis JP (2006) Facial motion retargeting. In: ACM SIGGRAPH 2006 Courses, pp 2–es

  35. Rao S, Ortiz-Cayon R, Munaro M, Liaudanskas A, Chande K, Bertel T, Richardt C, JB A, Holzer S, Kar A (2020) Free-viewpoint facial re-enactment from a casual capture. In: SIGGRAPH Asia 2020 Posters, pp 1–2

  36. Ribera RBI, Zell E, Lewis JP, Noh J, Botsch M (2017) Facial retargeting with automatic range of motion alignment. ACM Trans Graph (TOG) 36(4):1–12


  37. Seo J, Irving G, Lewis JP, Noh J (2011) Compression and direct manipulation of complex blendshape models. ACM Trans Graph (TOG) 30(6):1–10


  38. Seol Y, Lewis JP, Seo J, Choi B, Anjyo K, Noh J (2012) Spacetime expression cloning for blendshapes. ACM Trans Graph (TOG) 31(2):1–12


  39. Seol Y, Ma WC, Lewis J (2016) Creating an actor-specific facial rig from performance capture. In: Proceedings of the 2016 Symposium on Digital Production, pp 13–17

  40. Sumner RW, Popović J (2004) Deformation transfer for triangle meshes. ACM Trans Graph (TOG) 23(3):399–405


  41. Thomas D, Taniguchi RI (2016) Augmented blendshapes for real-time simultaneous 3d head modeling and facial motion capture. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 3299–3308

  42. Toshpulatov M, Lee W, Lee S (2021) Generative adversarial networks and their application to 3d face generation: a survey. Image Vis Comput, p 104119

  43. Wang S, Cheng Z, Deng X, Chang L, Duan F, Lu K (2020) Leveraging 3d blendshape for facial expression recognition using cnn. Sci China Inf Sci 63(120114):1–120114


  44. Zhang J, Chen K, Zheng J (2020) Facial expression retargeting from human to avatar made easy. IEEE Trans Vis Comput Graph


Acknowledgements

This work extends the paper presented at the 2021 IEEE International Conference on Big Data and Smart Computing [15]. It was supported by an Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00872, SaaS Technology for Development of Veterinary Medical Image Interpretation based on AI) and by the Bio-Synergy Research Project (2013M3A9C4078140) of the Ministry of Science, ICT and Future Planning through the National Research Foundation.

Author information


Corresponding author

Correspondence to Hyungseok Kim.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (mov 17938 KB)

Supplementary file 2 (mp4 10751 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Han, J.H., Kim, J.I., Suh, J.W. et al. Customizing blendshapes to capture facial details. J Supercomput 79, 6347–6372 (2023). https://doi.org/10.1007/s11227-022-04885-7
