
Simulation of face/hairstyle swapping in photographs with skin texture synthesis

Published in Multimedia Tools and Applications

Abstract

The modern trend toward diversification and personalization has encouraged people to boldly express their individuality in many respects, and one noticeable piece of evidence is the wide variety of hairstyles observable today. Given the demand for hairstyle customization, approaches and systems ranging from 2D to 3D, and from automatic to manual, have been proposed to digitally facilitate the choice of hairstyles. However, nearly all existing approaches fall short of producing realistic hairstyle synthesis results. Assuming the inputs are 2D photos, the vividness of a re-synthesized hairstyle relies heavily on removing the original one, because the co-existence of the original and the newly synthesized hairstyles may lead to serious perceptual artifacts. We resolve this issue by extending the active shape model to extract the entire facial contour more precisely, which can then be used to trim away the hair from the input photo. After hair removal, the facial skin of the revealed forehead needs to be recovered. Since skin texture is non-stationary and little information remains, traditional texture synthesis and image inpainting approaches are ill-suited to this problem. Our method produces a more plausible facial skin patch by first interpolating a base skin patch and then applying a non-stationary texture synthesis. We also aim to reduce user assistance during this process as much as possible, and have devised a new and friendly mechanism for adjusting the facial contour and hairstyle that makes it extremely easy to manipulate and fit a desired hairstyle onto a face. In addition, our system can extract the hairstyle from a given photo, which makes the work more complete, and by extracting the face from the input photo it also allows users to exchange faces. At the end of this paper, re-synthesized results are shown, comparisons are made, and user studies are conducted to further demonstrate the usefulness of our system.
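
The abstract describes a pipeline: extend the active shape model to extract the full facial contour, use that contour to trim away the original hair, and then recover the exposed forehead skin by interpolating a base skin patch and layering non-stationary texture on top of it. The Python sketch below illustrates only the skin-recovery stage under simplified assumptions; it is not the authors' implementation, and the row-wise interpolation, the patch-based detail transfer, and every function name here are hypothetical choices made for illustration.

import numpy as np

def interpolate_base_patch(face, mask):
    # Fill the masked (formerly hair-covered) region with a smooth "base" skin
    # layer by linearly interpolating each row between its known skin pixels.
    # This is a simplification of the base-patch interpolation described above.
    out = face.astype(np.float64)
    height, width, channels = out.shape
    for y in range(height):
        known = np.where(~mask[y])[0]   # columns with valid skin on this row
        if known.size < 2:
            continue                    # not enough samples to interpolate
        for c in range(channels):
            out[y, :, c] = np.interp(np.arange(width), known, out[y, known, c])
    return out

def add_skin_detail(base, mask, skin_sample, patch=8, seed=0):
    # Make the interpolated base look less artificially smooth by adding
    # zero-mean, high-frequency detail copied patch by patch from a known
    # skin sample (a stand-in for the paper's non-stationary texture synthesis).
    rng = np.random.default_rng(seed)
    detail = skin_sample.astype(np.float64) - skin_sample.mean(axis=(0, 1))
    out = base.copy()
    height, width = mask.shape
    for y in range(0, height, patch):
        for x in range(0, width, patch):
            hole = mask[y:y + patch, x:x + patch]
            if not hole.any():
                continue
            sy = rng.integers(0, detail.shape[0] - patch)
            sx = rng.integers(0, detail.shape[1] - patch)
            block = detail[sy:sy + patch, sx:sx + patch][:hole.shape[0], :hole.shape[1]]
            region = out[y:y + patch, x:x + patch]
            region[hole] += block[hole]
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage: `face` is an HxWx3 uint8 photo with the hair already trimmed,
# `mask` is an HxW boolean array marking the region to recover, and `skin_sample`
# is a patch of the person's own skin, a few times larger than `patch` on each side.
# recovered = add_skin_detail(interpolate_base_patch(face, mask), mask, skin_sample)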

Author information

Correspondence to Chuan-Kai Yang.

About this article

Cite this article

Chou, JK., Yang, CK. Simulation of face/hairstyle swapping in photographs with skin texture synthesis. Multimed Tools Appl 63, 729–756 (2013). https://doi.org/10.1007/s11042-011-0891-1
