DCCMF-GAN: double cycle consistently constrained multi-feature discrimination GAN for makeup transfer

Abstract

Makeup transfer aims to transfer the makeup of a face in a reference image onto another face in a source image. Most mainstream makeup transfer methods learn the makeup directly and therefore often treat interference (pose, illumination, shadow and background) as part of the makeup. We propose a novel Double Cycle Consistently constrained Multi-Feature Discrimination Generative Adversarial Network (DCCMF-GAN) for makeup transfer, built on the separation of "makeup" and "content". DCCMF-GAN nests two cycle reconstruction networks, CycleI and CycleII, which are constrained by the cycle consistency losses "cycle consistency1" and "cycle consistency2", respectively. Each cycle network consists of two makeup-transfer generators G and two multi-feature discriminators. G first encodes the source and reference images to separate their "makeup" from their "content"; it then fuses the source content with the reference makeup for makeup application, and merges the reference content with the source makeup for makeup removal. The double cycle consistency constraint, together with adversarial learning against multi-feature discriminators covering identity, global makeup and focused local makeup, ensures pleasing makeup transfer results. Compared with several state-of-the-art makeup transfer methods, the proposed method is insensitive to pose, illumination, shadow, expression, aging and other interference, and achieves high-quality makeup transfer with strong robustness.
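
To make the training signal concrete, below is a minimal PyTorch sketch of one plausible reading of the double cycle consistency constraint: a single generator G encodes "content" and "makeup" separately; one cycle applies the reference makeup and then restores the source's own makeup, while the second removes the reference's makeup and then re-applies it. The module layout, layer sizes and helper names are illustrative assumptions, not the authors' implementation, and the adversarial terms from the identity, global-makeup and local-makeup discriminators are omitted.

```python
# Illustrative sketch only: toy networks standing in for the paper's
# generators; all names and layer sizes here are assumptions.
import torch
import torch.nn as nn

class MakeupTransferGenerator(nn.Module):
    """Generator G: encodes "content" and "makeup" separately, then decodes
    a fused (content, makeup) pair back into a face image."""
    def __init__(self):
        super().__init__()
        # Single-layer toy encoders/decoder; the real networks are deeper.
        self.content_enc = nn.Sequential(nn.Conv2d(3, 64, 7, padding=3), nn.ReLU())
        self.makeup_enc = nn.Sequential(nn.Conv2d(3, 64, 7, padding=3), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(128, 3, 7, padding=3), nn.Tanh())

    def forward(self, content_img, makeup_img):
        c = self.content_enc(content_img)  # identity/structure code
        m = self.makeup_enc(makeup_img)    # makeup style code
        return self.decoder(torch.cat([c, m], dim=1))

def double_cycle_loss(G, x_src, y_ref):
    """Two nested reconstruction cycles: apply-then-remove ("cycle
    consistency1") and remove-then-reapply ("cycle consistency2")."""
    l1 = nn.L1Loss()
    # Cycle I: put the reference makeup on the source, then restore the
    # source's own makeup; the result should match the original source.
    x_applied = G(x_src, y_ref)
    x_back = G(x_applied, x_src)
    cycle1 = l1(x_back, x_src)
    # Cycle II: strip the reference down to the source's makeup, then
    # re-apply its own makeup; the result should match the reference.
    y_removed = G(y_ref, x_src)
    y_back = G(y_removed, y_ref)
    cycle2 = l1(y_back, y_ref)
    return cycle1 + cycle2

# Usage with random tensors standing in for 256x256 RGB face crops:
G = MakeupTransferGenerator()
x, y = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
loss = double_cycle_loss(G, x, y)
loss.backward()
```

In this reading, the two cycles supervise complementary directions (application-first versus removal-first), which discourages the generator from hiding identity information in the makeup code or makeup information in the content code.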

Data Availability

The datasets supporting the findings of this article are those released with the published BeautyGAN article [8] and its supplementary information files.

References

  1. Goodfellow IJ, Pouget-Abadie J, Mirza M et al (2014) Generative adversarial nets. In: 28th Conference on Neural Information Processing Systems (NIPS), vol 27, pp 2672–2680

  2. Jin X, Han R, Ning N et al (2019) Facial makeup transfer combining illumination transfer. IEEE Access 7:80928–80936

  3. Wan Z, Chen H, An J, Jiang W, Yao C, Luo J (2022) Facial attribute transformers for precise and robust makeup transfer. In: 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp 3113–3122

  4. Bastanfard A, Takahashi H, Nakajima M (2004) Toward E-appearance of human face and hair by age, expression and rejuvenation. In: 2004 International Conference on Cyberworlds (CW), pp 306–311

  5. Bastanfard A, Bastanfard O, Takahashi H, Nakajima M (2004) Toward anthropometrics simulation of face rejuvenation and skin cosmetic. Comput Animat Virtual Worlds 15(3–4):347–352

  6. Sun Y, Tang J, Shu X, Sun Z, Tistarelli M (2020) Facial age synthesis with label distribution-guided generative adversarial network. IEEE Trans Inf Forensics Secur 15(2):2679–2691

  7. Liu S, Sun Y, Zhu D, Bao R, Wang W, Shu X, Yan S (2017) Face aging with contextual generative adversarial nets. In: Proceedings of the 25th ACM International Conference on Multimedia (MM), pp 82–90

  8. Li T, Qian R, Dong C et al (2018) BeautyGAN: Instance-level facial makeup transfer with deep generative adversarial network. In: Proceedings of the 26th ACM International Conference on Multimedia (MM), pp 645–653

  9. Hertzmann A (1998) Painterly rendering with curved brush strokes of multiple sizes. In: Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pp 453–460

  10. Hertzmann A (2003) A survey of stroke-based rendering. IEEE Comput Graphics Appl 23(4):70–81

  11. Jacobs C, Salesin D, Oliver N et al (2001) Image analogies. In: Proceedings of SIGGRAPH, pp 327–340

  12. Jamriška O, Sochorová Š, Texler O et al (2019) Stylizing video by example. ACM Trans Graph 38(4):1–11

  13. He K, Sun J, Tang X (2012) Guided image filtering. IEEE Trans Pattern Anal Mach Intell 35(6):1397–1409

  14. Li S, Kang X, Hu J (2013) Image fusion with guided filtering. IEEE Trans Image Process 22(7):2864–2875

  15. Li C, Wand M (2016) Precomputed real-time texture synthesis with markovian generative adversarial networks. In: 14th European Conference on Computer Vision (ECCV), vol 9907, pp 702–716

  16. Jiahao L (2022) Transformer-based neural texture synthesis and style transfer. In: Proceedings of the 4th Asia Pacific Information Technology Conference (APIT), pp 88–95

  17. Gatys LA, Ecker AS, Bethge M (2016) Image style transfer using convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 2414–2423

  18. Gatys L, Ecker A, Bethge M (2016) A neural algorithm of artistic style. J Vis 16(12):326–326

  19. Johnson J, Alahi A, Fei-Fei L (2016) Perceptual losses for real-time style transfer and super-resolution. In: 14th European Conference on Computer Vision (ECCV), vol 9906, pp 694–711

  20. Ulyanov D, Vedaldi A, Lempitsky V (2017) Improved texture networks: maximizing quality and diversity in feed-forward stylization and texture synthesis. In: Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 4105–4113

  21. Huang X, Belongie S (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the 16th IEEE International Conference on Computer Vision (ICCV), pp 1501–1510

  22. Liu MY, Breuel T, Kautz J (2017) Unsupervised image-to-image translation networks. In: 31st Annual Conference on Neural Information Processing Systems (NIPS), vol 30, pp 700–708

  23. Huang X, Liu M-Y, Belongie S et al (2018) Multimodal unsupervised image-to-image translation. In: Proceedings of the 15th European Conference on Computer Vision (ECCV), vol 11207, pp 172–189

  24. Zhu JY, Park T, Isola P et al (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the 16th IEEE International Conference on Computer Vision (ICCV), pp 2223–2232

  25. Chang H, Lu J, Yu F et al (2018) PairedCycleGAN: Asymmetric style transfer for applying and removing makeup. In: Proceedings of the 31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 40–48

  26. Xu L, Du Y, Zhang Y (2013) An automatic framework for example-based virtual makeup. In: 20th IEEE International Conference on Image Processing (ICIP), pp 3206–3210

  27. Li C, Zhou K, Lin S (2015) Simulating makeup through physics-based manipulation of intrinsic image layers. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 4621–4629

  28. Liu L, Xing J, Liu S et al (2014) Wow! You are so beautiful today! ACM Trans Multimedia Comput Commun Appl (TOMM) 11(1):1–22

  29. Li Y, Song L, Wu X, He R, Tan T (2018) Anti-makeup: Learning a bi-level adversarial network for makeup-invariant face verification. In: 32nd AAAI Conference on Artificial Intelligence (AAAI), pp 7057–7064

  30. Wang S, Fu Y (2016) Face behind makeup. In: 30th AAAI Conference on Artificial Intelligence (AAAI), pp 58–64

  31. Liu S, Ou X, Qian R et al (2016) Makeup like a superstar: Deep localized makeup transfer network. arXiv preprint arXiv:1604.07102

  32. Chen HJ, Hui KM, Wang SY et al (2019) BeautyGlow: On-demand makeup transfer framework with reversible generative network. In: Proceedings of the 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 10034–10042

  33. Deng H, Han C, Cai H et al (2021) Spatially-invariant style-codes controlled makeup transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 6549–6557

  34. Nguyen T, Tran AT, Hoai M (2021) Lipstick ain't enough: beyond color matching for in-the-wild makeup transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 13305–13314

  35. Lyu Y, Dong J, Peng B, Wang W, Tan T (2021) SOGAN: 3D-aware shadow and occlusion robust GAN for makeup transfer. In: Proceedings of the 29th ACM International Conference on Multimedia (MM '21), pp 3601–3609

  36. Sun Z, Chen Y, Xiong S (2022) SSAT: A symmetric semantic-aware transformer network for makeup transfer and removal. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp 2325–2334

  37. Tong WS, Tang CK, Brown MS et al (2007) Example-based cosmetic transfer. In: 15th Pacific Conference on Computer Graphics and Applications (Pacific Graphics), pp 211–218

  38. Guo D, Sim T (2009) Digital face makeup by example. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 73–79

  39. Gu Q, Wang G, Chiu MT et al (2019) LADN: Local adversarial disentangling network for facial makeup and de-makeup. In: IEEE/CVF International Conference on Computer Vision (ICCV), pp 10481–10490

  40. Jiang W, Liu S, Gao C et al (2020) PSGAN: Pose and expression robust spatial-aware GAN for customizable makeup transfer. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 5194–5202

  41. Liu S, Jiang W, Gao C et al (2022) PSGAN++: Robust detail-preserving makeup transfer and removal. IEEE Trans Pattern Anal Mach Intell 44(11):8538–8551

  42. Yu C, Wang J, Peng C et al (2018) BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV), vol 11217, pp 334–349

Acknowledgements

This work was supported by the Key Project of the Natural Science Foundation of Shaanxi Province (Grant No. 2018JZ6007) and the Key Research and Development Program of Shaanxi Province (Program No. 2021ZDLGY15-03).

Author information

Corresponding author

Correspondence to Lin Wang.

Ethics declarations

Competing Interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhu, X., Cao, X., Wang, L. et al. DCCMF-GAN: double cycle consistently constrained multi-feature discrimination GAN for makeup transfer. Multimed Tools Appl 83, 44009–44022 (2024). https://doi.org/10.1007/s11042-023-17240-6
