Abstract
Makeup transfer aims to transfer the makeup of a face in a reference image to another face in a source image. Most mainstream makeup transfer methods learn the makeup directly and therefore often treat interference information (pose, illumination, shadow and background) as part of the makeup. We propose a novel Double Cycle Consistently constrained Multi-Feature Discrimination Generative Adversarial Network (DCCMF-GAN) for makeup transfer, built on the separation of "makeup" and "content". DCCMF-GAN nests two cycle reconstruction networks, CycleI and CycleII, which are constrained by the cycle consistency losses "cycle consistency1" and "cycle consistency2", respectively. Each cycle network consists of two makeup-transfer generators G and two multi-feature discriminators. G first encodes the source and reference images to separate their "makeup" from their "content"; it then fuses the source content with the reference makeup for makeup application, and merges the source makeup with the reference content for makeup removal. The double cycle consistency constraint, together with adversarial learning against multi-feature discriminators covering identity, global makeup and focused local makeup, ensures visually pleasing makeup transfer results. Compared with several state-of-the-art makeup transfer methods, the proposed method is insensitive to pose, illumination, shadow, expression, aging and other interference, and achieves high-quality makeup transfer with strong robustness.
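To make the disentangle-then-fuse idea and the double cycle constraint concrete, the following is a minimal PyTorch-style sketch. The encoder/decoder modules, their names, and the exact form of CycleI/CycleII are illustrative assumptions, not the authors' released implementation.

import torch.nn as nn

class MakeupTransferG(nn.Module):
    """G: encode 'content' and 'makeup' separately, then decode fused pairs."""

    def __init__(self, content_enc, makeup_enc, decoder):
        super().__init__()
        self.content_enc = content_enc   # image -> identity/content code (assumed module)
        self.makeup_enc = makeup_enc     # image -> makeup code (assumed module)
        self.decoder = decoder           # (content code, makeup code) -> image (assumed module)

    def forward(self, x_src, y_ref):
        c_x, m_x = self.content_enc(x_src), self.makeup_enc(x_src)
        c_y, m_y = self.content_enc(y_ref), self.makeup_enc(y_ref)
        x_madeup = self.decoder(c_x, m_y)     # makeup application: source content + reference makeup
        y_demakeup = self.decoder(c_y, m_x)   # makeup removal: reference content + source makeup
        return x_madeup, y_demakeup

l1 = nn.L1Loss()

def double_cycle_loss(G, x_src, y_ref):
    # Cycle I: transfer, then transfer back -> should recover the input pair.
    x_t, y_t = G(x_src, y_ref)
    x_rec, y_rec = G(x_t, y_t)
    cycle1 = l1(x_rec, x_src) + l1(y_rec, y_ref)

    # Cycle II (assumed form): the reconstructed pair must itself map back to
    # the transferred pair, giving the second, nested consistency constraint.
    x_t2, y_t2 = G(x_rec, y_rec)
    cycle2 = l1(x_t2, x_t) + l1(y_t2, y_t)
    return cycle1 + cycle2

In training, such a reconstruction term would be combined with the adversarial losses from the identity, global-makeup and focused local-makeup discriminators described in the abstract.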
Data Availability
The datasets supporting the findings of this article are included in the published BeautyGAN article [8] and its supplementary information files.
References
Goodfellow IJ, Pouget-Abadie J, Mirza M et al (2014) Generative adversarial nets. In: 28th Conference on Neural Information Processing Systems (NIPS), vol 27, pp 2672–2680
Jin X, Han R, Ning N et al (2019) Facial makeup transfer combining illumination transfer. IEEE Access 7:80928–80936
Wan Z, Chen H, An J, Jiang W, Yao C, Luo J (2022) Facial attribute transformers for precise and robust makeup transfer. In: 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp 3113–3122
Bastanfard A, Takahashi H, Nakajima M (2004) Toward E-appearance of human face and hair by age, expression and rejuvenation. In: 2004 International Conference on Cyberworlds (CW), pp 306–311
Bastanfard A, Bastanfard O, Takahashi H, Nakajima M (2004) Toward anthropometrics simulation of face rejuvenation and skin cosmetic: Research Articles. Comput Animat Virtual Worlds 15(3–4):347–352
Sun Y, Tang J, Shu X, Sun Z, Tistarelli M (2020) Facial age synthesis with label distribution-guided generative adversarial network. IEEE Trans Inf Forensics Secur 15(2):2679–2691
Liu S, Sun Y, Zhu D, Bao R, Wang W, Shu X, Yan S (2017) Face aging with contextual generative adversarial nets. In: Proceedings of the 25th ACM International Conference on Multimedia (MM), pp 82–90
Li T, Qian R, Dong C et al (2018) BeautyGAN: Instance-level facial makeup transfer with deep generative adversarial network. In: Proceedings of the 26th ACM International Conference on Multimedia (MM), pp 645–653
Hertzmann A (1998) Painterly rendering with curved brush strokes of multiple sizes. In: Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pp 453–460
Hertzmann A (2003) A survey of stroke-based rendering. IEEE Comput Graphics Appl 23(4):70–81
Jacobs C, Salesin D, Oliver N et al (2001) Image analogies. In: Proceedings of SIGGRAPH, pp 327–340
Jamriška O, Sochorová Š, Texler O et al (2019) Stylizing video by example. ACM Trans Graph 38(4):1–11
He K, Sun J, Tang X (2012) Guided image filtering. IEEE Trans Pattern Anal Mach Intell 35(6):1397–1409
Li S, Kang X, Hu J (2013) Image fusion with guided filtering. IEEE Trans Image Process 22(7):2864–2875
Li C, Wand M (2016) Precomputed real-time texture synthesis with Markovian generative adversarial networks. In: 14th European Conference on Computer Vision (ECCV), vol 9907, pp 702–716
Jiahao L (2022) Transformer-based neural texture synthesis and style transfer. In: Proceedings of the 4th Asia Pacific Information Technology Conference (APIT), pp 88–95
Gatys LA, Ecker AS, Bethge M (2016) Image style transfer using convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 2414–2423
Gatys L, Ecker A, Bethge M (2016) A neural algorithm of artistic style. J Vis 16(12):326–326
Johnson J, Alahi A, Fei-Fei L (2016) Perceptual losses for real-time style transfer and super-resolution. In: 14th European Conference on Computer Vision (ECCV), vol 9906, pp 694–711
Ulyanov D, Vedaldi A, Lempitsky V (2017) Improved texture networks: maximizing quality and diversity in feed-forward stylization and texture synthesis. In: Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 4105–4113
Huang X, Belongie S (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the 16th IEEE International Conference on Computer Vision (ICCV), pp 1501–1510
Liu MY, Breuel T, Kautz J (2017) Unsupervised image-to-image translation networks. In: 31st Annual Conference on Neural Information Processing Systems (NIPS), vol 30, pp 700–708
Huang X, Liu M-Y, Belongie S et al (2018) Multimodal unsupervised image-to-image translation. In: Proceedings of the 15th European Conference on Computer Vision (ECCV), vol 11207, pp 172–189
Zhu JY, Park T, Isola P et al (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the 16th IEEE International Conference on Computer Vision (ICCV), pp 2223–2232
Chang H, Lu J, Yu F et al (2018) PairedCycleGAN: Asymmetric style transfer for applying and removing makeup. In: Proceedings of the 31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 40–48
Xu L, Du Y, Zhang Y (2013) An automatic framework for example-based virtual makeup. In: 20th IEEE International Conference on Image Processing (ICIP), pp 3206–3210
Li C, Zhou K, Lin S (2015) Simulating makeup through physics-based manipulation of intrinsic image layers. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 4621–4629
Liu L, Xing J, Liu S et al (2014) Wow! you are so beautiful today! ACM Trans Multimedia Comput Commun Appl (TOMM) 11(1):1–22
Li Y, Song L, Wu X, He R, Tan T (2018) Anti-makeup: Learning a bi-level adversarial network for makeup-invariant face verification. In: 32nd AAAI Conference on Artificial Intelligence (AAAI), pp 7057–7064
Wang S, Fu Y (2016) Face behind makeup. In: 30th AAAI Conference on Artificial Intelligence (AAAI), pp 58–64
Liu S, Ou X, Qian R et al (2016) Makeup like a superstar: Deep localized makeup transfer network. arXiv preprint arXiv:1604.07102
Chen HJ, Hui KM, Wang SY et al (2019) BeautyGlow: On-demand makeup transfer framework with reversible generative network. In: Proceedings of the 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 10034–10042
Deng H, Han C, Cai H et al (2021) Spatially-invariant style-codes controlled makeup transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 6549–6557
Nguyen T, Tran AT, Hoai M (2021) Lipstick ain't enough: Beyond color matching for in-the-wild makeup transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 13305–13314
Lyu Y, Dong J, Peng B, Wang W, Tan T (2021) SOGAN: 3D-aware shadow and occlusion robust GAN for makeup transfer. In: Proceedings of the 29th ACM International Conference on Multimedia (MM '21), pp 3601–3609
Sun Z, Chen Y, Xiong S (2022) SSAT: A symmetric semantic-aware transformer network for makeup transfer and removal. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp 2325–2334
Tong WS, Tang CK, Brown MS et al (2007) Example-based cosmetic transfer. In: 15th Pacific Conference on Computer Graphics and Applications (Pacific Graphics), pp 211–218
Guo D, Sim T (2009) Digital face makeup by example. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 73–79
Gu Q, Wang G, Chiu MT et al (2019) LADN: Local adversarial disentangling network for facial makeup and de-makeup. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp 10481–10490
Jiang W, Liu S, Gao C et al (2020) PSGAN: Pose and expression robust spatial-aware GAN for customizable makeup transfer. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 5194–5202
Liu S, Jiang W, Gao C et al (2022) PSGAN++: Robust detail-preserving makeup transfer and removal. IEEE Trans Pattern Anal Mach Intell 44(11):8538–8551
Yu C, Wang J, Peng C et al (2018) BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV), vol 11217, pp 334–349
Acknowledgements
This work was supported by the key project of the Natural Science Foundation of Shaanxi Province (Grant No. 2018JZ6007) and the Key Research and Development Program of Shaanxi Province (Program No. 2021ZDLGY15-03).
Ethics declarations
Competing Interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Zhu, X., Cao, X., Wang, L. et al. DCCMF-GAN: double cycle consistently constrained multi-feature discrimination GAN for makeup transfer. Multimed Tools Appl 83, 44009–44022 (2024). https://doi.org/10.1007/s11042-023-17240-6