DOI: 10.1145/3635118.3635125
research-article

Face makeup transfer based on Generative Adversarial Network

Published: 05 February 2024

ABSTRACT

The goal of face makeup transfer is to transfer the makeup from a specified reference image onto the non-makeup face in a target image while preserving the facial structure of the target. Existing facial makeup transfer networks are prone to facial deformation and offer no control over makeup strength. To address these problems, we propose a generative adversarial network that fuses an attention mechanism with masks of the makeup regions. First, masks for lip makeup, eye makeup, and base makeup are generated from detected facial landmarks; these masks are then combined with the features extracted by the encoder to produce feature matrices that keep the non-makeup regions consistent before and after transfer. Second, an attention module integrates these features into a global style representation, synthesizing a high-quality makeup image while preserving the integrity of the facial structure. Our method keeps the facial structure unchanged during transfer and produces a more natural makeup transfer effect. Experimental results show that the method trains well on the dataset, that the generated makeup transfer images are visually superior to those of PSGAN and BeautyGAN, and that the strength of the transferred makeup can be adjusted by a parameter.
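The mask-guided, strength-adjustable transfer described above can be sketched as a region-masked blend (a minimal NumPy illustration only, not the paper's network: the function name, the simple linear blend, and the toy arrays are all assumptions; in the actual method the blending operates on encoder feature maps rather than raw pixels):

```python
import numpy as np

def blend_makeup(source, transferred, mask, alpha=1.0):
    """Blend transferred makeup into the source, restricted to masked regions.

    mask selects the makeup regions (1 = lip/eye/base area, 0 = keep source);
    alpha controls makeup strength (0 = no makeup, 1 = full transfer).
    Non-makeup regions are copied from the source unchanged, which is how
    consistency outside the makeup areas is preserved.
    """
    makeup = alpha * transferred + (1.0 - alpha) * source
    return mask * makeup + (1.0 - mask) * source

# Toy 2x2 single-channel "images": source is bare skin (0), reference is full makeup (1).
src = np.zeros((2, 2))
ref = np.ones((2, 2))
m = np.array([[1.0, 0.0],
              [0.0, 1.0]])  # placeholder makeup-region mask

full = blend_makeup(src, ref, m, alpha=1.0)  # masked pixels take the reference value
half = blend_makeup(src, ref, m, alpha=0.5)  # masked pixels take half-strength makeup
```

Varying `alpha` between 0 and 1 gives the controllable makeup strength; because the second term of the blend always returns the untouched source wherever the mask is 0, the face outside the makeup regions cannot deform.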

References

  1. Guo D, Sim T. Digital face makeup by example[C]//2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009: 73-79. https://doi.org/10.1109/CVPR.2009.5206833
  2. Liu S, Ou X, Qian R, et al. Makeup like a superstar: Deep localized makeup transfer network[J]. arXiv preprint arXiv:1604.07102, 2016.
  3. Li T, Qian R, Dong C, et al. BeautyGAN: Instance-level facial makeup transfer with deep generative adversarial network[C]//Proceedings of the 26th ACM International Conference on Multimedia. 2018: 645-653. https://doi.org/10.1145/3240508.3240618
  4. Jiang W, Liu S, Gao C, et al. PSGAN: Pose and expression robust spatial-aware GAN for customizable makeup transfer[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 5194-5202.
  5. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[J]. Advances in Neural Information Processing Systems, 2014, 27. https://doi.org/10.1145/3422622
  6. Yang T, Ren P, Xie X, et al. GAN prior embedded network for blind face restoration in the wild[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 672-681.
  7. Zhu J Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2223-2232.
  8. Marriott R T, Romdhani S, Chen L. A 3D GAN for improved large-pose facial recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 13445-13455.
  9. Gatys L A, Ecker A S, Bethge M. Image style transfer using convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2414-2423.
  10. Johnson J, Alahi A, Fei-Fei L. Perceptual losses for real-time style transfer and super-resolution[C]//Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14. Springer International Publishing, 2016: 694-711.
  11. Luan F, Paris S, Shechtman E, et al. Deep photo style transfer[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4990-4998.
  12. Ulyanov D, Vedaldi A, Lempitsky V. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 6924-6932.
  13. Sheng L, Lin Z, Shao J, et al. Avatar-Net: Multi-scale zero-shot style transfer by feature decoration[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8242-8250.
  14. Yao X, Puy G, Pérez P. Photo style transfer with consistency losses[C]//2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019: 2314-2318.
  15. Li Y, Liu M Y, Li X, et al. A closed-form solution to photorealistic image stylization[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 453-468.
  16. Deng Y, Tang F, Dong W, et al. Arbitrary style transfer via multi-adaptation network[C]//Proceedings of the 28th ACM International Conference on Multimedia. 2020: 2719-2727. https://doi.org/10.1145/3394171.3414015
  17. Choi Y, Choi M, Kim M, et al. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8789-8797.
  18. Li C, Zhou K, Lin S. Simulating makeup through physics-based manipulation of intrinsic image layers[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 4621-4629.
  19. Horita D, Aizawa K. SLGAN: Style- and latent-guided generative adversarial network for desirable makeup transfer and removal[C]//Proceedings of the 4th ACM International Conference on Multimedia in Asia. 2022: 1-5. https://doi.org/10.1145/3551626.3564967
  20. Gu Q, Wang G, Chiu M T, et al. LADN: Local adversarial disentangling network for facial makeup and de-makeup[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 10481-10490.
  21. Chen L C, Papandreou G, Schroff F, et al. Rethinking atrous convolution for semantic image segmentation[J]. arXiv preprint arXiv:1706.05587, 2017.
  22. Sun Z, Liu F, Liu W, et al. Local facial makeup transfer via disentangled representation[C]//Proceedings of the Asian Conference on Computer Vision. 2020: 459-473. https://doi.org/10.1007/978-3-030-69538-5_28

Published in

ICAIP '23: Proceedings of the 2023 7th International Conference on Advances in Image Processing
November 2023, 90 pages
ISBN: 9798400708275
DOI: 10.1145/3635118

Copyright © 2023 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

Published: 5 February 2024
