ABSTRACT
The goal of face makeup transfer is to transfer the makeup of a specified reference image onto the non-makeup face in a target image while preserving the target's facial structure. To address the problems that existing facial makeup transfer networks are prone to facial deformation and offer no control over makeup strength, we propose a facial makeup transfer algorithm based on a generative adversarial network that fuses attention with masks of the makeup regions. First, masks for lip makeup, eye makeup, and foundation are generated from detected facial landmarks; these masks are then combined with the features extracted by the encoder to produce the corresponding feature matrices, which keeps the non-makeup regions consistent before and after transfer. Second, an attention module integrates the masked features into a global style representation, synthesizing a high-quality makeup image while keeping the facial structure intact. Our method not only preserves the facial structure during transfer but also produces a more natural makeup transfer effect. Experimental results show that the method trains well on the dataset, that the generated makeup transfer images are visually superior to those of PSGAN and BeautyGAN, and that the strength of the transferred makeup can be adjusted by a parameter.
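The mask-guided blending with adjustable strength described above can be sketched as follows. This is a minimal illustration, not the paper's network: `blend_makeup_features`, the toy feature maps, and the use of plain NumPy arrays in place of encoder features are all assumptions made for clarity; the only ideas taken from the abstract are that per-region masks restrict the transfer to makeup areas and that a scalar strength parameter scales the blend.

```python
import numpy as np

def blend_makeup_features(target_feat, ref_feat, masks, strength=1.0):
    """Mask-guided blend (illustrative): inside the union of the makeup
    masks, target features are interpolated toward the reference
    (makeup) features by `strength`; non-makeup regions stay unchanged."""
    # Union of the lip, eye, and foundation masks (values in [0, 1]).
    region = np.clip(sum(masks), 0.0, 1.0)
    alpha = strength * region  # per-pixel blending weight
    return (1.0 - alpha) * target_feat + alpha * ref_feat

# Toy 4x4 single-channel "feature maps" standing in for encoder outputs.
target = np.zeros((4, 4))   # non-makeup face features
ref = np.ones((4, 4))       # reference makeup features
lip_mask = np.zeros((4, 4)); lip_mask[2:, 1:3] = 1.0
eye_mask = np.zeros((4, 4)); eye_mask[0, 0] = 1.0

out = blend_makeup_features(target, ref, [lip_mask, eye_mask], strength=0.5)
print(out[2, 1])  # masked pixel moved halfway toward the reference -> 0.5
print(out[1, 3])  # unmasked pixel unchanged -> 0.0
```

Setting `strength=1.0` transfers the full reference makeup in the masked regions, while intermediate values give the graded makeup intensity the abstract refers to.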
REFERENCES
- Guo D, Sim T. Digital face makeup by example[C]//2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009: 73-79. https://doi.org/10.1109/CVPR.2009.5206833
- Liu S, Ou X, Qian R, et al. Makeup like a superstar: Deep localized makeup transfer network[J]. arXiv preprint arXiv:1604.07102, 2016.
- Li T, Qian R, Dong C, et al. BeautyGAN: Instance-level facial makeup transfer with deep generative adversarial network[C]//Proceedings of the 26th ACM International Conference on Multimedia. 2018: 645-653. https://doi.org/10.1145/3240508.3240618
- Jiang W, Liu S, Gao C, et al. PSGAN: Pose and expression robust spatial-aware GAN for customizable makeup transfer[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 5194-5202.
- Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[J]. Advances in Neural Information Processing Systems, 2014, 27. https://doi.org/10.1145/3422622
- Yang T, Ren P, Xie X, et al. GAN prior embedded network for blind face restoration in the wild[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 672-681.
- Zhu J Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2223-2232.
- Marriott R T, Romdhani S, Chen L. A 3D GAN for improved large-pose facial recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 13445-13455.
- Gatys L A, Ecker A S, Bethge M. Image style transfer using convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2414-2423.
- Johnson J, Alahi A, Fei-Fei L. Perceptual losses for real-time style transfer and super-resolution[C]//Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II. Springer International Publishing, 2016: 694-711.
- Luan F, Paris S, Shechtman E, et al. Deep photo style transfer[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4990-4998.
- Ulyanov D, Vedaldi A, Lempitsky V. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 6924-6932.
- Sheng L, Lin Z, Shao J, et al. Avatar-Net: Multi-scale zero-shot style transfer by feature decoration[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8242-8250.
- Yao X, Puy G, Pérez P. Photo style transfer with consistency losses[C]//2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019: 2314-2318.
- Li Y, Liu M Y, Li X, et al. A closed-form solution to photorealistic image stylization[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 453-468.
- Deng Y, Tang F, Dong W, et al. Arbitrary style transfer via multi-adaptation network[C]//Proceedings of the 28th ACM International Conference on Multimedia. 2020: 2719-2727. https://doi.org/10.1145/3394171.3414015
- Choi Y, Choi M, Kim M, et al. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8789-8797.
- Li C, Zhou K, Lin S. Simulating makeup through physics-based manipulation of intrinsic image layers[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 4621-4629.
- Horita D, Aizawa K. SLGAN: Style- and latent-guided generative adversarial network for desirable makeup transfer and removal[C]//Proceedings of the 4th ACM International Conference on Multimedia in Asia. 2022: 1-5. https://doi.org/10.1145/3551626.3564967
- Gu Q, Wang G, Chiu M T, et al. LADN: Local adversarial disentangling network for facial makeup and de-makeup[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 10481-10490.
- Chen L C, Papandreou G, Schroff F, et al. Rethinking atrous convolution for semantic image segmentation[J]. arXiv preprint arXiv:1706.05587, 2017.
- Sun Z, Liu F, Liu W, et al. Local facial makeup transfer via disentangled representation[C]//Proceedings of the Asian Conference on Computer Vision. 2020: 459-473. https://doi.org/10.1007/978-3-030-69538-5_28
Index Terms
- Face makeup transfer based on Generative Adversarial Network