
In-the-Wild Facial Highlight Removal via Generative Adversarial Networks

  • Conference paper
Artificial Intelligence (CICAI 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13069)


Abstract

Facial highlight removal techniques aim to remove specular highlights from facial images, which improves image quality and facilitates downstream tasks such as face recognition and reconstruction. However, previous learning-based techniques often fail on in-the-wild images: because they require paired training data (images with and without highlights), their models are typically trained on synthetic or laboratory-captured pairs. In contrast, we propose a highlight removal network that is pre-trained on a synthetic dataset and then fine-tuned on unpaired in-the-wild images. To achieve this, we propose a highlight mask guidance training technique that enables Generative Adversarial Networks (GANs) to exploit in-the-wild images when training a highlight removal network. Our key observation is that although almost all in-the-wild facial images contain highlights in some regions, small highlight-free patches can still provide useful information to guide the highlight removal procedure. This motivates us to train a region-based discriminator that distinguishes highlight from non-highlight regions in a facial image, and to use it to fine-tune the generator. Experiments show that our technique achieves higher-quality results than state-of-the-art highlight removal techniques, especially on in-the-wild images.
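The abstract's core idea, using the highlight-free patches of unpaired in-the-wild images as "real" supervision for a region-based discriminator, can be sketched as follows. This is a minimal illustration under assumed details: the patch size, the `thresh` parameter, and the LSGAN-style least-squares loss are hypothetical choices for exposition, not the paper's exact formulation.

```python
import numpy as np

def patch_labels(highlight_mask, patch=8, thresh=0.05):
    """Label each patch of a binary highlight mask:
    1.0 = highlight-free patch (usable as 'real' guidance),
    0.0 = patch containing highlight (excluded from the real term).
    `thresh` is the tolerated fraction of highlighted pixels (an assumption)."""
    h, w = highlight_mask.shape
    labels = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            block = highlight_mask[i * patch:(i + 1) * patch,
                                   j * patch:(j + 1) * patch]
            labels[i, j] = 1.0 if block.mean() <= thresh else 0.0
    return labels

def masked_disc_loss(d_real, d_fake, labels):
    """Region-based least-squares adversarial loss: the discriminator's
    per-patch scores on the real image are pushed toward 1 only on
    highlight-free patches, and its scores on the generator output are
    pushed toward 0 everywhere."""
    real_term = (labels * (d_real - 1.0) ** 2).sum() / max(labels.sum(), 1.0)
    fake_term = (d_fake ** 2).mean()
    return real_term + fake_term
```

With this masking, highlight regions of an unpaired real image never contribute a "real" gradient, so the discriminator is not taught that highlights look realistic; only the clean patches guide the generator during fine-tuning.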



Acknowledgements

This work was supported by the National Key R&D Program of China 2018YFA0704000, the NSFC (No. 61822111, 61727808, 61671268, 62025108) and Beijing Natural Science Foundation (JQ19015, L182052).

Author information


Corresponding author

Correspondence to Xun Cao.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 22885 KB)


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, Z., Lu, M., Xu, F., Cao, X. (2021). In-the-Wild Facial Highlight Removal via Generative Adversarial Networks. In: Fang, L., Chen, Y., Zhai, G., Wang, J., Wang, R., Dong, W. (eds) Artificial Intelligence. CICAI 2021. Lecture Notes in Computer Science, vol 13069. Springer, Cham. https://doi.org/10.1007/978-3-030-93046-2_27

  • DOI: https://doi.org/10.1007/978-3-030-93046-2_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93045-5

  • Online ISBN: 978-3-030-93046-2

  • eBook Packages: Computer Science, Computer Science (R0)
