Neural Hair Rendering

Conference paper

Part of the proceedings: Computer Vision – ECCV 2020 (ECCV 2020)

Book series: Lecture Notes in Computer Science (LNCS, volume 12363)

Abstract

In this paper, we propose a generic neural-based hair rendering pipeline that can synthesize photo-realistic images from virtual 3D hair models. Unlike existing supervised translation methods, which require model-level similarity to preserve a consistent structure representation for both real images and fake renderings, our method adopts an unsupervised solution that works on arbitrary hair models. The key component of our method is a shared latent space that encodes appearance-invariant structure information from both domains, from which realistic renderings are generated conditioned on extra appearance inputs. This is achieved through domain-specific pre-disentangled structure representations, partially shared domain encoder layers, and a structure discriminator. We also propose a simple yet effective temporal conditioning method to enforce consistency for video sequence generation. We demonstrate the superiority of our method by testing it on a large number of portraits and comparing it with alternative baselines and state-of-the-art unsupervised image translation methods.
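The weight-sharing pattern the abstract describes can be made concrete. Below is a minimal PyTorch sketch, written for this page rather than taken from the paper: two domain-specific encoder stems (one for real photos, one for fake renderings) feed partially shared layers that produce a common appearance-invariant structure code, and a structure discriminator learns to guess the code's source domain so that adversarial training aligns the two domains. All module names, channel widths, and shapes here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the shared-latent-space architecture from the
# abstract: domain-specific stems + partially shared encoder layers,
# plus a structure discriminator for domain alignment.
import torch
import torch.nn as nn

def conv_block(cin, cout, stride=2):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
        nn.InstanceNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class SharedStructureEncoder(nn.Module):
    """Domain-specific stems followed by a shared trunk, so both domains
    map into the same appearance-invariant structure space."""
    def __init__(self):
        super().__init__()
        self.stem_real = conv_block(3, 64)   # stem for real-photo structure maps
        self.stem_fake = conv_block(3, 64)   # stem for rendered-model structure maps
        self.shared = nn.Sequential(         # partially shared encoder layers
            conv_block(64, 128),
            conv_block(128, 256),
        )

    def forward(self, x, domain):
        stem = self.stem_real if domain == "real" else self.stem_fake
        return self.shared(stem(x))

class StructureDiscriminator(nn.Module):
    """Predicts which domain a structure code came from; training the
    encoder adversarially against it makes the codes domain-invariant."""
    def __init__(self, cin=256):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(cin, 256),
            nn.Conv2d(256, 1, 3, padding=1),  # per-patch domain logits
        )

    def forward(self, code):
        return self.net(code)

# Shape check only: both domains land in the same latent space.
enc, disc = SharedStructureEncoder(), StructureDiscriminator()
z_real = enc(torch.randn(1, 3, 256, 256), "real")
z_fake = enc(torch.randn(1, 3, 256, 256), "fake")
print(z_real.shape, disc(z_fake).shape)  # (1, 256, 32, 32), (1, 1, 16, 16)
```

In the full pipeline the shared structure code would additionally condition a generator on an appearance input, and the temporal conditioning for video is a separate mechanism; the sketch covers only the shared-latent-space and structure-discriminator components.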



Author information

Correspondence to Menglei Chai.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 5018 KB)


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Chai, M., Ren, J., Tulyakov, S. (2020). Neural Hair Rendering. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12363. Springer, Cham. https://doi.org/10.1007/978-3-030-58523-5_22

  • DOI: https://doi.org/10.1007/978-3-030-58523-5_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58522-8

  • Online ISBN: 978-3-030-58523-5

  • eBook Packages: Computer Science (R0)
