
Dynamic hair modeling from monocular videos using deep neural networks

Published: 08 November 2019

Abstract

We introduce a deep learning based framework for modeling dynamic hair from monocular videos, which can be captured by a commodity video camera or downloaded from the Internet. The framework consists mainly of two neural networks: HairSpatNet, which infers 3D spatial features of hair geometry from 2D image features, and HairTempNet, which extracts temporal features of hair motion from video frames. The spatial features are represented as 3D occupancy fields depicting the hair volume shapes and 3D orientation fields indicating the hair growing directions. The temporal features are represented as bidirectional 3D warping fields describing the forward and backward motions of hair strands across adjacent frames. Both HairSpatNet and HairTempNet are trained with synthetic hair data. The spatial and temporal features predicted by the networks are subsequently used to grow hair strands with both spatial and temporal consistency. Experiments demonstrate that our method is capable of constructing plausible dynamic hair models that closely resemble the input video, and compares favorably to previous single-view techniques.
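As a rough illustration of the pipeline the abstract describes (not the authors' code), the sketch below grows a strand through a predicted occupancy/orientation volume and advects it to an adjacent frame with a warping field. All names and constants here (RES, STEP, grow_strand, advect_strand, the 0.5 occupancy threshold, nearest-voxel sampling) are hypothetical assumptions; the paper's actual field resolutions and strand-growing algorithm may differ.

import numpy as np

RES = 96         # voxel grid resolution of the predicted fields (assumed)
STEP = 0.5       # tracing step, in voxel units (assumed)
MAX_STEPS = 300  # cap on strand length (assumed)

def sample(field, p):
    # Nearest-voxel lookup of a volumetric field at a 3D point p
    # (a real implementation would interpolate, e.g. trilinearly).
    i, j, k = np.clip(np.rint(p).astype(int), 0, RES - 1)
    return field[i, j, k]

def grow_strand(root, occupancy, orientation):
    # Trace a single strand from a scalp root point: march along the local
    # growing direction while staying inside the predicted hair volume.
    strand = [np.asarray(root, dtype=float)]
    for _ in range(MAX_STEPS):
        p = strand[-1]
        if sample(occupancy, p) < 0.5:   # left the hair volume
            break
        d = sample(orientation, p)       # local 3D growing direction
        norm = np.linalg.norm(d)
        if norm < 1e-6:
            break
        strand.append(p + STEP * d / norm)
    return np.stack(strand)

def advect_strand(strand, warp):
    # Carry a strand to the adjacent frame using a predicted bidirectional
    # 3D warping field (forward or backward), the mechanism the abstract
    # credits with enforcing temporal consistency.
    return np.stack([p + sample(warp, p) for p in strand])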

Supplemental Material: MP4 file.



Published In

ACM Transactions on Graphics, Volume 38, Issue 6
December 2019
1292 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3355089
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 08 November 2019
Published in TOG Volume 38, Issue 6


Author Tags

  1. deep convolutional neural networks
  2. dynamic hair modeling

Qualifiers

  • Research-article

Funding Sources

  • The National Key Research & Development Program of China
  • NSF China


Cited By

  • (2024) GroomCap: High-Fidelity Prior-Free Hair Capture. ACM Transactions on Graphics 43(6), 1-15. DOI: 10.1145/3687768. Online publication date: 19-Nov-2024.
  • (2024) Towards Unified 3D Hair Reconstruction from Single-View Portraits. SIGGRAPH Asia 2024 Conference Papers, 1-11. DOI: 10.1145/3680528.3687597. Online publication date: 3-Dec-2024.
  • (2024) Hairmony: Fairness-aware hairstyle classification. SIGGRAPH Asia 2024 Conference Papers, 1-11. DOI: 10.1145/3680528.3687582. Online publication date: 3-Dec-2024.
  • (2024) MonoHair: High-Fidelity Hair Modeling from a Monocular Video. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 24164-24173. DOI: 10.1109/CVPR52733.2024.02281. Online publication date: 16-Jun-2024.
  • (2024) Dr.Hair: Reconstructing Scalp-Connected Hair Strands without Pre-Training via Differentiable Rendering of Line Segments. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 20601-20611. DOI: 10.1109/CVPR52733.2024.01947. Online publication date: 16-Jun-2024.
  • (2024) Text-Conditioned Generative Model of 3D Strand-Based Human Hairstyles. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4703-4712. DOI: 10.1109/CVPR52733.2024.00450. Online publication date: 16-Jun-2024.
  • (2024) A Local Appearance Model for Volumetric Capture of Diverse Hairstyles. 2024 International Conference on 3D Vision (3DV), 190-200. DOI: 10.1109/3DV62453.2024.00013. Online publication date: 18-Mar-2024.
  • (2024) Strand-accurate multi-view facial hair reconstruction and tracking. The Visual Computer 40(7), 4713-4724. DOI: 10.1007/s00371-024-03465-5. Online publication date: 14-Jun-2024.
  • (2023) Contactless Multi-User Virtual Hair Design Synthesis. Electronics 12(17), 3686. DOI: 10.3390/electronics12173686. Online publication date: 31-Aug-2023.
  • (2023) EMS: 3D Eyebrow Modeling from Single-View Images. ACM Transactions on Graphics 42(6), 1-19. DOI: 10.1145/3618323. Online publication date: 5-Dec-2023.
