DOI: 10.1145/3550469.3555385

DeepMVSHair: Deep Hair Modeling from Sparse Views

Published: 30 November 2022

Abstract

We present DeepMVSHair, the first deep learning-based method for multi-view hair strand reconstruction. The key component of our pipeline is HairMVSNet, a differentiable neural architecture that implicitly represents a spatial hair structure as a continuous 3D hair-growing direction field. Specifically, given a 3D query point, we infer its occupancy value and growing direction from observed 2D structure features. With the query point's pixel-aligned features from each input view, we utilize a view-aware transformer encoder to aggregate the anisotropic structure features into an integrated representation, which is decoded to yield the 3D occupancy and direction at the query point. Based on this implicit representation, HairMVSNet effectively gathers multi-view hair structure features and preserves high-frequency details. Guided by HairMVSNet, our hair-growing algorithm produces results faithful to the input multi-view images. To further enrich modeling details, we propose a novel image-guided multi-view strand deformation algorithm. Extensive experiments show that the results of our sparse-view method are comparable to those of state-of-the-art dense multi-view methods and significantly better than those of existing single-view and sparse-view methods. In addition, our method is an order of magnitude faster than previous multi-view hair modeling methods.
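
To make the per-query computation described in the abstract concrete, the sketch below illustrates the general pattern of such an implicit multi-view query: project a 3D point into each view, sample pixel-aligned 2D structure features, fuse the per-view tokens with a transformer encoder, and decode an occupancy value plus a 3D growing direction. This is not the authors' implementation; the module name (HairQuerySketch), FEAT_DIM, the layer sizes, and the normalized-coordinate convention are assumptions for illustration only.

```python
# Minimal sketch (not the authors' released code) of a HairMVSNet-style query:
# pixel-aligned feature sampling per view, view-aware transformer aggregation,
# and decoding of occupancy + 3D hair-growing direction at a query point.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM = 64  # assumed width of the per-view 2D structure features


class HairQuerySketch(nn.Module):
    def __init__(self, feat_dim: int = FEAT_DIM):
        super().__init__()
        # view-aware aggregation: a small transformer over one token per view
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4,
                                           batch_first=True)
        self.aggregator = nn.TransformerEncoder(layer, num_layers=2)
        # decoder heads: scalar occupancy and a 3D growing direction
        self.occ_head = nn.Linear(feat_dim, 1)
        self.dir_head = nn.Linear(feat_dim, 3)

    def forward(self, feat_maps, proj_mats, points):
        """
        feat_maps: (V, C, H, W) 2D structure feature maps, one per input view
        proj_mats: (V, 3, 4)    projection matrices mapping world points to
                                normalized image coordinates in [-1, 1] (assumed)
        points:    (N, 3)       3D query points
        returns: occupancy (N, 1) and unit growing direction (N, 3)
        """
        V = feat_maps.shape[0]
        N = points.shape[0]
        homo = torch.cat([points, points.new_ones(N, 1)], dim=-1)   # (N, 4)

        tokens = []
        for v in range(V):
            # project the query points into view v
            uvw = (proj_mats[v] @ homo.T).T                          # (N, 3)
            uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)             # (N, 2)
            # sample pixel-aligned features at the projected locations
            grid = uv.view(1, N, 1, 2)
            feat = F.grid_sample(feat_maps[v:v + 1], grid,
                                 align_corners=True)                 # (1, C, N, 1)
            tokens.append(feat[0, :, :, 0].T)                        # (N, C)

        # one token per view -> integrated representation per query point
        tokens = torch.stack(tokens, dim=1)                          # (N, V, C)
        fused = self.aggregator(tokens).mean(dim=1)                  # (N, C)

        occ = torch.sigmoid(self.occ_head(fused))                    # in [0, 1]
        direction = F.normalize(self.dir_head(fused), dim=-1)        # unit vector
        return occ, direction
```

Given such a field, strands can then be grown in the manner the abstract suggests: starting from scalp roots, repeatedly step a small distance along the predicted direction while the predicted occupancy stays above a threshold, and finally refine the resulting polylines against the input views with the image-guided strand deformation stage.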

Supplemental Material

  • MP4 File: presentation
  • MP4 File: presentation video (short version)
  • PDF File: supplementary document

Published In

SA '22: SIGGRAPH Asia 2022 Conference Papers
November 2022
482 pages
ISBN: 9781450394703
DOI: 10.1145/3550469

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. hair modeling
  2. implicit functions
  3. neural networks

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • NSFC

Conference

SA '22: SIGGRAPH Asia 2022
December 6–9, 2022
Daegu, Republic of Korea

Acceptance Rates

Overall acceptance rate: 178 of 869 submissions, 20%

Cited By

  • Towards Unified 3D Hair Reconstruction from Single-View Portraits. SIGGRAPH Asia 2024 Conference Papers (2024), 1–11. DOI: 10.1145/3680528.3687597
  • Hairmony: Fairness-aware hairstyle classification. SIGGRAPH Asia 2024 Conference Papers (2024), 1–11. DOI: 10.1145/3680528.3687582
  • MonoHair: High-Fidelity Hair Modeling from a Monocular Video. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 24164–24173. DOI: 10.1109/CVPR52733.2024.02281
  • Dr.Hair: Reconstructing Scalp-Connected Hair Strands without Pre-Training via Differentiable Rendering of Line Segments. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 20601–20611. DOI: 10.1109/CVPR52733.2024.01947
  • Text-Conditioned Generative Model of 3D Strand-Based Human Hairstyles. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 4703–4712. DOI: 10.1109/CVPR52733.2024.00450
  • A Local Appearance Model for Volumetric Capture of Diverse Hairstyles. 2024 International Conference on 3D Vision (3DV), 190–200. DOI: 10.1109/3DV62453.2024.00013
  • EMS: 3D Eyebrow Modeling from Single-View Images. ACM Transactions on Graphics 42, 6 (2023), 1–19. DOI: 10.1145/3618323
  • CT2Hair: High-Fidelity 3D Hair Modeling using Computed Tomography. ACM Transactions on Graphics 42, 4 (2023), 1–13. DOI: 10.1145/3592106
  • Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization. ACM SIGGRAPH 2023 Conference Proceedings, 1–12. DOI: 10.1145/3588432.3591494
  • Refinement of Hair Geometry by Strand Integration. Computer Graphics Forum 42, 7 (2023). DOI: 10.1111/cgf.14970
