
Parametric fur from an image

Original Article · The Visual Computer

Abstract

Parametric fur is a powerful tool for content creation in computer graphics. However, setting the parameters to achieve a desired result is difficult. To address this problem, we propose a method that automatically estimates appropriate parameters from an image. We formulate the process as an optimization problem in which the system searches for parameters such that the appearance of the rendered parametric fur matches the appearance of the real fur in the input image as closely as possible. In each optimization step, we render an image using an off-the-shelf fur renderer and measure image similarity using a pre-trained deep convolutional neural network model. We demonstrate that the proposed method can estimate fur parameters appropriately for a wide range of fur types.
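For concreteness, the following is a minimal sketch of the analysis-by-synthesis loop described in the abstract. It assumes a CMA-ES optimizer (via the `cma` package) and an ImageNet-pre-trained VGG16 as the feature extractor; both are illustrative stand-ins rather than the paper's confirmed components, and `render_fur` is a hypothetical bridge to the external fur renderer.

```python
import numpy as np
import cma  # pip install cma; CMA-ES is an illustrative optimizer choice
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import Model

N_PARAMS = 25  # number of normalized fur parameters (see "Fur parameters")

def render_fur(params: np.ndarray) -> np.ndarray:
    """Hypothetical bridge to the off-the-shelf fur renderer: maps a
    parameter vector in [0, 1]^25 to an RGB image of shape (H, W, 3)."""
    raise NotImplementedError("invoke the external fur renderer here")

# Pre-trained CNN truncated at an intermediate convolutional layer,
# used as a perceptual feature extractor.
_base = VGG16(weights="imagenet", include_top=False)
_extractor = Model(_base.input, _base.get_layer("block3_conv3").output)

def perceptual_features(image: np.ndarray) -> np.ndarray:
    x = preprocess_input(image[np.newaxis].astype(np.float32))
    return _extractor.predict(x, verbose=0).ravel()

def loss(params: np.ndarray, target_feats: np.ndarray) -> float:
    """Squared distance between CNN features of the rendered candidate
    and of the reference photograph (both assumed to be the same size)."""
    rendered = render_fur(np.clip(params, 0.0, 1.0))
    return float(np.sum((perceptual_features(rendered) - target_feats) ** 2))

def estimate_parameters(reference_image: np.ndarray) -> np.ndarray:
    """Search the normalized parameter space for the best-matching fur."""
    target_feats = perceptual_features(reference_image)
    es = cma.CMAEvolutionStrategy(N_PARAMS * [0.5], 0.25, {"bounds": [0.0, 1.0]})
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [loss(np.asarray(c), target_feats) for c in candidates])
    return es.result.xbest
```

A derivative-free optimizer is a natural fit here because the renderer is treated as a black box, so no gradients of the image with respect to the fur parameters are available.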




Acknowledgements

We would like to thank Zheyuan Cai for helping with the preliminary experiment.

Funding

This work was partially supported by JSPS KAKENHI (Grant Numbers JP17H00752 and 19J13492). Seung-Tak Noh is funded by a JSPS Research Fellowship for Young Scientists.

Author information


Corresponding author

Correspondence to Seung-Tak Noh.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 31710 KB)

Fur parameters

The selected fur parameters are listed in Table 1. We selected 25 of the 89 parameters in Maya Fur and converted them to a normalized space; a sketch of this mapping follows. In our experiments, the remaining 64 parameters were fixed at their default values.
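As an illustration of the normalization step, the following sketch maps a parameter between its native Maya Fur range and the normalized [0, 1] space searched by the optimizer. The ranges shown are hypothetical placeholders; the actual 25 parameters are those listed in Table 1.

```python
# Illustrative mapping between native Maya Fur parameter ranges and the
# normalized [0, 1] space used during optimization. The ranges below are
# hypothetical placeholders, not the values from Table 1.
PARAM_RANGES = {
    "Length":      (0.0, 5.0),  # hypothetical range, in scene units
    "Inclination": (0.0, 1.0),  # hypothetical range
}

def normalize(name: str, value: float) -> float:
    """Native parameter value -> optimizer space [0, 1]."""
    lo, hi = PARAM_RANGES[name]
    return (value - lo) / (hi - lo)

def denormalize(name: str, t: float) -> float:
    """Optimizer space [0, 1] -> native parameter value."""
    lo, hi = PARAM_RANGES[name]
    return lo + t * (hi - lo)
```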

Table 1 The 25 selected parameters in our framework: 15 relate to fur geometry and 10 to color


About this article


Cite this article

Noh, ST., Takahashi, K., Adachi, M. et al. Parametric fur from an image. Vis Comput 37, 1129–1138 (2021). https://doi.org/10.1007/s00371-020-01857-x
