Abstract
We present a novel technique for acquiring detailed facial geometry of a dynamic performance using extended spherical gradient illumination. Key to our method is a new algorithm for jointly aligning two photographs, taken under a gradient illumination condition and its complement, to a full-on tracking frame, providing dense temporal correspondences under changing lighting conditions. We employ a two-step algorithm to reconstruct detailed geometry for every captured frame. In the first step, we coalesce information from the gradient illumination frames into the full-on tracking frame, forming a temporally aligned photometric normal map, which is subsequently combined with dense stereo correspondences to yield detailed geometry. In the second step, we propagate the detailed geometry back to every captured frame, guided by the previously computed dense correspondences. We demonstrate reconstructed dynamic facial geometry for every captured frame, at acquisition rates ranging from moderate up to video rate.
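The photometric normal map mentioned above can be recovered from a spherical gradient illumination image and its complement using the standard ratio construction of Ma et al. [2007]: for a Lambertian surface, the per-pixel difference of the two images divided by their sum (the full-on condition) gives each normal component. A minimal sketch of this step, assuming pre-aligned gradient/complement image pairs for the X, Y, and Z gradients (the function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def normals_from_gradients(grad, comp, eps=1e-6):
    """Estimate per-pixel surface normals from spherical gradient
    illumination images and their complements (after Ma et al. 2007).

    grad, comp : arrays of shape (3, H, W) holding the intensities
        observed under the X, Y, Z gradient conditions and their
        complements, respectively, assumed temporally aligned.
    Returns a unit-length normal map of shape (H, W, 3).
    """
    # For a Lambertian surface, (grad - comp) / (grad + comp) recovers
    # each normal component in [-1, 1]; the sum acts as the full-on image.
    n = (grad - comp) / (grad + comp + eps)   # shape (3, H, W)
    n = np.moveaxis(n, 0, -1)                 # shape (H, W, 3)
    # Re-normalize per pixel to compensate for noise and the epsilon.
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, eps)
```

In the method described in the abstract, the gradient and complement frames are first optically aligned to the full-on tracking frame, so that this ratio is evaluated on temporally corresponding pixels rather than on raw consecutive frames.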
Temporal upsampling of performance geometry using photometric alignment