IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Online ISSN : 1745-1337
Print ISSN : 0916-8508
Special Section on Information Theory and Its Applications
3D Face and Motion from Feature Points Using Adaptive Constrained Minima
Varin CHOUVATUT, Suthep MADARASMI, Mihran TUCERYAN

2011, Volume E94.A, Issue 11, Pages 2207-2219

Abstract

This paper presents a novel method for reconstructing the 3D geometry of camera motion and a human-face model from a video sequence. The approach combines the concept of Powell's line minimization with gradient descent. We adapted the line minimization with bracketing used in Powell's minimization to our method. However, instead of bracketing and then searching deep along a single direction for the minimum point along that direction, as done in Powell's line minimization, we achieve minimization by bracketing and searching for a direction within the bracket that yields a lower energy than the previous iteration. Thus, we do not need the large memory required by Powell's algorithm. The way we move toward a better direction is similar to classical gradient descent, except that neither a derivative calculation nor a good starting point is needed. The system's constraints are also used to control the bracketing direction. The reconstructed solution is further refined using the Levenberg-Marquardt algorithm. No average face model or markers with known coordinates are needed. Feature points defining the human face are tracked using the active appearance model. Occluded points, even in the case of self-occlusion, do not pose a problem. The reconstructed space is normalized, and the origin can be placed arbitrarily. To use the obtained reconstruction, one can rescale the computed volume to a known scale and transform the coordinate system to any other desired coordinates. This is relatively straightforward since the 3D geometry of the facial points and the camera parameters of all frames are explicitly computed. Robustness to noise and lens distortion, as well as 3D accuracy, are also demonstrated. All experiments were conducted with an off-the-shelf digital camera carried by a walking person, without any dolly, to demonstrate the robustness and practicality of the method. Our method requires neither large memory nor any particular expensive equipment.
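The abstract describes a bracketed search over candidate directions that accepts any direction lowering the energy of the previous iteration, with constraints restricting which directions may be tried. The following is a minimal sketch of that idea, not the authors' implementation: the energy function, the random generation of the direction bracket, and the names `adaptive_bracketed_descent`, `energy`, and `constraint` are all assumptions introduced here for illustration.

```python
import numpy as np

def adaptive_bracketed_descent(energy, x0, step=0.1, n_dirs=8,
                               constraint=None, max_iter=500, tol=1e-8):
    """Sketch only: instead of a deep line search along one direction as in
    Powell's method, bracket a small set of candidate directions and accept
    the first one whose trial point lowers the energy of the previous iteration."""
    x = np.asarray(x0, dtype=float)
    e_prev = energy(x)
    for _ in range(max_iter):
        improved = False
        # Bracket of candidate directions: random unit vectors here; the paper
        # instead uses the system's constraints to control the bracketing direction.
        dirs = np.random.randn(n_dirs, x.size)
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        for d in dirs:
            x_trial = x + step * d
            if constraint is not None and not constraint(x_trial):
                continue  # skip directions that violate the constraints
            e_trial = energy(x_trial)
            if e_trial < e_prev:  # any lower-energy direction is accepted
                x, e_prev = x_trial, e_trial
                improved = True
                break
        if not improved:
            step *= 0.5  # shrink the bracket when no candidate direction helps
            if step < tol:
                break
    return x, e_prev

# Example use on a simple quadratic energy with minimum at (3, 3, 3).
x_min, e_min = adaptive_bracketed_descent(lambda x: np.sum((x - 3.0) ** 2),
                                          x0=np.zeros(3))
```

Note that, as in the abstract, no derivative of the energy is evaluated; only function values at trial points inside the bracket are compared.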

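The abstract also mentions refining the reconstruction with the Levenberg-Marquardt algorithm. Below is a hedged sketch of such a refinement step using SciPy's generic LM solver rather than the authors' code; the pinhole reprojection residual, the fixed camera matrices, and the synthetic data are assumptions for illustration and do not reflect the paper's actual parameterization of camera motion and face shape.

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(flat_points, observations, cameras):
    """Hypothetical residual: reprojection error of candidate 3D points against
    2D observations in several frames, using known 3x4 projection matrices."""
    pts = flat_points.reshape(-1, 3)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    res = []
    for P, obs in zip(cameras, observations):
        proj = pts_h @ P.T                 # project into this frame
        proj = proj[:, :2] / proj[:, 2:3]  # perspective division
        res.append((proj - obs).ravel())
    return np.concatenate(res)

# Synthetic setup: two known camera matrices and a noisy initial reconstruction
# (e.g. the output of the bracketed direction search sketched above).
rng = np.random.default_rng(0)
true_pts = rng.uniform([-1, -1, 4], [1, 1, 6], size=(10, 3))
cameras = [np.hstack([np.eye(3), np.zeros((3, 1))]),
           np.hstack([np.eye(3), np.array([[0.5], [0.0], [0.0]])])]
pts_h = np.hstack([true_pts, np.ones((10, 1))])
observations = [(pts_h @ P.T)[:, :2] / (pts_h @ P.T)[:, 2:3] for P in cameras]

x0 = (true_pts + 0.05 * rng.standard_normal(true_pts.shape)).ravel()
result = least_squares(reprojection_residuals, x0, method="lm",
                       args=(observations, cameras))
refined_pts = result.x.reshape(-1, 3)
```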
© 2011 The Institute of Electronics, Information and Communication Engineers