View-Invariant Method for Calculating 2D Optical Strain

  • Conference paper
Advances in Depth Image Analysis and Applications (WDIA 2012)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 7854)

Abstract

Two-dimensional optical strain maps have been shown to be a useful feature for describing a bio-mechanical property of facial skin tissue during the non-rigid motion of facial expressions. In this paper, we propose a method for accurately estimating and modeling the three-dimensional strain exerted on the face, and we demonstrate its robustness across different depth resolutions and views. Experimental results are given for a publicly available dataset containing facial expressions captured at high depth resolution, as well as a new dataset collected using the Microsoft Kinect synchronized with two HD webcams.
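In the optical-strain literature this line of work builds on, the 2D strain map is the symmetric gradient of a dense optical-flow field between consecutive frames. The sketch below is a minimal illustration of that generic formulation, not the authors' implementation: it uses OpenCV's Farnebäck flow as a stand-in for the dense flow estimator and finite differences for the spatial derivatives, and all function and variable names are our own.

```python
import cv2
import numpy as np

def optical_strain_magnitude(prev_gray, next_gray):
    """Illustrative sketch: per-pixel 2D optical strain magnitude.

    Strain tensor: eps = 0.5 * (grad(u) + grad(u)^T), where (u, v) is the
    dense optical flow between two frames. Generic formulation, not the
    paper's exact pipeline.
    """
    # Dense optical flow; Farneback is used here only as a readily
    # available stand-in for any dense estimator.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]

    # Finite-difference spatial derivatives of the flow field.
    # np.gradient returns (d/d_row, d/d_col) = (d/dy, d/dx).
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)

    # Symmetric strain-tensor components.
    eps_xx = du_dx
    eps_yy = dv_dy
    eps_xy = 0.5 * (du_dy + dv_dx)

    # Per-pixel strain magnitude (Frobenius-style norm of the tensor).
    return np.sqrt(eps_xx**2 + eps_yy**2 + 2.0 * eps_xy**2)
```

As a usage example, two consecutive grayscale frames loaded with cv2.imread(..., cv2.IMREAD_GRAYSCALE) can be passed in, and the returned map can be visualized directly as the kind of strain feature the abstract refers to.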





Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Shreve, M., Fefilatyev, S., Bonilla, N., Hernandez, G., Goldgof, D., Sarkar, S. (2013). View-Invariant Method for Calculating 2D Optical Strain. In: Jiang, X., Bellon, O.R.P., Goldgof, D., Oishi, T. (eds) Advances in Depth Image Analysis and Applications. WDIA 2012. Lecture Notes in Computer Science, vol 7854. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40303-3_5

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-40303-3_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-40302-6

  • Online ISBN: 978-3-642-40303-3

  • eBook Packages: Computer Science (R0)
