Depth-Varying Human Video Sprite Synthesis

  • Conference paper
Transactions on Edutainment VII

Part of the book series: Lecture Notes in Computer Science (TEDUTAIN, volume 7145)

Abstract

Video texture is an appealing method for extracting and replaying natural human motion from video shots. There has been much research on video texture analysis, generation, and interactive control. However, the video sprites created by existing methods are typically restricted to constant depths, which strongly limits motion diversity. In this paper, we propose a novel depth-varying human video sprite synthesis method that significantly increases the degrees of freedom of human video sprites. We introduce a novel image distance function encoding scale variation, which can effectively compare human snapshots across different depths/scales and poses, making it possible to align similar poses at different depths. Transitions among non-consecutive frames are modeled as a 2D transformation matrix, which effectively avoids drift without requiring markers or user intervention. The synthesized depth-varying human video sprites can be seamlessly inserted into new scenes for realistic video composition. A variety of challenging examples demonstrate the effectiveness of our method.
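The abstract's key idea is a distance function that compares human snapshots while accounting for depth-induced scale changes. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch: both snapshots are normalized to a common resolution before the pose term is computed, and a penalty on the log scale ratio discourages transitions between snapshots of very different apparent size. All names (`resize_nearest`, `sprite_distance`, `scale_weight`) are hypothetical.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    # Nearest-neighbor resampling; a stand-in for a proper image resizer.
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols]

def sprite_distance(a, b, size=(64, 64), scale_weight=0.1):
    """Illustrative scale-aware distance between two grayscale snapshots.

    Normalizing to a common resolution makes the pose term (RMS pixel
    difference) independent of depth/scale; the second term penalizes
    apparent-size mismatch, measured as the absolute log height ratio.
    """
    ha = a.shape[0]
    hb = b.shape[0]
    an = resize_nearest(a, *size).astype(np.float64)
    bn = resize_nearest(b, *size).astype(np.float64)
    pose_term = np.sqrt(np.mean((an - bn) ** 2))
    scale_term = abs(np.log(ha / hb))  # depth/scale mismatch penalty
    return pose_term + scale_weight * scale_term
```

With `scale_weight = 0`, this reduces to a plain scale-normalized appearance distance; the penalty term is what lets a transition graph prefer smooth depth changes over abrupt ones.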

This work was supported by the 973 Program of China (No. 2009CB320802), NSF of China (Nos. 60633070 and 60903135), and a China Postdoctoral Science Foundation funded project (No. 20100470092).




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hua, W., Yang, W., Dong, Z., Zhang, G. (2012). Depth-Varying Human Video Sprite Synthesis. In: Pan, Z., Cheok, A.D., Müller, W., Chang, M., Zhang, M. (eds) Transactions on Edutainment VII. Lecture Notes in Computer Science, vol 7145. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29050-3_4

  • DOI: https://doi.org/10.1007/978-3-642-29050-3_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-29049-7

  • Online ISBN: 978-3-642-29050-3

  • eBook Packages: Computer Science, Computer Science (R0)
