DOI: 10.1145/2380116.2380170

3D puppetry: a Kinect-based interface for 3D animation

Published: 7 October 2012

ABSTRACT

We present a system for producing 3D animations using physical objects (i.e., puppets) as input. Puppeteers can load 3D models of familiar rigid objects, including toys, into our system and use them as puppets for an animation. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Our system uses a combination of image-feature matching and 3D shape matching to identify and track the physical puppets. It then renders the corresponding 3D models into a virtual set. Our system operates in real time so that the puppeteer can immediately see the resulting animation and make adjustments on the fly. It also provides 6D virtual camera and lighting controls, which the puppeteer can adjust before, during, or after a performance. Finally, our system supports layered animations to help puppeteers produce animations in which several characters move at the same time. We demonstrate the accessibility of our system with a variety of animations created by puppeteers with no prior animation experience.
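The abstract describes a two-stage tracking approach: image-feature matching to identify which puppet is in view and obtain a rough pose, followed by 3D shape matching against the Kinect depth data to refine that pose. The sketch below illustrates one way such a per-frame loop could be structured; the library choices (OpenCV SIFT/PnP, Open3D ICP) and every function name, data field, and threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical per-frame sketch of the identify-then-refine tracking loop the
# abstract describes. All names, thresholds, and library choices are assumptions.
import cv2
import numpy as np
import open3d as o3d

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def identify_and_pose(color_img, depth_cloud, puppets, K):
    """color_img: Kinect RGB frame; depth_cloud: Open3D point cloud built from the
    depth frame; puppets: precomputed templates (SIFT descriptors, the 3D model
    positions of those features, and a model point cloud); K: camera intrinsics."""
    gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:
        return None

    # Identification: pick the puppet whose template features match best.
    best_puppet, best_matches = None, []
    for p in puppets:
        matches = matcher.match(p["template_desc"], descriptors)
        if len(matches) > len(best_matches):
            best_puppet, best_matches = p, matches
    if len(best_matches) < 12:        # too little evidence that any puppet is visible
        return None

    # Rough 6-DoF pose from 2D-3D correspondences (model points -> image points).
    obj_pts = np.float32([best_puppet["model_pts"][m.queryIdx] for m in best_matches])
    img_pts = np.float32([keypoints[m.trainIdx].pt for m in best_matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    if not ok:
        return None
    T_init = np.eye(4)
    T_init[:3, :3] = cv2.Rodrigues(rvec)[0]
    T_init[:3, 3] = tvec.ravel()

    # Refinement: ICP between the puppet's model cloud and the observed depth cloud.
    icp = o3d.pipelines.registration.registration_icp(
        best_puppet["model_cloud"], depth_cloud,
        max_correspondence_distance=0.02, init=T_init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # The refined transform places the puppet's 3D model in the virtual set.
    return best_puppet["name"], icp.transformation
```

In a real-time setting one would likely seed the ICP step with the previous frame's pose rather than rerunning feature matching from scratch every frame; the paper reports interactive rates, so some such temporal reuse is plausible, though the details above remain a sketch.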


Supplemental Material

paper_0221-file3.mp4 (MP4, 43.5 MB)

References

  1. Autodesk. 123D Catch. http://www.123dapp.com/catch, 2012.Google ScholarGoogle Scholar
  2. Autodesk. 3ds Max. http://usa.autodesk.com/3ds-max/, 2012.Google ScholarGoogle Scholar
  3. Autodesk. Maya. http://usa.autodesk.com/maya/, 2012.Google ScholarGoogle Scholar
  4. Avrahami, D., Wobbrock, J. O., and Izadi, S. Portico: tangible interaction on and around a tablet. In Proc. UIST (2011), 347--356. Google ScholarGoogle ScholarDigital LibraryDigital Library
  5. Barnes, C., Jacobs, D. E., Sanders, J., Goldman, D. B., Rusinkiewicz, S., Finkelstein, A., and Agrawala, M. Video Puppetry: A performative interface for cutout animation. ACM TOG (Proc. SIGGRAPH) 27, 5 (2008), 124:1--124:9. Google ScholarGoogle ScholarDigital LibraryDigital Library
  6. Besl, P., and McKay, N. A method for registration of 3-D shapes. IEEE PAMI 14 (1992), 239--256. Google ScholarGoogle ScholarDigital LibraryDigital Library
  7. Blender Foundation. Blender. http://www.blender.org, 2012.Google ScholarGoogle Scholar
  8. Cline, D., Jeschke, S., White, K., Razdan, A., and Wonka, P. Dart throwing on surfaces. Computer Graphics Forum 28, 4 (2009), 1217--1226. Google ScholarGoogle ScholarDigital LibraryDigital Library
  9. Dontcheva, M., Yngve, G., and Popović, Z. Layered acting for character animation. ACM TOG (Proc. SIGGRAPH) 22 (2003), 409--416. Google ScholarGoogle ScholarDigital LibraryDigital Library
  10. Freedman, B., Shpunt, A., Machline, M., and Arieli, Y. Depth mapping using projected patterns. Patent. US8150142 (2012).Google ScholarGoogle Scholar
  11. Gallo, L., Placitelli, A., and Ciampi, M. Controller-free exploration of medical image data: Experiencing the kinect. In Proc. CBMS (2011), 1--6. Google ScholarGoogle ScholarDigital LibraryDigital Library
  12. Google. Google 3D Warehouse. http://sketchup.google.com/3dwarehouse/, 2012.Google ScholarGoogle Scholar
  13. Heindl, C., and Kopf, C. ReconstructMe. http://reconstructme.net, 2012.Google ScholarGoogle Scholar
  14. Horn, B. K. P. Closed-form solution of absolute orientation using unit quaternions. JOSA A 4, 4 (1987), 629--642.Google ScholarGoogle ScholarCross RefCross Ref
  15. Iason Oikonomidis, N. K., and Argyros, A. Efficient model-based 3D tracking of hand articulations using kinect. In Proc. BMVC (2011), 101.1--101.11.Google ScholarGoogle ScholarCross RefCross Ref
  16. Ishii, H., and Ullmer, B. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proc. CHI (1997), 234--241. Google ScholarGoogle ScholarDigital LibraryDigital Library
  17. Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A., and Fitzgibbon, A. Kinectfusion: real-time 3D reconstruction and interaction using a moving depth camera. In Proc. UIST (2011), 559--568. Google ScholarGoogle ScholarDigital LibraryDigital Library
  18. Johnson, M. P., Wilson, A., Blumberg, B., Kline, C., and Bobick, A. Sympathetic interfaces: using a plush toy to direct synthetic characters. In Proc. CHI (1999), 152--158. Google ScholarGoogle ScholarDigital LibraryDigital Library
  19. Joung, J. H., An, K. H., Kang, J. W., Chung, M. J., and Yu, W. 3d environment reconstruction using modified color ICP algorithm by fusion of a camera and a 3D laser range finder. In Proc. IROS (2009), 3082--3088. Google ScholarGoogle ScholarDigital LibraryDigital Library
  20. Kato, H., Billinghurst, M., Poupyrev, I., Imamoto, K., and Tachibana, K. Virtual object manipulation on a table-top ar environment. In Proc. ISAR (2000), 111 --119.Google ScholarGoogle ScholarCross RefCross Ref
  21. Klemmer, S. R., Li, J., Lin, J., and Landay, J. A. Papier-mache: Toolkit support for tangible input. In Proc. CHI (2004), 399--406. Google ScholarGoogle ScholarDigital LibraryDigital Library
  22. Lee, G. A., Kim, G. J., and Billinghurst, M. Immersive authoring: What you experience is what you get (wyxiwyg). Comm. ACM 48, 7 (2005), 76--81. Google ScholarGoogle ScholarDigital LibraryDigital Library
  23. Lowe, D. G. Object recognition from local scale-invariant features. In Proc. ICCV (1999), 1150--1157. Google ScholarGoogle ScholarDigital LibraryDigital Library
  24. Numaguchi, N., Nakazawa, A., Shiratori, T., and Hodgins, J. K. A puppet interface for retrieval of motion capture data. In Proc. SCA (2011), 157--166. Google ScholarGoogle ScholarDigital LibraryDigital Library
  25. Oore, S., Terzopoulos, D., and Hinton, G. E. A desktop input device and interface for interactive 3D character animation. In Proc. Graphics Interface (2002), 133--140.Google ScholarGoogle Scholar
  26. OpenNI Organization. OpenNI. http://openni.org, 2012.Google ScholarGoogle Scholar
  27. Schmidt, R., and Singh, K. meshmixer: an interface for rapid mesh composition. In ACM TOG (Proc. SIGGRAPH) (2010), 6:1. Google ScholarGoogle ScholarDigital LibraryDigital Library
  28. Shotton, J., Fitzgibbon, A., Cook, M., Sharp, T., Finocchio, M., Moore, R., Kipman, A., and Blake, A. Real-time human pose recognition in parts from single depth images. In Proc. CVPR (2011), 1297 --1304. Google ScholarGoogle ScholarDigital LibraryDigital Library
  29. Sturman, D. J. Computer puppetry. IEEE Computer Graphics and Applications 18, 1 (1998), 38--45. Google ScholarGoogle ScholarDigital LibraryDigital Library
  30. Tomasi, C., and Manduchi, R. Bilateral filtering for gray and color images. In Proc. IEEE Int. Conf. on Computer Vision (1998), 839 --846. Google ScholarGoogle ScholarDigital LibraryDigital Library
  31. Wang, R. Y., and Popović, J. Real-time hand-tracking with a color glove. In ACM TOG (Proc. SIGGRAPH) (2009), 63:1--63:8. Google ScholarGoogle ScholarDigital LibraryDigital Library
  32. Willow Garage. Robotics Operating System. http://ros.org, 2012.Google ScholarGoogle Scholar
  33. Wu, C. SiftGPU: A GPU implementation of scale invariant feature transform (SIFT). http://cs.unc.edu/ ccwu/siftgpu, 2007.Google ScholarGoogle Scholar
  34. Ziola, R., Grampurohit, S., Landes, N., Fogarty, J., and Harrison, B. Examining interaction with general-purpose object recognition in LEGO OASIS. In Proc. IEEE VL/HCC (2011), 65--68.Google ScholarGoogle ScholarCross RefCross Ref

Published in

UIST '12: Proceedings of the 25th annual ACM symposium on User interface software and technology
October 2012
608 pages
ISBN: 9781450315807
DOI: 10.1145/2380116

Copyright © 2012 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States
