3D puppetry: a kinect-based interface for 3D animation

ABSTRACT
We present a system for producing 3D animations using physical objects (i.e., puppets) as input. Puppeteers can load 3D models of familiar rigid objects, including toys, into our system and use them as puppets for an animation. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Our system uses a combination of image-feature matching and 3D shape matching to identify and track the physical puppets. It then renders the corresponding 3D models into a virtual set. Our system operates in real time so that the puppeteer can immediately see the resulting animation and make adjustments on the fly. It also provides 6D virtual camera and lighting controls, which the puppeteer can adjust before, during, or after a performance. Finally, our system supports layered animations to help puppeteers produce animations in which several characters move at the same time. We demonstrate the accessibility of our system with a variety of animations created by puppeteers with no prior animation experience.
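The abstract's "3D shape matching" step for recovering a puppet's 6D pose against depth data is commonly realized with iterative closest point (ICP) registration, which alternates nearest-neighbour correspondence with a closed-form (Horn/Kabsch-style) rigid update. The sketch below is a minimal, hypothetical illustration of that general technique in NumPy, not the authors' implementation; the function names and parameters are invented for this example.

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form least-squares rigid fit: find R, t minimizing
    sum ||R @ src[i] + t - dst[i]||^2 (Kabsch/Horn-style, via SVD)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def icp(model, scan, iters=30):
    """Minimal point-to-point ICP: match each transformed model point to
    its nearest scan point, re-solve the rigid fit, and repeat."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = model @ R.T + t
        # Brute-force nearest neighbours; fine for a small sketch,
        # a real-time tracker would use a k-d tree or GPU search.
        d2 = ((moved[:, None, :] - scan[None, :, :]) ** 2).sum(-1)
        corr = scan[d2.argmin(axis=1)]
        R, t = rigid_align(model, corr)
    return R, t
```

Like most ICP variants, this sketch needs a reasonable initial pose to converge, which is one reason the system pairs shape matching with image-feature matching for identification and coarse localization.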
Index Terms
- 3D puppetry: a kinect-based interface for 3D animation