Puppeteer: Exploring Intuitive Hand Gestures and Upper-Body Postures for Manipulating Human Avatar Actions

ABSTRACT
Body-controlled avatars offer an intuitive way to control virtual avatars in real time but require a larger physical space and more user effort. In contrast, hand-controlled avatars allow more dexterous and less fatiguing manipulation within a close-range space, but provide fewer sensory cues than body-based methods. This paper investigates the differences between the two manipulation styles and explores the possibility of combining them. We first performed a formative study to understand when and how users prefer using their hands versus their bodies to represent avatar actions in popular video games. Based on a survey of top video games, we decided to focus on the motions of human avatars. We also found that players used their bodies to represent avatar actions but switched to their hands when actions were too unrealistic or exaggerated to mimic with the body (e.g., flying in the sky, rolling over quickly). Hand gestures also provide an alternative to lower-body motions when players want to sit while gaming and do not want to expend extensive effort to move their avatars. Hence, we focused on the design of hand gestures and upper-body postures. We present Puppeteer, an input prototype system that allows players to directly control their avatars through intuitive hand gestures and upper-body postures. We selected 17 avatar actions identified in the formative study and conducted a gesture elicitation study in which 12 participants designed the most representative hand gesture and upper-body posture for each action. We then implemented a prototype system that uses the MediaPipe framework to detect keypoints and a self-trained model to recognize the 17 hand gestures and 17 upper-body postures. Finally, three applications demonstrate the interactions enabled by Puppeteer.
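The abstract describes a pipeline in which MediaPipe keypoints are fed to a trained recognizer. The paper does not specify the model, so the following is only a minimal illustrative sketch, assuming MediaPipe-style 2D hand landmarks (wrist at index 0) and a hypothetical nearest-template classifier rather than the authors' actual self-trained model:

```python
import math

# Hypothetical sketch, not the authors' implementation: classify a hand
# gesture by normalizing 2D landmarks and matching against per-gesture
# template centroids. Landmarks follow the MediaPipe convention of the
# wrist being landmark 0; everything else here is an assumption.

def normalize(landmarks):
    """Translate so the wrist (landmark 0) is at the origin and scale by
    the farthest landmark's distance, making the feature vector
    position- and size-invariant."""
    wx, wy = landmarks[0]
    shifted = [(x - wx, y - wy) for x, y in landmarks]
    scale = max(math.hypot(x, y) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

def distance(a, b):
    """Euclidean distance between two flattened landmark vectors."""
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                         for (ax, ay), (bx, by) in zip(a, b)))

def classify(landmarks, templates):
    """Return the label of the nearest gesture template.

    `templates` maps a gesture label to an already-normalized list of
    landmarks (e.g., the mean pose elicited for that gesture)."""
    feat = normalize(landmarks)
    return min(templates, key=lambda label: distance(feat, templates[label]))
```

In practice the paper's recognizer is a trained model over 17 hand gestures and 17 upper-body postures; this sketch only shows how normalized keypoints can be mapped to discrete gesture labels.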