DOI: 10.1145/3562939.3565609

Puppeteer: Exploring Intuitive Hand Gestures and Upper-Body Postures for Manipulating Human Avatar Actions

Published: 29 November 2022

ABSTRACT

Body-controlled avatars offer an intuitive way to control virtual avatars in real time, but they demand more physical space and greater user effort. Hand-controlled avatars, in contrast, allow dexterous, less fatiguing manipulation within a close-range space, though they provide fewer sensory cues than body-based control. This paper investigates the differences between the two approaches and explores how they might be combined. We first conducted a formative study to understand when and how users prefer using their hands versus their bodies to represent avatar actions in currently popular video games; based on this survey of top video games, we chose to focus on human avatar motions. We found that players used their bodies to represent avatar actions but switched to their hands when the actions were too unrealistic or exaggerated to mimic with the body (e.g., flying through the sky, rolling over quickly). Hand gestures also offer an alternative to lower-body motions when players want to sit while gaming and avoid the effort of moving their whole bodies. We therefore focused on designing hand gestures and upper-body postures. We present Puppeteer, a prototype input system that lets players directly control their avatars through intuitive hand gestures and upper-body postures. For the 17 avatar actions identified in the formative study, we conducted a gesture elicitation study in which 12 participants designed the hand gestures and upper-body postures that best represent each action. We then implemented a prototype system that uses the MediaPipe framework to detect keypoints and a self-trained model to recognize the 17 hand gestures and 17 upper-body postures. Finally, three applications demonstrate the interactions Puppeteer enables.
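The abstract describes a pipeline that maps per-frame keypoints (detected with MediaPipe) to one of 17 gesture classes via a self-trained model. The paper's actual model is not reproduced here; as a minimal sketch, assuming 21 (x, y, z) hand landmarks per frame like those MediaPipe Hands produces, a nearest-centroid classifier over normalized keypoint vectors illustrates the keypoints-to-gesture step. All names, the synthetic templates, and the centroid approach below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def normalize_keypoints(landmarks):
    """Make a 21x3 keypoint array translation- and scale-invariant:
    subtract the wrist point (index 0), divide by the largest wrist
    distance, and flatten into a 63-dimensional feature vector."""
    pts = np.asarray(landmarks, dtype=float)
    pts = pts - pts[0]
    scale = np.linalg.norm(pts, axis=1).max()
    return (pts / scale).ravel()

class NearestCentroidGestureClassifier:
    """Hypothetical stand-in for the paper's self-trained recognizer:
    one centroid per gesture class in normalized-keypoint space;
    a frame is labeled with its nearest centroid."""
    def __init__(self):
        self.centroids = {}

    def fit(self, samples):
        # samples: {gesture label: list of 21x3 landmark arrays}
        for label, arrays in samples.items():
            feats = np.stack([normalize_keypoints(a) for a in arrays])
            self.centroids[label] = feats.mean(axis=0)

    def predict(self, landmarks):
        feat = normalize_keypoints(landmarks)
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(self.centroids[lbl] - feat))

# Synthetic templates stand in for MediaPipe Hands output (21 (x, y, z)
# keypoints per frame); the real system would read these from a camera.
rng = np.random.default_rng(0)
template = {
    "fist": np.linspace(0.0, 1.0, 63).reshape(21, 3),
    "open_palm": np.sin(np.linspace(0.0, 3.0, 63)).reshape(21, 3),
}
training = {lbl: [t + rng.normal(0, 0.01, t.shape) for _ in range(10)]
            for lbl, t in template.items()}
clf = NearestCentroidGestureClassifier()
clf.fit(training)
```

Because the features are translation- and scale-normalized, the classifier tolerates the hand moving toward or away from the camera; in the full system this per-frame label would then drive the corresponding avatar action.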


Supplemental Material

• Puppeteer_full_video.mp4 (mp4, 141.6 MB)
• Puppeteer_30s_video.mp4 (mp4, 45.7 MB)


Published in

VRST '22: Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology
November 2022, 466 pages
ISBN: 9781450398893
DOI: 10.1145/3562939
Copyright © 2022 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers

• research-article
• Research
• Refereed limited

Acceptance Rates

Overall acceptance rate: 66 of 254 submissions, 26%
