DOI: 10.1145/3670947.3670965

JollyGesture: Exploring Dual-Purpose Gestures and Gesture Guidance in VR Presentations

Published: 21 September 2024

Abstract

Virtual reality (VR) offers new opportunities for presenters to use expressive body language to engage their audience. Yet most VR presentation systems have adopted control mechanisms that mimic those found in face-to-face presentation systems. We explore gestures that serve a dual purpose: for the audience, a communicative purpose; for the presenter, a control purpose that alters content in slides. To support presenters, we provide guidance on which gestures are available and what their effects are. We realize our design approach in JollyGesture, a VR technology probe that recognizes dual-purpose gestures in a presentation scenario. We evaluate our approach through a design study with 12 participants who, in addition to using JollyGesture to deliver a mock presentation, were asked to imagine gestures with the same communicative and control purposes, both before and after being exposed to our probe. The study revealed several new design avenues valuable for VR presentation system design: expressive and coarse-grained communicative gestures, as well as subtle and hidden gestures intended for system control. Our work suggests that future VR presentation systems that embrace expressive body language will face design tensions relating to task loading and authenticity.

Supplemental Material

MP4 File
Video figure; Example presentation video

Published In

GI '24: Proceedings of the 50th Graphics Interface Conference
June 2024
437 pages
ISBN:9798400718281
DOI:10.1145/3670947

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Gestural input
  2. Presentation
  3. Virtual reality

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Meta Reality Labs
  • Natural Sciences and Engineering Research Council
  • Faculty of Information, University of Toronto

Conference

GI '24: Graphics Interface
June 3–6, 2024
Halifax, NS, Canada

Acceptance Rates

Overall Acceptance Rate 206 of 508 submissions, 41%
