
Comparing head gesture, hand gesture and gamepad interfaces for answering Yes/No questions in virtual environments

  • Original Article
  • Published in: Virtual Reality

A Correction to this article was published on 09 December 2019

This article has been updated

Abstract

A potential application of gesture recognition algorithms is as interfaces for interacting with virtual environments. However, the performance and user preference of such interfaces in the context of virtual reality (VR) have rarely been studied. In the present paper, we focus on a typical VR interaction scenario, answering Yes/No questions, to compare the performance and user preference of three types of interfaces: a head gesture interface, a hand gesture interface and a conventional gamepad interface. We designed a memorization task in which participants memorized several everyday objects presented in a virtual room and later, with the objects absent, used the given interfaces to answer questions about whether they had seen a specific object. The performance of the interfaces was evaluated in terms of real-time accuracy and response time, and a user interface questionnaire was used to reveal user preference. The results showed that head gesture is a very promising interface, which can easily be added to existing VR systems for answering Yes/No questions and giving other binary responses in virtual environments.
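The two performance metrics named in the abstract, real-time accuracy and response time, can be computed directly from per-trial logs. A minimal sketch follows; the record layout and names (`Trial`, `score`) are ours for illustration, not the authors' code:

```python
# Hypothetical sketch: scoring one interface condition by the two metrics
# used in the study, accuracy and mean response time, from Yes/No trial logs.
from dataclasses import dataclass


@dataclass
class Trial:
    expected: bool          # was the probed object actually shown earlier?
    answered: bool          # participant's Yes/No response via the interface
    response_time_s: float  # seconds from question onset to response


def score(trials):
    """Return (accuracy, mean response time in seconds) for a condition."""
    correct = sum(t.answered == t.expected for t in trials)
    mean_rt = sum(t.response_time_s for t in trials) / len(trials)
    return correct / len(trials), mean_rt


trials = [Trial(True, True, 1.2), Trial(False, True, 2.0), Trial(False, False, 1.6)]
accuracy, mean_rt = score(trials)  # accuracy = 2/3, mean RT = 1.6 s
```

The same scoring would be applied separately to the head gesture, hand gesture and gamepad conditions before comparing them.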


Change history

  • 09 December 2019

    In the original publication of the article, the set of Equations 1 was wrongly typeset.


Author information

Corresponding author

Correspondence to Jingbo Zhao.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original version of this article has been revised: Equation 1 has been corrected.


About this article


Cite this article

Zhao, J., Allison, R.S. Comparing head gesture, hand gesture and gamepad interfaces for answering Yes/No questions in virtual environments. Virtual Reality 24, 515–524 (2020). https://doi.org/10.1007/s10055-019-00416-7
