ABSTRACT
In this paper we present DigiTap, a wrist-worn device designed for symbolic input in virtual and augmented reality (VR/AR) environments. DigiTap robustly senses thumb-to-finger taps on the four fingertips and the eight minor knuckles. A tap is detected by an accelerometer, which triggers the capture of an image sequence with a small wrist-mounted camera; an image-processing pipeline then extracts the tap position from the images with low computational effort. The device is therefore very energy efficient and could potentially be integrated into a smartwatch-like device, enabling unobtrusive, always-available, eyes-free input. To demonstrate the feasibility of our approach, we conducted an initial user study with our prototype device. The study evaluated the suitability of the twelve tapping locations and identified the most prominent sources of error. Our prototype system correctly classified 92% of the input locations.
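The energy-saving idea in the abstract is an event-driven pipeline: a cheap always-on accelerometer check gates the expensive camera capture and image processing. The sketch below illustrates that control flow only; the threshold value, frame count, location names, and the placeholder classifier are all assumptions for illustration, not the paper's actual parameters or algorithm.

```python
from typing import Callable, List, Optional

# Hypothetical labels for the twelve tap locations:
# four fingertips plus two minor knuckles per finger.
FINGERS = ["index", "middle", "ring", "little"]
LOCATIONS = [f"tip_{f}" for f in FINGERS] + \
            [f"knuckle_{f}_{j}" for f in FINGERS for j in (1, 2)]

TAP_THRESHOLD_G = 1.5   # assumed acceleration-magnitude threshold, in g
FRAMES_PER_TAP = 3      # assumed number of frames grabbed per detected tap


def detect_tap(accel_magnitude: float) -> bool:
    """Cheap always-on check: a tap appears as a sharp acceleration spike."""
    return accel_magnitude > TAP_THRESHOLD_G


def capture_frames(camera: Callable[[], object],
                   n: int = FRAMES_PER_TAP) -> List[object]:
    """Run the camera only after a tap was detected, keeping idle power low."""
    return [camera() for _ in range(n)]


def classify_tap(frames: List[object]) -> str:
    """Stand-in for the image-processing pipeline that maps the captured
    frames to one of the twelve tap locations (the real method is in the
    paper; here we just return a fixed label)."""
    return LOCATIONS[0]


def process_sample(accel_magnitude: float,
                   camera: Callable[[], object]) -> Optional[str]:
    """One step of the event loop: capture and classify only on a spike."""
    if not detect_tap(accel_magnitude):
        return None  # camera stays off for sub-threshold motion
    return classify_tap(capture_frames(camera))
```

The point of the gating structure is that `capture_frames` and `classify_tap` are never invoked for ordinary wrist motion, so the dominant cost (imaging) is paid only per tap, not per accelerometer sample.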