DOI: 10.1145/3139131.3139146
Research article, VRST '17 conference proceedings

CheekInput: turning your cheek into an input surface by embedded optical sensors on a head-mounted display

Published: 08 November 2017

Abstract

In this paper, we propose "CheekInput", a novel technology for head-mounted displays (HMDs) that senses touch gestures by detecting skin deformation. We attached multiple photo-reflective sensors to the bottom front frame of the HMD. Because these sensors measure the distance between the frame and the cheeks, the system can detect the deformation of a cheek when fingers touch the skin surface. A Support Vector Machine classifies four directional gestures per cheek: pushing the skin up, down, left, or right. Combining the gestures of both cheeks yields 16 possible gestures. To evaluate recognition accuracy, we conducted a user study. The results show that CheekInput achieved 80.45% recognition accuracy when gestures were made by touching both cheeks with both hands, and 74.58% when touching both cheeks with one hand.
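The combination scheme above (four directional gestures per cheek, combined across both cheeks into 16 gestures) can be sketched as follows. The paper uses an SVM trained on photo-reflective sensor readings; the snippet below substitutes a simple hand-written signature matcher as a stand-in, and the sensor positions (`upper`, `lower`, `inner`, `outer`) and signature values are illustrative assumptions, not the authors' actual sensor layout or model.

```python
from itertools import product

# Four directional gestures per cheek, as described in the abstract.
DIRECTIONS = ["up", "down", "left", "right"]

# One direction per cheek: 4 x 4 = 16 combined gestures.
GESTURES = list(product(DIRECTIONS, DIRECTIONS))
assert len(GESTURES) == 16

def classify_direction(sensor_deltas):
    """Toy stand-in for the paper's SVM classifier.

    sensor_deltas: dict mapping a (hypothetical) sensor position to the
    change in measured frame-to-cheek distance. Picks the direction
    whose signature correlates best with the reading.
    """
    # Hypothetical deformation signatures; the real system learns the
    # mapping from training data instead of hard-coding it.
    signatures = {
        "up":    {"upper": 1.0, "lower": -0.2, "inner": 0.0, "outer": 0.0},
        "down":  {"upper": -0.2, "lower": 1.0, "inner": 0.0, "outer": 0.0},
        "left":  {"upper": 0.0, "lower": 0.0, "inner": 1.0, "outer": -0.2},
        "right": {"upper": 0.0, "lower": 0.0, "inner": -0.2, "outer": 1.0},
    }
    def score(direction):
        sig = signatures[direction]
        return sum(sig[k] * sensor_deltas.get(k, 0.0) for k in sig)
    return max(DIRECTIONS, key=score)

# A reading dominated by the upper sensors classifies as "up".
print(classify_direction({"upper": 0.9, "lower": 0.1, "inner": 0.0, "outer": 0.0}))
```

In a real implementation the signature matcher would be replaced by an SVM (or any multi-class classifier) trained on per-user recordings of the raw sensor distances.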



Published In
    VRST '17: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology
    November 2017
    437 pages
    ISBN:9781450355483
    DOI:10.1145/3139131

Publisher: Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. OST-HMD
    2. photo-reflective sensor
    3. skin interface

Qualifiers: Research-article
Funding Sources: JSPS
Conference: VRST '17
Overall Acceptance Rate: 66 of 254 submissions, 26%

Article Metrics: 46 downloads in the last 12 months, 5 in the last 6 weeks (reflects downloads up to 05 Mar 2025).

Cited By
    • (2025) SkinRing: Ring-shaped Device Enabling Wear Direction-Independent Gesture Input on Side of Finger. 2025 IEEE/SICE International Symposium on System Integration (SII), 386-392. DOI: 10.1109/SII59315.2025.10871054. Published 21 Jan 2025.
    • (2024) Designing More Private and Socially Acceptable Hand-to-Face Gestures for Heads-Up Computing. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 637-639. DOI: 10.1145/3675094.3678994. Published 5 Oct 2024.
    • (2024) Understanding Novice Users' Mental Models of Gesture Discoverability and Designing Effective Onboarding. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 290-295. DOI: 10.1145/3675094.3678370. Published 5 Oct 2024.
    • (2024) iFace: Hand-Over-Face Gesture Recognition Leveraging Impedance Sensing. Proceedings of the Augmented Humans International Conference 2024, 131-137. DOI: 10.1145/3652920.3652923. Published 4 Apr 2024.
    • (2024) Exploring the Design Space of Input Modalities for Working in Mixed Reality on Long-haul Flights. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 2267-2285. DOI: 10.1145/3643834.3661560. Published 1 Jul 2024.
    • (2024) RimSense. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(4), 1-24. DOI: 10.1145/3631456. Published 12 Jan 2024.
    • (2024) Do I Just Tap My Headset? Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(4), 1-28. DOI: 10.1145/3631451. Published 12 Jan 2024.
    • (2023) Surveying the Social Comfort of Body, Device, and Environment-Based Augmented Reality Interactions in Confined Passenger Spaces Using Mixed Reality Composite Videos. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(3), 1-25. DOI: 10.1145/3610923. Published 27 Sep 2023.
    • (2023) TouchLog: Finger Micro Gesture Recognition Using Photo-Reflective Sensors. Proceedings of the 2023 ACM International Symposium on Wearable Computers, 92-97. DOI: 10.1145/3594738.3611371. Published 8 Oct 2023.
    • (2023) DUMask: A Discrete and Unobtrusive Mask-Based Interface for Facial Gestures. Proceedings of the Augmented Humans International Conference 2023, 255-266. DOI: 10.1145/3582700.3582726. Published 12 Mar 2023.
