DOI: 10.1145/2935334.2935389

Bitey: an exploration of tooth click gestures for hands-free user interface control

Published: 06 September 2016

Abstract

We present Bitey, a subtle, wearable device for enabling input via tooth clicks. Based on a bone-conduction microphone worn just above the ears, Bitey recognizes the click sounds from up to five different pairs of teeth, allowing fully hands-free interface control. We explore the space of tooth input and show that Bitey allows for a high degree of accuracy in distinguishing between different tooth clicks, with up to 94% accuracy under laboratory conditions for five different tooth pairs. Finally, we illustrate Bitey's potential through two demonstration applications: a list navigation and selection interface and a keyboard input method.
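
The abstract does not spell out Bitey's recognition pipeline, so the sketch below is purely illustrative: it assumes a conventional audio-classification approach (MFCC summary features from isolated click clips plus an SVM), which is not necessarily the authors' method, and the clips/ directory layout, file names, and tooth-pair labels are hypothetical.

```python
# Illustrative sketch only: Bitey's actual pipeline is not described in this
# abstract. Assumed approach: MFCC summary features per isolated click clip,
# classified with an SVM into one of five hypothetical tooth-pair classes.
import glob

import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

TOOTH_PAIRS = ["front", "left-canine", "right-canine", "left-molar", "right-molar"]

def click_features(path):
    """Summarize one click clip as a fixed-length vector of MFCC means and stds."""
    y, sr = librosa.load(path, sr=None)  # keep the recording's native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical dataset layout: clips/<tooth-pair>/<n>.wav, one click per file.
X, y = [], []
for label, pair in enumerate(TOOTH_PAIRS):
    for path in sorted(glob.glob(f"clips/{pair}/*.wav")):
        X.append(click_features(path))
        y.append(label)

# 5-fold cross-validation gives a rough per-click accuracy estimate, analogous
# to (but not the same protocol as) the paper's reported laboratory accuracy.
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, np.array(X), np.array(y), cv=5)
print(f"mean CV accuracy: {scores.mean():.2%}")
```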




Published In
MobileHCI '16: Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services
September 2016, 567 pages
ISBN: 9781450344081
DOI: 10.1145/2935334

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. audio interfaces
    2. bio-acoustics
    3. gestures
    4. subtle interfaces
    5. tooth input
    6. wearable computing

    Qualifiers

    • Research-article

    Conference

    MobileHCI '16

    Acceptance Rates

    Overall Acceptance Rate 202 of 906 submissions, 22%

Article Metrics

• Downloads (last 12 months): 86
• Downloads (last 6 weeks): 2

Reflects downloads up to 26 Jan 2025

    Cited By

• (2024) "Reviewing the potential of hearables for the assessment of bruxism." at - Automatisierungstechnik 72(5), 389–398. DOI: 10.1515/auto-2024-0029. Online publication date: 7 May 2024.
• (2024) "Exploring the 'EarSwitch' concept: a novel ear based control method for assistive technology." Journal of NeuroEngineering and Rehabilitation 21(1). DOI: 10.1186/s12984-024-01500-z. Online publication date: 2 Dec 2024.
• (2024) "ReHEarSSE: Recognizing Hidden-in-the-Ear Silently Spelled Expressions." Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–16. DOI: 10.1145/3613904.3642095. Online publication date: 11 May 2024.
• (2024) "TeethFa: Real-Time, Hand-Free Teeth Gestures Interaction Using Fabric Sensors." IEEE Internet of Things Journal 11(21), 35223–35237. DOI: 10.1109/JIOT.2024.3434657. Online publication date: 1 Nov 2024.
• (2024) "Toward Wearables for Bruxism Detection: Voluntary Oral Behaviors Sound Recorded Across the Head Depend on Transducer Placement." Clinical and Experimental Dental Research 10(5). DOI: 10.1002/cre2.70001. Online publication date: 22 Sep 2024.
• (2023) "Pneunocchio: A playful nose augmentation for facilitating embodied representation." Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1–3. DOI: 10.1145/3586182.3616651. Online publication date: 29 Oct 2023.
• (2023) "DUMask: A Discrete and Unobtrusive Mask-Based Interface for Facial Gestures." Proceedings of the Augmented Humans International Conference 2023, 255–266. DOI: 10.1145/3582700.3582726. Online publication date: 12 Mar 2023.
• (2023) "ParaGlassMenu: Towards Social-Friendly Subtle Interactions in Conversations." Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–21. DOI: 10.1145/3544548.3581065. Online publication date: 19 Apr 2023.
• (2023) "Unobtrusive interaction: a systematic literature review and expert survey." Human–Computer Interaction 39(5-6), 380–416. DOI: 10.1080/07370024.2022.2162404. Online publication date: Mar 2023.
• (2023) "Unvoiced Vowel Recognition Using Active Bio-Acoustic Sensing for Silent Speech Interaction." Artificial Intelligence in HCI, 150–161. DOI: 10.1007/978-3-031-35891-3_10. Online publication date: 9 Jul 2023.
