A Mouth Gesture Interface Featuring a Mutual-Capacitance Sensor Embedded in a Surgical Mask

  • Conference paper
Human-Computer Interaction. Multimodal and Natural Interaction (HCII 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12182)

Included in the conference series: HCI International (HCII)

Abstract

We developed a mouth gesture interface featuring a mutual-capacitance sensor embedded in a surgical mask. This wearable, hands-free interface recognizes non-verbal mouth gestures; because the mask hides the mouth, bystanders can neither see nor eavesdrop on what the user is doing with the device. We confirm the feasibility of our approach and demonstrate the accuracy of mouth shape recognition. We present two applications in which mouth shapes are used to zoom in or out and to select an application from a menu.
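
The paper's notes point to scikit-learn, which suggests a conventional machine-learning pipeline for mouth shape recognition. The sketch below is a minimal, hypothetical illustration of such a pipeline: it trains an SVM on flattened mutual-capacitance frames. The electrode grid size, the gesture labels, and the read_frame() stub are all assumptions filled with synthetic data, not details from the paper.

    # Hypothetical sketch: classify mouth shapes from mutual-capacitance frames.
    # Grid size, gesture labels, and read_frame() are assumptions, not the
    # authors' pipeline; synthetic data stands in for real mask recordings.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    N_TX, N_RX = 4, 4                                        # assumed electrode grid
    GESTURES = ["neutral", "open", "pout", "left", "right"]  # assumed labels

    def read_frame(rng):
        """Stand-in for one mutual-capacitance frame (N_TX x N_RX values)."""
        return rng.normal(size=(N_TX, N_RX))

    # Build a toy dataset: a class-dependent offset stands in for the way
    # different mouth shapes change the capacitance pattern on the grid.
    rng = np.random.default_rng(0)
    X, y = [], []
    for label_idx, _ in enumerate(GESTURES):
        for _ in range(100):
            frame = read_frame(rng) + label_idx
            X.append(frame.ravel())          # flatten the grid into a feature vector
            y.append(label_idx)
    X, y = np.array(X), np.array(y)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    # Standardize each channel, then fit an RBF-kernel SVM.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

In a real system, read_frame() would return calibrated capacitance values from the transmit/receive electrodes in the mask rather than synthetic noise.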


Notes

  1. https://reference.digilentinc.com/reference/instrumentation/analog-discovery-2/start.
  2. https://docs.scipy.org/.
  3. https://scikit-learn.org/.


Author information

Corresponding author

Correspondence to Yutaro Suzuki.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Suzuki, Y., Sekimori, K., Yamato, Y., Yamasaki, Y., Shizuki, B., Takahashi, S. (2020). A Mouth Gesture Interface Featuring a Mutual-Capacitance Sensor Embedded in a Surgical Mask. In: Kurosu, M. (ed.) Human-Computer Interaction. Multimodal and Natural Interaction. HCII 2020. Lecture Notes in Computer Science, vol. 12182. Springer, Cham. https://doi.org/10.1007/978-3-030-49062-1_10

  • DOI: https://doi.org/10.1007/978-3-030-49062-1_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-49061-4

  • Online ISBN: 978-3-030-49062-1

  • eBook Packages: Computer Science, Computer Science (R0)
