DOI: 10.1145/3025453.3025692
research-article

EarFieldSensing: A Novel In-Ear Electric Field Sensing to Enrich Wearable Gesture Input through Facial Expressions

Published: 02 May 2017

Abstract

EarFieldSensing (EarFS) is a novel input method for mobile and wearable computing that uses facial expressions. Facial muscle movements induce both electric field changes and physical deformations, which are detectable with electrodes placed inside the ear canal. The ear-plug form factor is unobtrusive and exploits the close proximity to the face for facial gesture recognition. We collected 25 face-related gestures and used them to compare the performance of several electric sensing technologies (EMG, CS, EFS, EarFS) with varying electrode setups. Our wearable, fine-tuned electric field sensing device employs differential amplification to effectively cancel out environmental noise while remaining sensitive to the small electric field changes caused by facial movements and to artifacts from ear canal deformations. Comparing a stationary with a mobile scenario, we found that EarFS continues to perform better in the mobile scenario. Quantitative results show that EarFS detects a set of 5 facial gestures with a precision of 90% while sitting and 85.2% while walking. We provide detailed instructions to enable replication of our low-cost sensing device; applying it to other positions on the body will also allow sensing a variety of further gestures and activities.
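
The abstract describes the sensing chain only at a high level: a differential in-ear signal is cleaned of environmental noise, and a small set of facial gestures is classified from it. The sketch below is a minimal, hypothetical illustration of such a pipeline in Python, not the authors' published implementation; the sampling rate, filter band, feature set, classifier choice, and gesture labels are all assumptions made for the example.

```python
# Hypothetical sketch of an EarFS-style processing chain: differential noise
# cancellation, band-pass filtering, windowed time-domain features, and a
# 5-gesture classifier. All parameters are illustrative assumptions, not the
# values reported in the paper.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import RandomForestClassifier

FS = 250           # assumed sampling rate (Hz)
WINDOW = FS        # one-second analysis window
GESTURES = ["smile", "wink", "raise_brows", "open_mouth", "neutral"]  # illustrative label set


def differential(in_ear: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Subtract a reference channel to suppress common-mode environmental
    noise, mimicking in software what a differential amplifier does in
    hardware."""
    return in_ear - reference


def bandpass(signal: np.ndarray, low: float = 0.5, high: float = 45.0) -> np.ndarray:
    """Keep the band where slow facial-movement potentials and ear-canal
    deformation artifacts are expected (cut-off frequencies are assumptions)."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, signal)


def window_features(window: np.ndarray) -> np.ndarray:
    """A few generic time-domain features per analysis window."""
    return np.array([
        window.mean(),
        window.std(),
        np.abs(np.diff(window)).mean(),   # mean absolute first difference
        window.max() - window.min(),      # peak-to-peak amplitude
    ])


def train_classifier(raw_recordings, reference_recordings, labels):
    """raw_recordings / reference_recordings: lists of 1-D arrays, one gesture
    repetition each; labels: indices into GESTURES."""
    cleaned = [bandpass(differential(x, r))
               for x, r in zip(raw_recordings, reference_recordings)]
    features = np.vstack([window_features(c[:WINDOW]) for c in cleaned])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features, labels)
    return clf
```

In the paper's terms, the subtraction step stands in for the hardware differential amplifier, and the windowed features plus classifier stand in for the machine-learning stage that distinguishes the five-gesture set; an actual replication should follow the authors' published build instructions rather than this sketch.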

Supplementary Material

• ZIP File (pn2212-file4.zip)
• Supplemental video: suppl.mov (pn2212-file3.mp4)
• Supplemental video: suppl.mov (pn2212p.mp4)
• MP4 File (p1911-matthies.mp4)

    Published In

    CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems
    May 2017
    7138 pages
    ISBN:9781450346559
    DOI:10.1145/3025453

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Badges

    • Honorable Mention

    Author Tags

    1. body potential sensing
    2. electric field sensing
    3. facial expression control
    4. hands-/eyes-free
    5. wearable computing

    Qualifiers

    • Research-article

    Conference

    CHI '17

    Acceptance Rates

    CHI '17 Paper Acceptance Rate: 600 of 2,400 submissions (25%)
    Overall Acceptance Rate: 6,199 of 26,314 submissions (24%)
