
Evaluation of Facial Expression Recognition by a Smart Eyewear for Facial Direction Changes, Repeatability, and Positional Drift

Published: 13 December 2017

Abstract

This article presents a novel smart eyewear that recognizes the wearer’s facial expressions in daily scenarios. Our device uses embedded photo-reflective sensors and machine learning to recognize the wearer’s facial expressions. Our approach focuses on the skin deformations around the eyes that occur when the wearer changes his or her facial expression. With small photo-reflective sensors, we measure the distances between the skin surface of the face and the 17 sensors embedded in the eyewear frame. A Support Vector Machine (SVM) algorithm is then applied to the information collected by the sensors. The sensors cover a wide range of facial muscle movements, and they are small and light enough to be integrated into daily-use glasses. Our evaluation shows that the device is robust to noise caused by changes in the wearer’s facial direction and by slight shifts in the glasses’ position, and that its recognition capability is reliable.
The main contributions of our work are as follows: (1) We evaluated recognition accuracy in daily scenes: the device achieved 92.8% accuracy under facial direction changes, 78.1% accuracy for repeatability (after the eyewear was removed and remounted), and 87.7% accuracy under positional drift. (2) We designed and implemented the device with usability and social acceptability in mind; it looks like conventional eyewear, so users can wear it anytime, anywhere. (3) We conducted initial field trials in a daily-life setting to test the usability of the device.
Our work is one of the first attempts to recognize and evaluate a variety of facial expressions with an unobtrusive wearable device.
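To make the sensing-and-classification pipeline concrete, below is a minimal sketch in Python of the approach the abstract describes: one frame of 17 photo-reflective sensor readings (reflected-light intensities that vary with skin-to-sensor distance) is fed to an SVM classifier. The article specifies only the 17 sensors and the use of an SVM; the read_sensor_frame function, the expression label set, the RBF kernel, the feature standardization, and the placeholder training data are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the pipeline in the abstract: 17 photo-reflective
# sensor readings per frame, classified by an SVM. Sensor I/O is mocked;
# kernel choice and scaling are assumptions, not from the article.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

N_SENSORS = 17  # sensors embedded in the eyewear frame (from the abstract)

def read_sensor_frame() -> np.ndarray:
    """Hypothetical stand-in for reading one frame of 17 reflected-light
    intensities, which vary with the skin-to-sensor distance."""
    return np.random.rand(N_SENSORS)

# Training data: rows of 17 sensor values, one expression label per row.
# Random placeholders here; in practice these come from a calibration session.
X_train = np.random.rand(200, N_SENSORS)
y_train = np.random.choice(["neutral", "smile", "frown", "surprise"], size=200)

# SVM with feature scaling; the paper names SVM but not the kernel,
# so RBF plus standardization is an assumed, common default.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# Classify one live frame of sensor readings.
frame = read_sensor_frame()
print(clf.predict(frame.reshape(1, -1))[0])
```

Standardizing the raw intensities before the SVM matters in a setup like this because each sensor sits at a different baseline distance from the skin; without per-feature scaling, sensors with larger raw ranges would dominate the kernel distance.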





    Published In

    ACM Transactions on Interactive Intelligent Systems, Volume 7, Issue 4: Special Issue on IUI 2016 Highlights (December 2017), 134 pages.
    ISSN: 2160-6455
    EISSN: 2160-6463
    DOI: 10.1145/3166060
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Publication History

    Published: 13 December 2017
    Accepted: 01 June 2017
    Revised: 01 May 2017
    Received: 01 July 2016
    Published in TIIS Volume 7, Issue 4


    Author Tags

    1. wearable
    2. affective computing
    3. eyewear computing
    4. facial expression

    Qualifiers

    • Research-article
    • Research
    • Refereed

    Funding Sources

    • JST CREST



    Cited By

    • (2024) A Real-Time and Privacy-Preserving Facial Expression Recognition System Using an AI-Powered Microcontroller. Electronics 13, 14, Article 2791. DOI: 10.3390/electronics13142791. Online publication date: 16-Jul-2024.
    • (2024) Facial Gesture Classification with Few-shot Learning Using Limited Calibration Data from Photo-reflective Sensors on Smart Eyewear. In Proceedings of the International Conference on Mobile and Ubiquitous Multimedia, 432-438. DOI: 10.1145/3701571.3701595. Online publication date: 1-Dec-2024.
    • (2024) Smile analysis in dentistry and orthodontics – a review. Journal of the Royal Society of New Zealand 55, 1, 192-205. DOI: 10.1080/03036758.2024.2316226. Online publication date: 19-Feb-2024.
    • (2024) Cross-centroid ripple pattern for facial expression recognition. Multimedia Tools and Applications. DOI: 10.1007/s11042-024-19364-9. Online publication date: 25-May-2024.
    • (2024) MeciFace: Mechanomyography and Inertial Fusion-Based Glasses for Edge Real-Time Recognition of Facial and Eating Activities. In Proceedings of the International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI 2024), 393-405. DOI: 10.1007/978-3-031-77571-0_38. Online publication date: 21-Dec-2024.
    • (2023) Analyzing the Effect of Diverse Gaze and Head Direction on Facial Expression Recognition With Photo-Reflective Sensors Embedded in a Head-Mounted Display. IEEE Transactions on Visualization and Computer Graphics 29, 10, 4124-4139. DOI: 10.1109/TVCG.2022.3179766. Online publication date: 1-Oct-2023.
    • (2023) Survey on Emotion Sensing Using Mobile Devices. IEEE Transactions on Affective Computing 14, 4, 2678-2696. DOI: 10.1109/TAFFC.2022.3220484. Online publication date: 1-Oct-2023.
    • (2023) Behavioral and Physiological Signals-Based Deep Multimodal Approach for Mobile Emotion Recognition. IEEE Transactions on Affective Computing 14, 2, 1082-1097. DOI: 10.1109/TAFFC.2021.3100868. Online publication date: 1-Apr-2023.
    • (2023) InMyFace: Inertial and mechanomyography-based sensor fusion for wearable facial activity recognition. Information Fusion 99, Article 101886. DOI: 10.1016/j.inffus.2023.101886. Online publication date: Nov-2023.
    • (2022) Consistent Smile Intensity Estimation from Wearable Optical Sensors. In 2022 10th International Conference on Affective Computing and Intelligent Interaction (ACII), 1-8. DOI: 10.1109/ACII55700.2022.9953867. Online publication date: 18-Oct-2022.