
EMO: real-time emotion recognition from single-eye images for resource-constrained eyewear devices

Published: 15 June 2020 · DOI: 10.1145/3386901.3388917

Abstract

Real-time user emotion recognition is highly desirable for many applications on eyewear devices such as smart glasses. However, enabling this capability on such devices is very challenging due to the tightly constrained image content (only eye-area images are available from the on-device eye-tracking camera) and the limited computing resources of the embedded system. In this paper, we propose and develop a novel system called EMO that recognizes, on top of a resource-limited eyewear device, the real-time emotions of the user who wears it. Unlike most existing solutions, which require whole-face images to recognize emotions, EMO utilizes only the single-eye-area images captured by the eyewear's eye-tracking camera. To achieve this, we design a customized deep-learning network that effectively extracts emotional features from input single-eye images, and a personalized feature classifier that accurately identifies a user's emotions. EMO also exploits the temporal locality and feature similarity among consecutive video frames of the eye-tracking camera to further reduce recognition latency and system resource usage. We implement EMO on two hardware platforms and conduct comprehensive experimental evaluations. Our results demonstrate that EMO can continuously recognize seven types of emotions at 12.8 frames per second with a mean accuracy of 72.2%, significantly outperforming the state-of-the-art approach while consuming far fewer system resources.
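
As a rough illustration of the frame-skipping idea described in the abstract (reusing a cached result when consecutive eye-tracking frames are nearly identical), consider the following Python sketch. This is not EMO's actual pipeline: the classifier, similarity measure, and threshold below are hypothetical stand-ins for the paper's customized deep network and personalized classifier.

    import numpy as np

    # The seven basic emotion categories (after Ekman); ordering is arbitrary.
    EMOTIONS = ["anger", "disgust", "fear", "happiness",
                "sadness", "surprise", "neutral"]

    def frame_similarity(a, b):
        # Cheap cosine similarity between two raw eye-area frames.
        a = a.ravel().astype(np.float32)
        b = b.ravel().astype(np.float32)
        return float(a @ b) / (float(np.linalg.norm(a) * np.linalg.norm(b)) + 1e-8)

    def classify(frame):
        # Hypothetical stand-in for the expensive CNN + personalized classifier.
        return EMOTIONS[int(frame.sum()) % len(EMOTIONS)]

    def recognize_stream(frames, sim_threshold=0.98):
        # Exploit temporal locality: run full recognition only when a frame
        # differs enough from the previous one; otherwise reuse the cached
        # label and skip the expensive inference pass.
        prev_frame, prev_label = None, None
        for frame in frames:
            if (prev_frame is not None
                    and frame_similarity(frame, prev_frame) > sim_threshold):
                label = prev_label
            else:
                label = classify(frame)
            prev_frame, prev_label = frame, label
            yield label

    # Example: 100 synthetic 36x60 grayscale "eye images".
    frames = [np.random.randint(0, 256, (36, 60)) for _ in range(100)]
    labels = list(recognize_stream(frames))

On a real eye-tracking stream, consecutive frames are often nearly identical, so most frames would take the cheap cached branch; the random frames in the example above almost never would.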


    Published In

    MobiSys '20: Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services
    June 2020
    496 pages
    ISBN:9781450379540
    DOI:10.1145/3386901

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 15 June 2020


    Author Tags

    1. deep learning
    2. emotion recognition
    3. eyewear devices
    4. single-eye images
    5. visual sensing

    Qualifiers

    • Research-article

    Conference

    MobiSys '20

    Acceptance Rates

    Overall Acceptance Rate 274 of 1,679 submissions, 16%


    Cited By

    • (2024) Apprenticeship-inspired elegance. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 3160-3168. DOI: 10.24963/ijcai.2024/350. Online publication date: 3-Aug-2024.
    • (2024) Artificial Intelligence of Things: A Survey. ACM Transactions on Sensor Networks 21, 1, 1-75. DOI: 10.1145/3690639. Online publication date: 30-Aug-2024.
    • (2024) PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 3, 1-28. DOI: 10.1145/3678595. Online publication date: 9-Sep-2024.
    • (2024) Estimating ‘Happy' Based on Eye-Behavior Collected from HMD. Proceedings of the 2024 Symposium on Eye Tracking Research and Applications, 1-2. DOI: 10.1145/3649902.3656364. Online publication date: 4-Jun-2024.
    • (2024) EchoPFL. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 1, 1-22. DOI: 10.1145/3643560. Online publication date: 6-Mar-2024.
    • (2024) Towards Low-Energy Adaptive Personalization for Resource-Constrained Devices. Proceedings of the 4th Workshop on Machine Learning and Systems, 73-80. DOI: 10.1145/3642970.3655826. Online publication date: 22-Apr-2024.
    • (2024) Low-Energy On-Device Personalization for MCUs. 2024 IEEE/ACM Symposium on Edge Computing (SEC), 45-58. DOI: 10.1109/SEC62691.2024.00012. Online publication date: 4-Dec-2024.
    • (2024) Hierarchical Event-RGB Interaction Network for Single-eye Expression Recognition. Information Sciences, 121539. DOI: 10.1016/j.ins.2024.121539. Online publication date: Oct-2024.
    • (2023) In the Blink of an Eye: Event-based Emotion Recognition. ACM SIGGRAPH 2023 Conference Proceedings, 1-11. DOI: 10.1145/3588432.3591511. Online publication date: 23-Jul-2023.
    • (2023) Poster: Real-Time Object Substitution for Mobile Diminished Reality with Edge Computing. Proceedings of the Eighth ACM/IEEE Symposium on Edge Computing, 279-281. DOI: 10.1145/3583740.3628422. Online publication date: 6-Dec-2023.
