
PPGface: Like What You Are Watching? Earphones Can "Feel" Your Facial Expressions

Published: 07 July 2022

Abstract

Facial expression recognition has been widely explored as a way to infer people's emotional states. Existing systems rely primarily on external cameras, which makes convenient, unobtrusive monitoring of an individual's facial expressions impractical in many real-life scenarios. To this end, we propose PPGface, a ubiquitous, user-friendly facial expression recognition platform that leverages earable devices with built-in PPG sensors. PPGface recognizes facial expressions from the dynamic PPG patterns produced by facial muscle movements. With the aid of the accelerometer, PPGface can unobtrusively detect and recognize the user's seven universal facial expressions as well as the associated body posture. We conducted a user study (N=20) using a multimodal ResNet to evaluate the performance of PPGface, and showed that it can distinguish different facial expressions with 93.5% accuracy and a 0.93 F1-score. In addition, to explore the robustness and usability of the proposed platform, we conducted several comprehensive experiments under real-world settings. The overall results validate PPGface's strong potential for deployment in future commodity earable devices.
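The abstract describes a two-branch design: PPG and accelerometer windows are fed to a multimodal ResNet that classifies the seven universal expressions. As an illustration only, the sketch below shows one plausible shape such a model could take in PyTorch; the class names, layer widths, window length, and sampling rate are all assumptions on my part, not the authors' implementation.

```python
# Illustrative sketch only -- NOT the authors' implementation.
# Assumes 1-D sensor windows: PPG (1 channel) and accelerometer (3 channels),
# processed by separate ResNet-style branches, fused, and classified into
# the seven universal facial expressions.
import torch
import torch.nn as nn

class BasicBlock1d(nn.Module):
    """Minimal 1-D residual block (ResNet-style, identity shortcut)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual connection

class MultimodalPPGFaceNet(nn.Module):
    """Hypothetical two-branch network: PPG branch + accelerometer branch."""
    def __init__(self, num_classes=7, width=32):
        super().__init__()
        self.ppg_branch = nn.Sequential(
            nn.Conv1d(1, width, kernel_size=7, padding=3),
            nn.BatchNorm1d(width), nn.ReLU(),
            BasicBlock1d(width), BasicBlock1d(width),
            nn.AdaptiveAvgPool1d(1),   # pool over time -> (batch, width, 1)
        )
        self.acc_branch = nn.Sequential(
            nn.Conv1d(3, width, kernel_size=7, padding=3),
            nn.BatchNorm1d(width), nn.ReLU(),
            BasicBlock1d(width), BasicBlock1d(width),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(2 * width, num_classes)

    def forward(self, ppg, acc):
        # ppg: (batch, 1, T), acc: (batch, 3, T); window length T is assumed
        fused = torch.cat([self.ppg_branch(ppg).flatten(1),
                           self.acc_branch(acc).flatten(1)], dim=1)
        return self.classifier(fused)  # logits over the 7 expressions

# Example: a batch of 4-second windows at an assumed 100 Hz sampling rate
model = MultimodalPPGFaceNet()
logits = model(torch.randn(8, 1, 400), torch.randn(8, 3, 400))
print(logits.shape)  # torch.Size([8, 7])
```

Late fusion of pooled per-branch features, as sketched here, is one common choice for heterogeneous sensor streams; the paper's actual fusion strategy and network depth may differ.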





      Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 6, Issue 2
June 2022
1551 pages
EISSN: 2474-9567
DOI: 10.1145/3547347
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 07 July 2022
      Published in IMWUT Volume 6, Issue 2


      Author Tags

      1. Blood Vessel Deformation
      2. Ear Canal
      3. Facial Expression
      4. PPG
      5. Photoplethysmogram

      Qualifiers

      • Research-article
      • Research
      • Refereed

      Funding Sources

      • National Science Foundation


