DOI: 10.1145/3523089.3523103

Effective DemeapexNet: Revealing Spontaneous Facial Micro-Expressions

Published: 23 May 2022

Abstract

In affective computing, numerous deep learning-based strategies have been developed to classify facial micro-expressions (MEs), yet high recognition accuracy remains elusive due to inherent challenges: the low intensity of facial micro-movements, region-specific changes, sub-second duration, and the inconsistency and limited number of samples in publicly available spontaneous datasets. In this paper, we address these issues and propose a highly effective end-to-end deep model that recognizes micro-expressions from apex frames. We implement two-stage transfer learning, pre-training on ImageNet and four macro-expression datasets and then fine-tuning on four spontaneous micro-expression benchmark datasets, namely CASME, CASME II, CAS(ME)2, and SAMM, under four validation protocols. Our experimental results surpass state-of-the-art methods and demonstrate stronger model generalization, which can expedite applications such as lie detection, homeland security, criminal investigation, business deal negotiation, and clinical diagnosis through psychoanalysis.


Cited By

  • (2023) Highly effective end-to-end single-to-multichannel feature fusion and ensemble classification to decode emotional secretes from small-scale spontaneous facial micro-expressions. Journal of King Saud University - Computer and Information Sciences, 35(8): 101653. DOI: 10.1016/j.jksuci.2023.101653. Online publication date: Sep. 2023.


Published In

ICCDA '22: Proceedings of the 2022 6th International Conference on Compute and Data Analysis
February 2022
131 pages
ISBN: 9781450395472
DOI: 10.1145/3523089

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Apex frame
  2. Deep learning
  3. End-to-end model
  4. Micro-expression
  5. Micro-expression recognition
  6. Transfer learning

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Science Foundation of Ministry of Education (MOE) of China and China Mobile Communications Corporation
  • Sichuan Science and Technology Major Project

Conference

ICCDA 2022
