DOI: 10.1145/3503161.3551599
research-article

ABPN: Apex and Boundary Perception Network for Micro- and Macro-Expression Spotting

Published: 10 October 2022

ABSTRACT

Recently, micro-expression (ME) analysis has achieved remarkable progress in a wide range of applications, since an ME is an involuntary facial expression that truly reflects a person's psychological state. In the procedure of ME analysis, spotting MEs is an essential step, yet it is non-trivial to detect them in long videos because of their short duration and low intensity. To alleviate this problem, in this paper we propose a novel micro- and macro-expression (MaE) spotting framework based on an Apex and Boundary Perception Network (ABPN), which mainly consists of three parts, i.e., a video encoding module (VEM), a probability evaluation module (PEM), and an expression proposal generation module (EPGM). Firstly, in the VEM we adopt the Main Directional Mean Optical Flow (MDMO) algorithm and calculate optical flow differences to extract facial motion features, which alleviates the impact of head movement and of other facial areas on ME spotting. Then, we extract temporal features with one-dimensional convolutional layers and introduce the PEM to infer the auxiliary probability that each frame belongs to an apex or boundary frame. With these frame-level auxiliary probabilities, the EPGM further combines frames from different categories to generate expression proposals for accurate localization. Besides, we conduct comprehensive experiments on the MEGC2022 spotting task and demonstrate that our proposed method achieves significant improvement over state-of-the-art baselines on the CAS(ME)² and SAMM-LV datasets. The implemented code is publicly available at https://github.com/wenhaocold/USTC_ME_Spotting.
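To make the frame-level-probability-plus-proposal idea in the abstract concrete, the sketch below shows a minimal PyTorch-style illustration: a small 1-D convolutional head that maps per-frame motion features to onset/apex/offset probabilities, and a naive routine that pairs confident boundary frames into interval proposals when a confident apex frame lies between them. This is not the authors' implementation (see the linked repository for that); the module names, feature dimension, probability categories, and thresholds are assumptions made only for this example.

```python
import torch
import torch.nn as nn

class FrameProbabilityHead(nn.Module):
    """Illustrative 1-D conv encoder: per-frame motion features (e.g., MDMO-style
    optical-flow differences) -> per-frame onset / apex / offset probabilities."""

    def __init__(self, feat_dim: int = 12, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, 3, kernel_size=1),  # onset / apex / offset logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, feat_dim) -> probabilities of shape (batch, frames, 3)
        logits = self.net(x.transpose(1, 2)).transpose(1, 2)
        return torch.sigmoid(logits)


def generate_proposals(probs: torch.Tensor, thr: float = 0.5,
                       min_len: int = 3, max_len: int = 120):
    """Naive proposal generation (assumed logic): pair every confident onset frame
    with a later confident offset frame, keep the pair only if a confident apex
    frame lies between them, and score the interval by the product of the three
    probabilities."""
    onset_p, apex_p, offset_p = probs[:, 0], probs[:, 1], probs[:, 2]
    onsets = (onset_p >= thr).nonzero(as_tuple=True)[0].tolist()
    offsets = (offset_p >= thr).nonzero(as_tuple=True)[0].tolist()
    proposals = []
    for s in onsets:
        for e in offsets:
            if not (min_len <= e - s <= max_len):
                continue
            apex_score = apex_p[s:e + 1].max().item()
            if apex_score >= thr:
                score = onset_p[s].item() * offset_p[e].item() * apex_score
                proposals.append((s, e, score))
    # Sort by confidence; non-maximum suppression over overlapping intervals
    # would typically follow before evaluation.
    return sorted(proposals, key=lambda p: p[2], reverse=True)


if __name__ == "__main__":
    feats = torch.randn(1, 300, 12)            # 300 frames of illustrative features
    probs = FrameProbabilityHead()(feats)[0]   # (300, 3) frame-level probabilities
    print(generate_proposals(probs)[:5])
```

In this toy version the proposal score is simply the product of the onset, apex, and offset probabilities; any such scoring or suppression detail should be taken from the released code rather than from this sketch.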


• Published in

MM '22: Proceedings of the 30th ACM International Conference on Multimedia
October 2022
7537 pages
ISBN: 9781450392037
DOI: 10.1145/3503161

Copyright © 2022 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall acceptance rate: 995 of 4,171 submissions, 24%
