
Experimental Analysis of the Facial Expression Recognition of Male and Female

Published: 22 October 2019

Abstract

With the development of deep learning, facial expression recognition (FER) has attracted growing research attention and achieved decent results in the laboratory. However, some studies have pointed out defects in FER systems built on the universal theory of expression and argued that human expression is group-specific. The purpose of this study is to analyze how data from different genders affect the recognition rate of an FER classification system. The study first shows that the existing FER system yields different recognition rates for male and female data. It then confirms an in-group recognition advantage between the gender groups in the experiment. A classification system was constructed with Inception V3 and transfer learning, and a comparative experiment was designed. The results show that datasets with different gender ratios do influence the outcome to some extent, and that the recognition rate for female data is slightly higher than for male data. Finally, it is concluded that models trained on male data recognize expressions of the male group at a higher rate, and likewise for models trained on female data, a pattern similar to that observed between different cultural groups.
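The comparative protocol the abstract describes, training per-gender models and scoring each on male and female test sets, ultimately reduces to computing a recognition rate (accuracy) separately for each demographic group. A minimal sketch of that per-group scoring step is shown below; the helper name, labels, and group codes are illustrative assumptions, not taken from the paper:

```python
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Recognition rate (accuracy) computed separately for each group label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy run: expression labels with each test face annotated by gender group.
truth  = ["happy", "sad", "angry", "happy", "fear", "sad"]
pred   = ["happy", "sad", "happy", "happy", "fear", "angry"]
gender = ["F", "F", "F", "M", "M", "M"]
print(per_group_accuracy(truth, pred, gender))  # → {'F': 0.666..., 'M': 0.666...}
```

Comparing these per-group rates across models trained on male-only versus female-only data is what reveals the in-group recognition advantage the study reports.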


Cited By

  • (2023) "A Novel DAAM-DCNNs Hybrid Approach to Facial Expression Recognition to Enhance Learning Experience." Computational Science – ICCS 2023. DOI: 10.1007/978-3-031-36027-5_11, 140-154. Online publication date: 26-Jun-2023.


    Published In

    CSAE '19: Proceedings of the 3rd International Conference on Computer Science and Application Engineering
    October 2019
    942 pages
    ISBN: 9781450362948
    DOI: 10.1145/3331453

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Convolutional Neural Network
    2. Facial expression recognition (FER)

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    CSAE 2019

    Acceptance Rates

    Overall Acceptance Rate 368 of 770 submissions, 48%

