Research Article
DOI: 10.1145/3387168.3387175

Low Complexity Deep Learning for Mobile Face Expression Recognition

Published: 25 May 2020

ABSTRACT

The problem of Face Expression Recognition (FER) remains challenging due to variations in illumination and pose as well as partial occlusion of the face. Deep neural networks have been increasingly applied to this problem and have achieved excellent recognition results, especially on challenging datasets such as FER2013. However, the trend has been towards ever more complex networks to increase performance. In this paper, we develop a low-complexity model and experiment with a variety of parameters to evaluate the performance of these models on the FER2013 dataset relative to their complexity. We show that we are able to obtain an accuracy of 70.86% on the FER test images, which approximately matches the winning entry of the FER2013 competition while our model is five times smaller. We further show that the model size can be reduced by another factor of five, yielding a model with fewer than 500,000 parameters that still maintains an excellent accuracy of 68.43%, making it well suited to resource-constrained environments.
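The abstract does not spell out the network architecture, so the following Keras sketch is purely illustrative: it shows one way a model for FER2013 (48x48 grayscale images, seven expression classes) can be kept under 500,000 parameters. The layer choices, filter counts, and use of MobileNet-style depthwise-separable convolutions here are assumptions for illustration, not the authors' model.

```python
# Illustrative only: a compact CNN for FER2013 (48x48 grayscale, 7 classes),
# assembled with tf.keras. This is NOT the architecture from the paper; it is
# a hypothetical sketch showing how depthwise-separable convolutions and
# global average pooling keep the parameter count well under 500,000.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_small_fer_model(num_classes: int = 7) -> tf.keras.Model:
    inputs = layers.Input(shape=(48, 48, 1))          # FER2013 images: 48x48, grayscale
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D()(x)                       # 48x48 -> 24x24

    # Depthwise-separable blocks are far cheaper in parameters than standard
    # convolutions with the same filter counts.
    for filters in (64, 128, 256):
        x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D()(x)                   # 24 -> 12 -> 6 -> 3

    x = layers.GlobalAveragePooling2D()(x)             # avoids a large dense layer
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_small_fer_model()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # total parameters stay in the tens of thousands, well below 500,000
```

The main saving in such small-footprint models typically comes from replacing a large fully connected head with global average pooling and from using separable rather than standard convolutions; the exact trade-off between size and accuracy is what the paper studies on FER2013.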


Published in

ICVISP 2019: Proceedings of the 3rd International Conference on Vision, Image and Signal Processing
August 2019, 584 pages
ISBN: 9781450376259
DOI: 10.1145/3387168
Copyright © 2019 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

ICVISP 2019 paper acceptance rate: 126 of 277 submissions (45%). Overall acceptance rate: 186 of 424 submissions (44%).