Detecting Affect States Using VGG16, ResNet50 and SE-ResNet50 Networks

  • Original Research
  • Published in SN Computer Science, 2020

A Publisher Correction to this article was published on 28 September 2023.

Abstract

Affect detection is a key component in developing intelligent human-computer interface systems. State-of-the-art affect detection systems assume the availability of full, unoccluded face images. This work uses convolutional neural networks with transfer learning to detect seven basic affect states, viz. Angry, Contempt, Disgust, Fear, Happy, Sad and Surprise. The paper compares three pre-trained networks, viz. VGG16, ResNet50 and SE-ResNet50, the last of which integrates a squeeze-and-excitation architectural block into ResNet50. The modified VGG16, ResNet50 and SE-ResNet50 networks are trained on images from the extended Cohn-Kanade (CK+) dataset, and the results are compared. We achieve validation accuracies of 96.8%, 99.47% and 97.34% for VGG16, ResNet50 and SE-ResNet50, respectively. Apart from accuracy, the other performance metrics used in this work are precision and recall. Our evaluation, based on these metrics, shows that accurate affect detection is obtained from all three networks, with ResNet50 being the most accurate.
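As a concrete illustration of the approach described above, the following Python/Keras sketch shows transfer learning from an ImageNet-pre-trained ResNet50 together with a stand-alone squeeze-and-excitation (SE) block. It is a minimal sketch rather than the authors' implementation: the classification-head width, optimizer settings, input size, placement of the SE block and the listing of Surprise as the seventh class are assumptions.

    # Hypothetical sketch, not the authors' code: transfer learning with a
    # pre-trained ResNet50 backbone for seven-class affect recognition, plus an
    # illustrative squeeze-and-excitation (SE) block of the kind used in SE-ResNet50.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 7                      # assumed: the seven basic affect states
    INPUT_SHAPE = (224, 224, 3)          # ImageNet-sized input (assumption)

    def se_block(x, reduction=16):
        # Squeeze: global average pooling collapses each feature map to one value.
        channels = x.shape[-1]
        s = layers.GlobalAveragePooling2D()(x)
        # Excitation: a bottleneck MLP learns per-channel importance weights.
        s = layers.Dense(channels // reduction, activation="relu")(s)
        s = layers.Dense(channels, activation="sigmoid")(s)
        s = layers.Reshape((1, 1, channels))(s)
        # Rescale the original feature maps channel by channel.
        return layers.Multiply()([x, s])

    # Backbone pre-trained on ImageNet; convolutional weights are frozen so that
    # only the new classification head is learned (transfer learning).
    base = tf.keras.applications.ResNet50(weights="imagenet",
                                          include_top=False,
                                          input_shape=INPUT_SHAPE)
    base.trainable = False

    inputs = layers.Input(shape=INPUT_SHAPE)
    x = base(inputs, training=False)
    x = se_block(x)                      # illustrative only; SE-ResNet50 embeds SE blocks inside ResNet50 itself
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(256, activation="relu")(x)   # head width is an assumption
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy",
                           tf.keras.metrics.Precision(name="precision"),
                           tf.keras.metrics.Recall(name="recall")])

A model built this way can then be trained with model.fit on the face images and evaluated with accuracy, precision and recall, mirroring the metrics reported in the abstract; the paper's exact preprocessing, augmentation and fine-tuning schedule are not reproduced here.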

Author information

Corresponding author

Correspondence to Dhananjay Theckedath.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Advances in Computational Intelligence, Paradigms and Applications” guest edited by Young Lee and S. Meenakshi Sundaram.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Theckedath, D., Sedamkar, R.R. Detecting Affect States Using VGG16, ResNet50 and SE-ResNet50 Networks. SN COMPUT. SCI. 1, 79 (2020). https://doi.org/10.1007/s42979-020-0114-9
