Spatial Attention Network with High Frequency Component for Facial Expression Recognition

  • Conference paper
  • In: Frontiers of Computer Vision (IW-FCV 2024)

Abstract

Object classification is among the most advanced fields in computer vision today, and there are ongoing efforts to classify datasets drawn from real-world industries, beyond just public experimental data. Facial expression recognition is one of the most prominent examples of such a task, closely related to the Human-Computer Interaction (HCI) industry. Unfortunately, facial expression classification is often more challenging than classifying public benchmark datasets. This paper addresses these challenges by mimicking the human facial expression recognition process, proposing an attention network that leverages high-frequency components in the way humans are thought to perceive emotions. The proposed attention module vectorizes the singular value matrices of the query (the high-frequency component of the 1-channel input tensor) and the key (the 1-channel input tensor) and builds a pairwise cross-correlation matrix as the outer product of the two vectors. The correlation matrix is transformed into an attention score by a convolution layer followed by a sigmoid function, and the score is then multiplied element-wise with the value (the input tensor) to perform attention. Experiments with the ResNet18 and MobileNetV2 models on the FER2013, JAFFE, and CK+ datasets demonstrate the effectiveness of the proposed attention module and suggest its potential for real-time facial expression recognition tasks.
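
To make the pipeline above concrete, the following is a minimal PyTorch sketch of the described attention module. The abstract does not specify how the 1-channel key or its high-frequency component are obtained, so the channel-mean pooling, the Laplacian high-pass filter, and the assumption of square (H = W) feature maps below are illustrative choices, not the paper's confirmed implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HighFreqSpatialAttention(nn.Module):
    """Sketch of the attention module described in the abstract.

    Assumptions not stated in the abstract: square (H == W) feature
    maps, channel-mean pooling to form the 1-channel key, and a
    Laplacian filter as the high-pass operator for the query.
    """

    def __init__(self):
        super().__init__()
        # Laplacian kernel as a stand-in high-pass filter (assumption).
        lap = torch.tensor([[0., -1., 0.],
                            [-1., 4., -1.],
                            [0., -1., 0.]])
        self.register_buffer("laplacian", lap.view(1, 1, 3, 3))
        # Convolution that maps the correlation matrix to a score map.
        self.score_conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):                                  # x: (B, C, H, W), H == W
        key = x.mean(dim=1, keepdim=True)                  # 1-channel key (assumed pooling)
        query = F.conv2d(key, self.laplacian, padding=1)   # high-frequency component

        # Vectorized singular values of the H x W query and key matrices.
        sq = torch.linalg.svdvals(query.squeeze(1))        # (B, min(H, W))
        sk = torch.linalg.svdvals(key.squeeze(1))          # (B, min(H, W))

        # Outer product -> pairwise cross-correlation matrix.
        corr = sq.unsqueeze(2) * sk.unsqueeze(1)           # (B, H, W) when H == W

        # Convolution + sigmoid -> spatial attention score.
        score = torch.sigmoid(self.score_conv(corr.unsqueeze(1)))  # (B, 1, H, W)

        # Element-wise multiplication with the value (the input tensor).
        return x * score
```

Used inside a backbone, the module would wrap intermediate feature maps, e.g. HighFreqSpatialAttention()(torch.randn(8, 64, 56, 56)); the paper pairs its module with ResNet18 and MobileNetV2, though the exact placement is not described in this preview.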

This work was supported by the "Regional Innovation Strategy (RIS)" program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (MOE) (2021RIS-003).

Author information

Corresponding author

Correspondence to Kanghyun Jo.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Kim, S., Jo, K. (2024). Spatial Attention Network with High Frequency Component for Facial Expression Recognition. In: Irie, G., Shin, C., Shibata, T., Nakamura, K. (eds) Frontiers of Computer Vision. IW-FCV 2024. Communications in Computer and Information Science, vol 2143. Springer, Singapore. https://doi.org/10.1007/978-981-97-4249-3_11

  • DOI: https://doi.org/10.1007/978-981-97-4249-3_11

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-4248-6

  • Online ISBN: 978-981-97-4249-3

  • eBook Packages: Computer Science, Computer Science (R0)
