
Affective Prior Topology Graph Guided Facial Expression Recognition

  • Conference paper
Biometric Recognition (CCBR 2023)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 14463))


Abstract

Facial expression recognition (FER) aims to infer human emotional states from facial features. However, previous studies have largely concentrated on either emotion classification or sentiment levels in isolation, disregarding the dependencies between the two that are vital for perceiving human emotions. To address this problem, we propose a novel affective prior topology graph network (AptGATs). AptGATs explicitly captures the topological relationship between the two kinds of labels and jointly predicts emotion categories and sentiment estimates for robust multi-task learning of FER. Specifically, we first construct an Affective Prior Topology Graph (AptG) to elucidate the topological relationships among affective labels: it takes the affective labels as nodes and establishes edges grounded in cognitive psychology. We then introduce a graph attention network built on AptG that models the relationships within the affective labels, and we further propose a parallel superposition mechanism to obtain a richer information representation. Experiments on the in-the-wild datasets AffectNet and Aff-Wild2 validate the effectiveness of our method; on these public benchmarks our model outperforms current state-of-the-art methods.
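As a rough illustration of the label-graph attention described in the abstract, the sketch below implements a single graph-attention layer (in the style of the original graph attention networks of Veličković et al.) over a small affective label graph. The adjacency matrix, feature dimensions, and function names are illustrative assumptions only; the paper's parallel superposition mechanism and the cognitive-psychology-based edge construction of AptG are not reproduced here.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of attention logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_layer(H, A, W, a):
    """One graph-attention layer over an affective label graph (sketch).

    H: (N, F)  node features, e.g. word embeddings of the label names
    A: (N, N)  adjacency of the prior topology graph (nonzero = edge)
    W: (F, F') shared linear transform applied to every node
    a: (2*F',) attention vector scoring concatenated node pairs
    """
    Z = H @ W                       # transform all node features
    N = Z.shape[0]
    out = np.zeros_like(Z)
    for i in range(N):
        # Attend over graph neighbours; a self-loop is always included.
        nbrs = [j for j in range(N) if A[i, j] or j == i]
        logits = np.array([np.concatenate([Z[i], Z[j]]) @ a for j in nbrs])
        logits = np.where(logits > 0, logits, 0.2 * logits)  # LeakyReLU
        alpha = softmax(logits)     # normalised attention coefficients
        out[i] = sum(w * Z[j] for w, j in zip(alpha, nbrs))
    return out
```

In a multi-task setup like the one the abstract describes, the updated label-node features would then be combined with image features to score both discrete emotion categories and continuous sentiment values.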



Author information

Correspondence to Xiao Sun.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wang, R., Sun, X. (2023). Affective Prior Topology Graph Guided Facial Expression Recognition. In: Jia, W., et al. Biometric Recognition. CCBR 2023. Lecture Notes in Computer Science, vol 14463. Springer, Singapore. https://doi.org/10.1007/978-981-99-8565-4_17

Download citation

  • DOI: https://doi.org/10.1007/978-981-99-8565-4_17

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8564-7

  • Online ISBN: 978-981-99-8565-4

  • eBook Packages: Computer Science (R0)
