ACLM: Adaptive Compensatory Label Mining for Facial Expression Recognition

  • Conference paper
  • First Online:
Image and Graphics (ICIG 2023)

Abstract

Label ambiguity is one of the key issues in Facial Expression Recognition (FER). Previous works tackle this issue by replacing the original annotations with new ones or by characterizing them with soft labels, but such treatment is often insufficient or redundant. Different from these methods, we analyze ambiguous samples from the perspective of label compensation, taking the subjectivity of FER into consideration. To this end, we propose an Adaptive Compensatory Label Mining model (ACLM), which adaptively learns compensatory labels for ambiguous samples while retaining the original labels. The Compensated Label Mining (CLM) module is used to evaluate the confidence and importance of the learned compensatory labels. Qualitative and quantitative experiments demonstrate the superiority of using an adaptive combination of original and compensatory labels to guide FER models.
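
To make the high-level description above concrete, the sketch below shows one plausible way such a scheme could be wired up in PyTorch: an original one-hot annotation is blended with a "compensatory" label mined from the model's own predictions, with a weight that adapts to the model's confidence in that second label. This is only an illustrative sketch under those assumptions, not the authors' ACLM/CLM implementation; the function names (mine_compensatory_targets, soft_ce) and the hyperparameter alpha are hypothetical.

import torch
import torch.nn.functional as F


def mine_compensatory_targets(logits: torch.Tensor,
                              labels: torch.Tensor,
                              alpha: float = 0.3) -> torch.Tensor:
    # Illustrative sketch only: blend the original annotation with a mined
    # compensatory label; this is NOT the paper's ACLM/CLM formulation.
    probs = F.softmax(logits, dim=1)                     # model beliefs, (N, C)
    one_hot = F.one_hot(labels, probs.size(1)).float()   # original labels, (N, C)

    # Compensatory candidate: most probable class other than the annotation.
    masked = probs.masked_fill(one_hot.bool(), float("-inf"))
    comp = masked.argmax(dim=1)
    comp_conf = probs.gather(1, comp.unsqueeze(1)).squeeze(1)

    # Adaptive weight: confident, unambiguous samples stay close to one-hot.
    w = (alpha * comp_conf).unsqueeze(1)
    comp_hot = F.one_hot(comp, probs.size(1)).float()
    return (1.0 - w) * one_hot + w * comp_hot            # soft targets, rows sum to 1


def soft_ce(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Cross-entropy against soft targets.
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()


# Hypothetical training step:
#   logits = model(images)
#   targets = mine_compensatory_targets(logits.detach(), labels)
#   loss = soft_ce(logits, targets)
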



Author information

Corresponding author

Correspondence to Qingshan Liu.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, C., Wang, S., Shuai, H., Liu, Q. (2023). ACLM: Adaptive Compensatory Label Mining for Facial Expression Recognition. In: Lu, H., et al. Image and Graphics. ICIG 2023. Lecture Notes in Computer Science, vol 14358. Springer, Cham. https://doi.org/10.1007/978-3-031-46314-3_3


  • DOI: https://doi.org/10.1007/978-3-031-46314-3_3

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-46313-6

  • Online ISBN: 978-3-031-46314-3

  • eBook Packages: Computer Science, Computer Science (R0)
