Federated Scaling of Pre-trained Models for Deep Facial Expression Recognition

  • Conference paper
Computer Vision and Image Processing (CVIP 2023)

Abstract

Building an efficient deep learning-based Facial Expression Recognition (FER) system is challenging because it requires large amounts of personal data at a time of growing data-privacy concerns. Federated learning has emerged as a promising solution to this problem, but it is communication-inefficient. Recently, pre-trained models have shown strong convergence behavior in federated learning setups. In this paper, we extend traditional FER towards a new paradigm and study the performance of federated fine-tuning of standard pre-trained vision models for FER. More specifically, we propose a Federated Deep Facial Expression Recognition (FedFER) framework in which clients jointly learn, without sharing any data, to fuse the representations generated by pre-trained deep learning models rather than training a large-scale model from scratch. Through extensive experiments with standard pre-trained vision models (ResNet-50, VGG-16, Xception, Vision Transformers) and benchmark datasets (CK+, FERG, FER-2013, JAFFE, MUG), this paper presents interesting perspectives for future research on federated deep FER.
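
The abstract outlines federated fine-tuning on top of pre-trained vision backbones; the sketch below illustrates one plausible reading of that setup, in which each client trains a small head over a frozen pre-trained feature extractor and a server aggregates only the head weights with FedAvg. It is a minimal sketch, not the authors' FedFER implementation: the use of PyTorch/torchvision, the ResNet-50 backbone, the head architecture, the helper names (make_backbone, make_head, local_update, fed_avg), the hyperparameters, and the toy random data are all assumptions made for illustration.

```python
# Minimal sketch: federated fine-tuning of a small head over a frozen
# pre-trained backbone, aggregated with plain FedAvg. All specifics are
# illustrative assumptions, not the paper's actual FedFER method.

import copy
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7      # e.g. the seven basic expressions (illustrative)
NUM_CLIENTS = 5
LOCAL_EPOCHS = 1
ROUNDS = 3


def make_backbone():
    """Frozen ImageNet-pre-trained ResNet-50 used only as a feature extractor."""
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    net.fc = nn.Identity()              # expose the 2048-d pooled features
    for p in net.parameters():
        p.requires_grad = False
    return net.eval()


def make_head():
    """Small trainable head; only these weights are communicated."""
    return nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, NUM_CLASSES))


def local_update(global_head, backbone, loader, epochs=LOCAL_EPOCHS):
    """One client's local training pass starting from the current global head."""
    head = copy.deepcopy(global_head)
    opt = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    head.train()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = backbone(x)     # frozen backbone: no gradients needed
            opt.zero_grad()
            loss = loss_fn(head(feats), y)
            loss.backward()
            opt.step()
    return head.state_dict(), len(loader.dataset)


def fed_avg(states, sizes):
    """Dataset-size-weighted average of client head weights (FedAvg)."""
    total = sum(sizes)
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total) for s, n in zip(states, sizes))
    return avg


if __name__ == "__main__":
    backbone = make_backbone()
    global_head = make_head()
    # Toy per-client loaders with random tensors standing in for face crops.
    loaders = [
        torch.utils.data.DataLoader(
            torch.utils.data.TensorDataset(
                torch.randn(32, 3, 224, 224),
                torch.randint(0, NUM_CLASSES, (32,)),
            ),
            batch_size=8,
        )
        for _ in range(NUM_CLIENTS)
    ]
    for rnd in range(ROUNDS):
        states, sizes = zip(*(local_update(global_head, backbone, dl) for dl in loaders))
        global_head.load_state_dict(fed_avg(list(states), list(sizes)))
        print(f"round {rnd + 1}: aggregated {NUM_CLIENTS} client heads")
```

Because only the small head travels between clients and server in this sketch, the per-round payload is a fraction of the full backbone, which is the kind of communication saving the abstract alludes to when it argues for fusing pre-trained representations instead of training a large model from scratch.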

Notes

  1. https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data
  2. http://vasc.ri.cmu.edu/idb/html/face/facial_expression/
  3. https://grail.cs.washington.edu/projects/deepexpr/ferg-2d-db.html
  4. https://www.kasrl.org/jaffe_download.html
  5. https://mug.ee.auth.gr/fed/

Author information

Corresponding author

Correspondence to Mridula Verma.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Srihitha, P.V.N.P., Verma, M., Prasad, M.V.N.K. (2024). Federated Scaling of Pre-trained Models for Deep Facial Expression Recognition. In: Kaur, H., Jakhetiya, V., Goyal, P., Khanna, P., Raman, B., Kumar, S. (eds) Computer Vision and Image Processing. CVIP 2023. Communications in Computer and Information Science, vol 2011. Springer, Cham. https://doi.org/10.1007/978-3-031-58535-7_8

  • DOI: https://doi.org/10.1007/978-3-031-58535-7_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-58534-0

  • Online ISBN: 978-3-031-58535-7

  • eBook Packages: Computer Science, Computer Science (R0)
