Deep subspace learning for expression recognition driven by a two-phase representation classifier

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Recent research has shown that the deep subspace learning (DSL) method can extract high-level features and better represent abstract semantics of data for facial expression recognition. While significant advances have been made in this area, traditional sparse representation classifiers or collaborative representation classifiers are still predominantly used for classification purposes. In this paper, we propose a two-phase representation classifier (TPRC)-driven DSL model for robust facial expression recognition. First, the DSL-based principal component analysis network is used to extract high-level features of training and query samples. Then, the proposed TPRC uses the Euclidean distance as a measure to determine the optimal training sample features (TSFs) for the query sample feature (QSF). Finally, the TPRC represents the QSF as a linear combination of all optimal TSFs and uses the representation result to perform classification. Experiments based on several benchmark datasets confirm that the proposed model exhibits highly competitive performance.
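
As a rough illustration of the classification stage summarized above, the sketch below implements a two-phase representation classifier over pre-extracted feature vectors (e.g., PCANet outputs). It is a minimal interpretation of the abstract rather than the authors' reference implementation: the neighborhood size m, the ridge parameter lam, and the residual-based class decision are assumptions introduced here for illustration.

    # Minimal sketch of a two-phase representation classifier (TPRC) over
    # pre-extracted features. The neighborhood size m, the ridge parameter lam,
    # and the class decision by reconstruction residual are assumptions.
    import numpy as np

    def tprc_classify(train_feats, train_labels, query_feat, m=30, lam=0.01):
        """Classify one query sample feature (QSF) against training sample features (TSFs).

        train_feats : (n, d) array, one feature vector per training sample
        train_labels: (n,) array of class labels
        query_feat  : (d,) query sample feature
        """
        # Phase 1: keep the m training features closest to the query in Euclidean distance.
        dists = np.linalg.norm(train_feats - query_feat, axis=1)
        idx = np.argsort(dists)[:m]
        X, y = train_feats[idx], train_labels[idx]

        # Phase 2: represent the query as a linear combination of the selected features
        # (ridge-regularized least squares), then score each class by its reconstruction residual.
        A = X.T                                            # (d, m) dictionary of selected TSFs
        coef = np.linalg.solve(A.T @ A + lam * np.eye(len(idx)), A.T @ query_feat)

        best_class, best_res = None, np.inf
        for c in np.unique(y):
            mask = (y == c)
            recon = A[:, mask] @ coef[mask]                # contribution of class c
            res = np.linalg.norm(query_feat - recon)
            if res < best_res:
                best_class, best_res = c, res
        return best_class

In this reading, the first phase acts as a local filter that discards training features far from the query, and the second phase assigns the query to the class whose selected features best reconstruct it.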


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants 61071199 and 61771420, by the China Postdoctoral Science Foundation under Grant 2018M641674, and by the Doctoral Foundation of Yanshan University under Grant BL18033.

Author information


Corresponding author

Correspondence to Zhengping Hu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Sun, Z., Chiong, R., Hu, Z. et al. Deep subspace learning for expression recognition driven by a two-phase representation classifier. SIViP 14, 437–444 (2020). https://doi.org/10.1007/s11760-019-01568-4
