
Influenced factors reduction for robust facial expression recognition

Published in Multimedia Tools and Applications.

Abstract

The performance of facial expression recognition (FER) degrades under influencing factors such as individual differences and a limited number of training samples. Reducing these factors in facial images can therefore improve FER performance. In this paper, we propose reducing such influencing factors for robust FER. First, we reduce the influence of individual differences using an auxiliary neutral dictionary and obtain a feature space that highlights expression features. Second, we exploit difference training samples to synthesize virtual training samples, alleviating the limitation imposed by few training samples. Third, we combine the difference dictionary with the virtual training samples to form an extended dictionary and select the optimal training samples from it. Finally, we perform classification with a 2-norm representation algorithm based on the selected optimal training samples.
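The final step above, 2-norm representation based classification, can be illustrated with a minimal sketch. This is not the authors' full pipeline (the neutral dictionary, virtual-sample synthesis, and optimal-sample selection are omitted); it only shows the standard ridge-regularized (collaborative) coding and class-wise reconstruction step that a 2-norm representation classifier performs. The function name and toy data are illustrative assumptions.

```python
import numpy as np

def l2_representation_classify(D, labels, y, lam=0.01):
    """Classify test sample y via 2-norm (collaborative) representation.

    D      : (d, n) dictionary whose columns are training samples
    labels : (n,)   class label of each dictionary column
    y      : (d,)   test sample
    lam    : ridge regularization weight
    """
    # 2-norm regularized coding: a = (D^T D + lam*I)^{-1} D^T y
    n = D.shape[1]
    a = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)

    # Assign y to the class whose coefficients reconstruct it best
    best_class, best_err = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        err = np.linalg.norm(y - D[:, mask] @ a[mask])
        if err < best_err:
            best_class, best_err = c, err
    return best_class
```

Unlike 1-norm (sparse) coding, the 2-norm code has a closed-form solution, which is why such representations are attractive when the dictionary is extended with many virtual samples.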




Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 61071199 and 61771420), the Natural Science Foundation of Hebei Province of China (No. F2016203422), and the Postgraduate Innovation Project of Hebei Province (No. CXZZBS2017051). The authors declare that there is no conflict of interest regarding the publication of this paper.

Author information

Corresponding author

Correspondence to Zheng-ping Hu.


About this article


Cite this article

Sun, Z., Hu, Zp. & Wang, M. Influenced factors reduction for robust facial expression recognition. Multimed Tools Appl 77, 16947–16963 (2018). https://doi.org/10.1007/s11042-017-5264-y
