Facial Expression Recognition Based on Quaternion-Space and Multi-features Fusion

  • Conference paper
Rough Sets and Knowledge Technology (RSKT 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9436)

Abstract

Feature fusion techniques are increasingly used in facial expression recognition. However, traditional serial and parallel feature fusion methods suffer from high feature dimensionality and insufficient fusion of the available feature categories. To address these problems, a novel facial expression recognition method based on quaternion-space and multi-feature fusion is proposed. First, four kinds of expression features are extracted: Gabor wavelet, LBP, LPQ, and DCT features; a PCA+CCA framework is then proposed to reduce the dimensionality of the four original features. Second, quaternions are used to construct the combinative features. Third, a novel quaternion-space HDA method is proposed for dimensionality reduction of the combinative features. Finally, an SVM is used as the classifier. Experimental results indicate that the proposed method fuses the four kinds of features more effectively and achieves higher recognition rates than traditional feature fusion methods.

This paper is partially supported by the National Natural Science Foundation of China under Grant Nos. 61472056 and 61300059, and by the Ministry of Science, ICT & Future Planning (MSIP) of Korea in the ICT R&D Program 2013 under Grant No. 10039149.
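The parallel fusion step described in the abstract, in which four dimension-reduced feature vectors become the four components of a quaternion-valued feature vector, can be sketched as follows. This is a minimal illustration of quaternion-based feature fusion under stated assumptions, not the paper's quaternion-space HDA; the function names, the (n, 4) component layout, and the use of the quaternion modulus as a fused real-valued feature are choices made for the example.

```python
import numpy as np

def quaternion_fuse(gabor, lbp, lpq, dct):
    """Combine four equal-length feature vectors into one quaternion
    feature vector q_i = a_i + b_i*i + c_i*j + d_i*k, stored as an
    (n, 4) array whose row i holds the four components of q_i."""
    feats = [np.asarray(f, dtype=float) for f in (gabor, lbp, lpq, dct)]
    n = feats[0].shape[0]
    # PCA+CCA is assumed to have reduced all four features to the same dimension
    assert all(f.shape == (n,) for f in feats), "feature dims must match"
    return np.stack(feats, axis=1)

def quaternion_modulus(q):
    """Per-dimension modulus |q_i| = sqrt(a_i^2 + b_i^2 + c_i^2 + d_i^2),
    one simple real-valued summary of the fused feature."""
    return np.sqrt((q ** 2).sum(axis=1))

# Toy example: four 3-dimensional reduced feature vectors.
q = quaternion_fuse([1, 0, 2], [0, 1, 0], [2, 2, 1], [2, 2, 2])
print(q.shape)                # (3, 4)
print(quaternion_modulus(q))  # [3. 3. 3.]
```

In this layout no information from any of the four feature types is discarded before the joint dimensionality-reduction step, which is the motivation the abstract gives for fusing in quaternion space rather than by serial concatenation.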

References

  1. Mehrabian, A.: Silent Messages: Implicit Communication of Emotions and Attitudes. Wadsworth, Belmont (1981)

  2. Abidi, M.A., Gonzalez, R.C.: Data Fusion in Robotics and Machine Intelligence. Academic Press, San Diego (1992)

  3. Yang, J., Yang, J.Y., Zhang, D., et al.: Feature fusion: parallel strategy vs. serial strategy. Pattern Recogn. 36(6), 1369–1381 (2003)

  4. Liu, C.J., Wechsler, H.: A shape- and texture-based enhanced Fisher classifier for face recognition. IEEE Trans. Image Process. 10(4), 598–608 (2001)

  5. Kotsia, I., Nikolaidis, N., Pitas, I.: Fusion of geometrical and texture information for facial expression recognition. In: 2006 IEEE International Conference on Image Processing, pp. 2649–2652. IEEE Press, Atlanta (2006)

  6. Luo, Y., Wu, C.M., Zhang, Y.: Facial expression feature extraction using hybrid PCA and LBP. J. China Univ. Posts Telecommun. 20(2), 120–124 (2013)

  7. Yang, J., Yang, J.Y., Wang, Z.Q., et al.: A novel method of combined feature extraction. Chin. J. Comput. 25(6), 570–575 (2002)

  8. Luo, F., Wang, G.Y., Yang, Y., et al.: Facial expression recognition based on improved parallel features fusion. J. Guangxi Univ. Nat. Sci. Ed. 34(5), 700–703 (2009)

  9. Bai, G., Jia, W.H., Jin, Y.: Facial expression recognition based on fusion features of LBP and Gabor with LDA. In: 2nd International Congress on Image and Signal Processing (CISP 2009), pp. 1–5. IEEE Press, Tianjin (2009)

  10. Yang, Y., Cai, S.B.: Facial expression recognition method based on two-steps dimensionality reduction and parallel feature fusion. J. Chongqing Univ. Posts Telecomm. Nat. Sci. Ed. 27(3), 377–385 (2015)

  11. Lang, F.N., Zhou, J.L., Zhong, F., et al.: Quaternion based image information parallel fusion. Acta Automatica Sinica 33(11), 1136–1143 (2008)

  12. Hamilton, W.R.: Elements of Quaternions. Longmans Green and Company, London (1866)

  13. Ruan, J.X., Yin, J.X., Chen, Q., et al.: Facial expression recognition based on Gabor wavelet transform and relevance vector machine. J. Inf. Comput. Sci. 11(1), 295–302 (2014)

  14. Verma, R., Dabbagh, M.Y.: Fast facial expression recognition based on local binary patterns. In: 2013 26th Annual IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1–4. IEEE Press, Regina (2013)

  15. Wang, Z., Ying, Z.L.: Facial expression recognition based on local phase quantization and sparse representation. In: 2012 Eighth International Conference on Natural Computation (ICNC), pp. 222–225. IEEE Press, Chongqing (2012)

  16. Kharat, G.U., Dudul, S.V.: Neural network classifier for human emotion recognition from facial expressions using discrete cosine transform. In: First International Conference on Emerging Trends in Engineering and Technology (ICETET 2008), pp. 653–658. IEEE Press, Nagpur (2008)

  17. Xanthopoulos, P., Pardalos, P.M., Trafalis, T.B.: Principal component analysis. In: Robust Data Mining, pp. 21–26. Springer, New York (2013)

  18. Hardoon, D., Szedmak, S., Shawe-Taylor, J.: Canonical correlation analysis: an overview with application to learning methods. Neural Comput. 16(12), 2639–2664 (2004)

  19. Yu, J., Tian, Q., Rui, T., et al.: Integrating discriminant and descriptive information for dimension reduction and classification. IEEE Trans. Circuits Syst. Video Technol. 17(3), 372–377 (2007)

  20. Ekman, P., Friesen, W.V.: Constants across cultures in the face and emotion. J. Pers. Soc. Psychol. 17(2), 124 (1971)

  21. Lyons, M., Akamatsu, S., Kamachi, M., et al.: Coding facial expressions with Gabor wavelets. In: Third IEEE International Conference on Automatic Face and Gesture Recognition, pp. 200–205. IEEE Press, Nara (1998)

  22. Lucey, P., Cohn, J.F., Kanade, T., et al.: The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 94–101. IEEE Press, San Francisco (2010)

Author information

Corresponding author

Correspondence to Yong Yang.

Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 2.5 International License (http://creativecommons.org/licenses/by-nc/2.5/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Yang, Y., Cai, S., Zhang, Q. (2015). Facial Expression Recognition Based on Quaternion-Space and Multi-features Fusion. In: Ciucci, D., Wang, G., Mitra, S., Wu, W.Z. (eds) Rough Sets and Knowledge Technology. RSKT 2015. Lecture Notes in Computer Science, vol 9436. Springer, Cham. https://doi.org/10.1007/978-3-319-25754-9_46

  • DOI: https://doi.org/10.1007/978-3-319-25754-9_46

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-25753-2

  • Online ISBN: 978-3-319-25754-9

  • eBook Packages: Computer Science (R0)
