Cross-Database Facial Expression Recognition via Unsupervised Domain Adaptive Dictionary Learning

  • Conference paper
  • First Online:
Neural Information Processing (ICONIP 2016)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9948)

Included in the following conference series: International Conference on Neural Information Processing (ICONIP)

Abstract

Dictionary learning based methods have achieved state-of-the-art performance in conventional facial expression recognition (FER), where the distributions of training and testing data are implicitly assumed to match. In practice this assumption is often violated, especially when the testing and training samples come from different databases, a.k.a. the cross-database FER problem. To address this problem, we propose a novel method called unsupervised domain adaptive dictionary learning (UDADL) for the unsupervised case in which all samples in the target database are unlabeled. In UDADL, to obtain more robust representations of facial expressions and to reduce the time complexity of the training and testing phases, we introduce a dictionary pair consisting of a synthesis dictionary and an analysis dictionary that mutually bridge the samples and their codes. Meanwhile, to relieve the distribution disparity between source and target samples, we integrate the learning of the unlabeled testing data into UDADL to adaptively align the mismatched distributions in an embedded space, where the geometric structures of both domains are also encouraged to be preserved. The UDADL model can be solved by an iterative optimization strategy in which each sub-problem has a closed-form solution. Extensive experiments on the Multi-PIE and BU-3DFE databases demonstrate that the proposed UDADL outperforms widely used domain adaptation methods on cross-database FER and achieves state-of-the-art performance.
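
The synthesis/analysis dictionary-pair idea at the core of UDADL follows the projective dictionary pair learning line of work (Gu et al., NIPS 2014): a synthesis dictionary D reconstructs samples from their codes, while an analysis dictionary P produces those codes by a plain linear projection, so coding a new face is a single matrix multiplication rather than an iterative sparse-coding solve. The sketch below illustrates only this basic alternating scheme with closed-form sub-updates; it is not the authors' code, the domain-alignment and graph-preservation terms that make UDADL cross-database are omitted, and the hyperparameters tau, lam, gamma and the iteration count are illustrative assumptions.

# Minimal sketch of a synthesis/analysis dictionary pair with closed-form
# alternating updates (assumed hyperparameters; UDADL's domain terms omitted).
import numpy as np

def dictionary_pair_learning(X, n_atoms=64, tau=1.0, lam=1e-2,
                             gamma=1e-4, n_iters=30, seed=0):
    """Alternately update codes A, analysis dict P, and synthesis dict D.

    X: (d, n) data matrix with samples as columns.
    Returns (D, P) with D of shape (d, n_atoms) and P of shape (n_atoms, d).
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm atoms
    P = rng.standard_normal((n_atoms, d))
    I_k = np.eye(n_atoms)
    I_d = np.eye(d)

    for _ in range(n_iters):
        # Codes: A = argmin ||X - D A||^2 + tau ||P X - A||^2  (closed form)
        A = np.linalg.solve(D.T @ D + tau * I_k, D.T @ X + tau * (P @ X))
        # Analysis dictionary: P = argmin tau ||P X - A||^2 + lam ||P||^2
        P = tau * A @ X.T @ np.linalg.inv(tau * X @ X.T + lam * I_d)
        # Synthesis dictionary: ridge least squares, then renormalize atoms
        # (a simple surrogate for the unit-norm atom constraint).
        D = X @ A.T @ np.linalg.inv(A @ A.T + gamma * I_k)
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

    return D, P

# Usage: codes for unseen (target) samples are simply P @ X_test,
# with no iterative sparse coding at test time.

Coding a target-domain sample thus costs one matrix product with P, which is what keeps the testing phase fast; UDADL's full objective additionally couples the source and target domains, which this sketch does not attempt to reproduce.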

Acknowledgement

This work was supported in part by the National Basic Research Program of China under Grant 2015CB351704, in part by the National Natural Science Foundation of China (NSFC) under Grants 61231002 and 61572009, and in part by the Natural Science Foundation of Jiangsu Province under Grant BK20130020.

Author information

Corresponding authors

Correspondence to Wenming Zheng or Zhen Cui.

Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Yan, K., Zheng, W., Cui, Z., Zong, Y. (2016). Cross-Database Facial Expression Recognition via Unsupervised Domain Adaptive Dictionary Learning. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol 9948. Springer, Cham. https://doi.org/10.1007/978-3-319-46672-9_48

  • DOI: https://doi.org/10.1007/978-3-319-46672-9_48

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46671-2

  • Online ISBN: 978-3-319-46672-9

  • eBook Packages: Computer Science, Computer Science (R0)
