Supervised Deep Canonical Correlation Analysis for Multiview Feature Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10639)

Abstract

Recently, a new feature representation method called deep canonical correlation analysis (DCCA) has been proposed, with high learning performance for multiview feature extraction from high-dimensional data. DCCA is an effective approach for learning nonlinear mappings of two sets of random variables such that the resulting DNN representations are highly correlated. However, the DCCA learning process is unsupervised and thus ignores the class label information of the training samples on the two views. To take full advantage of this class information, we propose a discriminative version of DCCA, referred to as supervised DCCA (SDCCA), for feature learning, which explicitly incorporates the class information of the samples. Compared with DCCA, SDCCA not only guarantees maximal nonlinear correlation between the two views, but also minimizes the within-class scatter of the samples. With this supervision, SDCCA can extract more discriminative features for pattern classification tasks. We evaluate SDCCA on handwriting recognition and speech recognition using the popular MNIST and XRMB datasets. Experimental results show that SDCCA achieves higher performance than several related algorithms.
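
The paper itself gives the exact objective; as a rough illustration of the idea only, the sketch below combines the standard DCCA correlation criterion (the sum of singular values of T = S11^{-1/2} S12 S22^{-1/2}, following Andrew et al.'s formulation) with a within-class scatter penalty on the two DNN outputs f(X) and g(Y). The function names, the trade-off weight lam, and the precise form of the supervision term are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def dcca_correlation(F, G, r=1e-4):
    """Total canonical correlation between two view representations.

    F, G : (n_samples, d1) and (n_samples, d2) top-level DNN outputs f(X), g(Y).
    r    : small ridge term added to the covariances for numerical stability.
    Returns the sum of singular values of T = S11^{-1/2} S12 S22^{-1/2},
    i.e. the correlation criterion maximised by DCCA.
    """
    n = F.shape[0]
    Fc, Gc = F - F.mean(0), G - G.mean(0)                # centre each view
    S12 = Fc.T @ Gc / (n - 1)                            # cross-covariance
    S11 = Fc.T @ Fc / (n - 1) + r * np.eye(F.shape[1])   # view-1 covariance
    S22 = Gc.T @ Gc / (n - 1) + r * np.eye(G.shape[1])   # view-2 covariance

    def inv_sqrt(S):                                     # symmetric S^{-1/2}
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    return np.linalg.svd(T, compute_uv=False).sum()


def within_class_scatter(F, labels):
    """Trace of the within-class scatter matrix of one view's representations."""
    labels = np.asarray(labels)
    return sum(((F[labels == c] - F[labels == c].mean(0)) ** 2).sum()
               for c in np.unique(labels))


def sdcca_objective(F, G, labels, lam=0.1):
    """Hypothetical SDCCA-style criterion: reward cross-view correlation and
    penalise within-class scatter on both views (lam is an assumed trade-off)."""
    return dcca_correlation(F, G) - lam * (within_class_scatter(F, labels) +
                                           within_class_scatter(G, labels))
```

In the actual method such a criterion would be back-propagated through the two networks during training; the sketch only evaluates it on fixed representations.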

Notes

1.

Here DNNs f and g are regarded as two nonlinear mappings; thus, f(X) and g(Y) denote the DNN outputs (top-level representations) on the two views. A toy sketch of this notation follows these notes.

2.

    http://yann.lecun.com/exdb/mnist/.

3.

    http://ttic.uchicago.edu/~klivescu/XRMB_data/full/README.
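
To make the notation of note 1 concrete, here is a minimal toy sketch in which random weights and random data stand in for trained networks and real views (every name and dimension is illustrative): it builds two small tanh MLPs f and g and produces the representations F = f(X) and G = g(Y), which could then be fed to the objective sketch shown after the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)


def make_mlp(dims):
    """Randomly initialised tanh MLP standing in for a trained DNN (f or g)."""
    Ws = [rng.standard_normal((din, dout)) / np.sqrt(din)
          for din, dout in zip(dims[:-1], dims[1:])]

    def forward(X):
        H = X
        for W in Ws:                     # apply each layer: tanh(H @ W)
            H = np.tanh(H @ W)
        return H                         # top-level representation

    return forward


# Two views of the same 1000 samples (e.g. left/right halves of MNIST images).
X = rng.standard_normal((1000, 392))
Y = rng.standard_normal((1000, 392))

f = make_mlp([392, 256, 50])             # view-1 network f
g = make_mlp([392, 256, 50])             # view-2 network g

F, G = f(X), g(Y)                        # f(X) and g(Y) from note 1
print(F.shape, G.shape)                  # (1000, 50) (1000, 50)
```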

Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grant No. 61402203. It is also supported in part by the National Natural Science Foundation of China under Grant Nos. 61472344 and 61611540347, by the Natural Science Foundation of Jiangsu Province of China under Grant Nos. BK20161338 and BK20170513, and by the Excellent Young Backbone Teacher Project.

Author information

Correspondence to Yun Li or Yun-Hao Yuan.

Copyright information

© 2017 Springer International Publishing AG

Cite this paper

Liu, Y., Li, Y., Yuan, Y.-H., Qiang, J.-P., Ruan, M., Zhang, Z. (2017). Supervised Deep Canonical Correlation Analysis for Multiview Feature Learning. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.-S. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10639. Springer, Cham. https://doi.org/10.1007/978-3-319-70136-3_61

  • DOI: https://doi.org/10.1007/978-3-319-70136-3_61

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70135-6

  • Online ISBN: 978-3-319-70136-3

  • eBook Packages: Computer Science (R0)
