
Nonnegative Constrained Graph Based Canonical Correlation Analysis for Multi-view Feature Learning

Neural Processing Letters

Abstract

Understanding and analyzing multi-view data is a fundamental topic in feature learning, with a wide range of practical applications such as image classification. Canonical correlation analysis (CCA) is a popular unsupervised method for analyzing multi-view data: it captures a common subspace of two sets of variables by maximizing the correlation between their projections. However, traditional CCA ignores the underlying geometric structure of the dataset, which is highly informative about the data distribution, and it therefore performs poorly on tasks such as classification. To address this limitation, this paper proposes an improved CCA variant, Nonnegative Constrained Graph regularized CCA (NCGCCA), with two main contributions. First, we develop a nonnegative constrained graph-based self-representation to explore the underlying group-wise structure of the dataset. Second, based on this informative representation, we derive a graph embedding scheme that incorporates the underlying structure into CCA. Image classification experiments on four face datasets, Yale, ORL, UMIST, and YaleB, demonstrate the efficacy of NCGCCA compared with existing baseline CCA methods.
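For context, the baseline that NCGCCA extends is classical CCA, which seeks projection directions maximizing the correlation between the projected views X w_x and Y w_y under unit-variance constraints. The minimal NumPy sketch below illustrates standard CCA only; the function name `cca`, the ridge term `reg`, and the matrix shapes are assumptions for illustration, and this is not the proposed NCGCCA, whose graph-regularized formulation appears in the full article.

```python
import numpy as np

def cca(X, Y, d, reg=1e-6):
    """Classical two-view CCA (the baseline NCGCCA builds on, not NCGCCA itself).

    X: (n, p) samples of view 1; Y: (n, q) samples of view 2, rows paired.
    Returns d pairs of canonical directions and the canonical correlations.
    """
    X = X - X.mean(axis=0)                        # center each view
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])  # regularized within-view covariances
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n                             # cross-view covariance
    Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
    Lx_inv, Ly_inv = np.linalg.inv(Lx), np.linalg.inv(Ly)
    # Whitened cross-covariance; its singular vectors give the canonical directions
    U, s, Vt = np.linalg.svd(Lx_inv @ Cxy @ Ly_inv.T, full_matrices=False)
    Wx = Lx_inv.T @ U[:, :d]                      # projection directions for view 1
    Wy = Ly_inv.T @ Vt.T[:, :d]                   # projection directions for view 2
    return Wx, Wy, s[:d]                          # s[:d] are the canonical correlations
```

Graph-regularized CCA variants typically add a structure-preserving penalty to this objective; NCGCCA builds that penalty from the nonnegative constrained graph self-representation described in the abstract.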



Acknowledgements

This work was supported by the National Natural Science Foundation of China [61806213, U1435222] and the National High-tech R&D Program [2015AA020108].

Author information


Corresponding author

Correspondence to Huibin Tan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Tan, H., Zhang, X., Lan, L. et al. Nonnegative Constrained Graph Based Canonical Correlation Analysis for Multi-view Feature Learning. Neural Process Lett 50, 1215–1240 (2019). https://doi.org/10.1007/s11063-018-9904-7

