Abstract:
Canonical correlation analysis (CCA) is a well-established linear subspace method for extracting hidden sources common to two or more datasets. Its benefits are documented in various applications, such as dimensionality reduction, blind source separation, classification, and data fusion. However, standard CCA does not exploit the geometry of the common sources, which may be deduced from (cross-)correlations or inferred from the data. In this context, the prior information provided by the common source is encoded here through a graph and employed as a CCA regularizer. This leads to what is termed here graph CCA (gCCA), which accounts for the graph-induced knowledge of common sources while maximizing the linear correlation between the canonical variables. When the dimensionality of the data vectors is high relative to the number of vectors, a dual formulation of the novel gCCA is also developed. Tests on two real datasets for facial image classification showcase the merits of the proposed approaches relative to their competing alternatives.
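As a rough illustration of the kind of formulation the abstract describes, the sketch below implements a graph-regularized CCA in Python, assuming the graph prior enters as a Laplacian smoothness penalty added to the standard CCA objective and solved via a generalized eigenvalue problem. The function name graph_cca, the penalty weight gamma, and this specific formulation are illustrative assumptions, not the paper's exact algorithm (in particular, the dual formulation for high-dimensional data is not shown).

```python
import numpy as np
from scipy.linalg import eigh

def graph_cca(X, Y, L, n_components=2, gamma=0.1, ridge=1e-6):
    """Minimal sketch of graph-regularized CCA (gCCA).

    X : (dx, N) view-1 data and Y : (dy, N) view-2 data, columns are samples.
    L : (N, N) graph Laplacian encoding prior similarity of the common sources.
    gamma : weight of the graph-smoothness penalty (assumed hyperparameter).
    Returns canonical directions Wx (dx, k) and Wy (dy, k).
    """
    dx, N = X.shape
    dy = Y.shape[0]

    # Sample (cross-)covariance matrices of the two views, with a small
    # ridge to keep the constraint matrices positive definite.
    Cxx = X @ X.T / N + ridge * np.eye(dx)
    Cyy = Y @ Y.T / N + ridge * np.eye(dy)
    Cxy = X @ Y.T / N

    # Graph-Laplacian smoothness penalties pulled back to the input spaces.
    Px = X @ L @ X.T / N
    Py = Y @ L @ Y.T / N

    # Assumed objective:
    #   max tr(Wx' Cxy Wy) - (gamma/2) [tr(Wx' Px Wx) + tr(Wy' Py Wy)]
    #   s.t. Wx' Cxx Wx = I, Wy' Cyy Wy = I,
    # cast as the symmetric generalized eigenproblem A v = rho B v.
    A = np.block([[-gamma * Px, Cxy],
                  [Cxy.T, -gamma * Py]])
    B = np.block([[Cxx, np.zeros((dx, dy))],
                  [np.zeros((dy, dx)), Cyy]])

    vals, vecs = eigh(A, B)                    # eigenvalues in ascending order
    top = vecs[:, ::-1][:, :n_components]      # keep the leading components
    return top[:dx], top[dx:]

# Toy usage with a path-graph Laplacian over N samples (illustrative only).
N = 100
X, Y = np.random.randn(20, N), np.random.randn(30, N)
W = np.diag(np.ones(N - 1), 1); W = W + W.T    # path-graph adjacency
L = np.diag(W.sum(1)) - W                      # unnormalized Laplacian
Wx, Wy = graph_cca(X, Y, L, n_components=3)
```

Setting gamma to zero recovers ordinary two-view CCA in this sketch, so the graph term can be read as a tunable prior on how smoothly the canonical variables vary over the assumed graph.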
Published in: 2018 IEEE Statistical Signal Processing Workshop (SSP)
Date of Conference: 10-13 June 2018
Date Added to IEEE Xplore: 30 August 2018