Abstract
Recent years have witnessed a surge of interest in learning high-dimensional correspondence, a problem of importance to both the machine learning and neural computation communities. Manifold learning–based research is considered one of the most promising directions. In this paper, by analyzing traditional methods, we summarize a new framework for high-dimensional correspondence learning. Within this framework, we also present a new approach, Local Approximation Maximum Variance Unfolding. Compared with other machine learning–based methods, it achieves higher accuracy. In addition, we describe how to use the proposed framework and methods in a concrete application, cross-system personalization (CSP). Promising experimental results on image alignment and CSP applications are presented for demonstration.
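To make the correspondence-learning setting concrete, the sketch below illustrates the general local-approximation idea rather than the paper's LAMVU algorithm: given paired training samples observed by two systems, the counterpart of a new point is estimated by reconstructing it from its nearest neighbours and transferring the reconstruction weights across systems. The function name, toy data, and parameter choices are illustrative assumptions, not part of the paper.

```python
# Minimal sketch (illustrative only, not the paper's LAMVU method):
# transfer LLE-style local reconstruction weights between two
# paired high-dimensional "systems" to predict correspondences.
import numpy as np

def local_correspondence(X_a, X_b, x_new, k=5, reg=1e-3):
    """X_a: (n, d_a) training points in system A.
       X_b: (n, d_b) corresponding points in system B.
       x_new: (d_a,) query point in system A.
       Returns an estimate of its counterpart in system B."""
    # 1. Find the k nearest neighbours of x_new in system A.
    dists = np.linalg.norm(X_a - x_new, axis=1)
    idx = np.argsort(dists)[:k]
    N = X_a[idx]                          # (k, d_a) local neighbourhood

    # 2. Solve for weights w minimising ||x_new - w^T N||^2
    #    subject to sum(w) = 1 (locally linear approximation).
    G = (N - x_new) @ (N - x_new).T       # local Gram matrix
    G += reg * np.trace(G) * np.eye(k)    # regularise for stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()

    # 3. Transfer the same weights to the paired points in system B.
    return w @ X_b[idx]

# Toy usage: two systems observing the same 1-D latent variable t.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, size=200)
X_a = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]  # system A
X_b = np.c_[t, t ** 2, np.sqrt(t)]                         # system B
print(local_correspondence(X_a, X_b, X_a[0]), "vs true", X_b[0])
```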






Additional information
We thank the National Natural Science Foundation of China (Grant Nos. 61005003 and 60975038) for its support.
Cite this article
Hou, C., Nie, F., Wang, H. et al. Learning high-dimensional correspondence via manifold learning and local approximation. Neural Comput & Applic 24, 1555–1568 (2014). https://doi.org/10.1007/s00521-013-1369-z