Abstract
Parametric PCA is a dimensionality-reduction-based metric learning method that uses the Bhattacharyya and Hellinger distances between Gaussian distributions, estimated from local patches of the KNN graph, to build a parametric covariance matrix. Later, PCA-KL, an entropic PCA variant based on the symmetrized KL-divergence (relative entropy), was proposed. In this paper, we extend this method by replacing the Gaussian distribution with a Gaussian-Markov random field model. The main advantage is the incorporation of spatial dependence through the inverse temperature parameter. A closed-form expression for the KL-divergence is derived, allowing fast computation and avoiding numerical simulations. Results on several real datasets show that the proposed method improves average classification performance compared to PCA-KL and state-of-the-art manifold learning algorithms such as t-SNE and UMAP.
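The core computation the abstract describes, a symmetrized KL-divergence between Gaussian models fitted to local k-NN patches, admits a well-known closed form in the plain Gaussian (PCA-KL) case. The sketch below illustrates that baseline computation only; the function names, the `k` and regularization choices, and the brute-force neighbor search are illustrative assumptions, not the paper's implementation (which further extends the divergence to Gaussian-Markov random fields with an inverse temperature parameter).

```python
import numpy as np

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """Closed-form KL(p || q) between two multivariate Gaussians:
    0.5 * [tr(Sq^-1 Sp) + (mq-mp)^T Sq^-1 (mq-mp) - d + ln(det Sq / det Sp)]"""
    d = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    _, logdet_p = np.linalg.slogdet(cov_p)   # log-determinants, numerically stable
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (np.trace(cov_q_inv @ cov_p)
                  + diff @ cov_q_inv @ diff
                  - d + logdet_q - logdet_p)

def sym_kl(mu_p, cov_p, mu_q, cov_q):
    """Symmetrized KL-divergence (Jeffreys divergence)."""
    return 0.5 * (gaussian_kl(mu_p, cov_p, mu_q, cov_q)
                  + gaussian_kl(mu_q, cov_q, mu_p, cov_p))

def patch_params(X, idx, k=10):
    """Estimate (mean, covariance) from the k-NN patch around sample idx
    (brute-force neighbor search, for illustration only)."""
    dists = np.linalg.norm(X - X[idx], axis=1)
    patch = X[np.argsort(dists)[:k]]         # the sample plus its k-1 neighbors
    mu = patch.mean(axis=0)
    # Small ridge term keeps the covariance invertible for small patches.
    cov = np.cov(patch, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov
```

With patch parameters in hand, the pairwise symmetrized divergences between samples can serve as the entropic dissimilarities from which the parametric covariance matrix is assembled; the GMRF extension replaces `gaussian_kl` with the paper's closed-form divergence between Gaussian-Markov random field models.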
References
Belkin, M., Niyogi, P.: Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 15(6), 1373–1396 (2003)
Besag, J.: Spatial interaction and the statistical analysis of lattice systems. J. Royal Stat. Soc. Ser. B (Methodological) 36(2), 192–236 (1974)
Cunningham, J.P., Ghahramani, Z.: Linear dimensionality reduction: survey, insights, and generalizations. J. Mach. Learn. Res. 16, 2859–2900 (2015)
Hammersley, J.M., Clifford, P.: Markov fields on finite graphs and lattices (preprint) (1971). www.statslab.cam.ac.uk/~grg/books/hammfest/hamm-cliff.pdf
Jolliffe, I.T.: Principal Component Analysis, 2nd edn. Springer, New York (2002). https://doi.org/10.1007/b98835
Levada, A.L.M.: Parametric PCA for unsupervised metric learning. Pattern Recogn. Lett. 135, 425–430 (2020)
Levada, A.L.M.: Information geometry, simulation and complexity in Gaussian random fields. Monte Carlo Methods Appl. 22, 81–107 (2016)
Levada, A.L.M.: PCA-KL: a parametric dimensionality reduction approach for unsupervised metric learning. Adv. Data Anal. Classif. 1–40 (2021). https://doi.org/10.1007/s11634-020-00434-3
Li, D., Tian, Y.: Survey and experimental study on metric learning methods. Neural Netw. 105, 447–462 (2018)
McClurkin, J.W., Optican, L.M., Richmond, B.J., Gawne, T.J.: Concurrent processing and complexity of temporally encoded neuronal messages in visual perception. Science 253, 675–677 (1991)
McInnes, L., Healy, J., Melville, J.: UMAP: uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2020)
Murase, H., Nayar, S.: Visual learning and recognition of 3D objects from appearance. Int. J. Comput. Vis. 14, 5–24 (1995)
Roweis, S., Saul, L.: Nonlinear dimensionality reduction by locally linear embedding. Science 290, 2323–2326 (2000)
Schölkopf, B., Smola, A., Müller, K.-R.: Kernel principal component analysis. In: Gerstner, W., Germond, A., Hasler, M., Nicoud, J.-D. (eds.) ICANN 1997. LNCS, vol. 1327, pp. 583–588. Springer, Heidelberg (1997). https://doi.org/10.1007/BFb0020217
Seung, H.S., Lee, D.D.: The manifold ways of perception. Science 290, 2268–2269 (2000)
Tasoulis, S., Pavlidis, N.G., Roos, T.: Nonlinear dimensionality reduction for clustering. Pattern Recogn. 107, 107508 (2020)
Tenenbaum, J.B., de Silva, V., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323 (2000)
Van der Maaten, L., Hinton, G.E.: Visualizing high-dimensional data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008)
Van Der Maaten, L., Postma, E., Van den Herik, J.: Dimensionality reduction: a comparative review. J. Mach. Learn. Res. 10, 66–71 (2009)
Wang, F., Sun, J.: Survey on distance metric learning and dimensionality reduction in data mining. Data Min. Knowl. Disc. 29(2), 534–564 (2014). https://doi.org/10.1007/s10618-014-0356-z
Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Levada, A.L.M. (2021). Improving Parametric PCA Using KL-divergence Between Gaussian-Markov Random Field Models. In: Gervasi, O., et al. Computational Science and Its Applications – ICCSA 2021. ICCSA 2021. Lecture Notes in Computer Science(), vol 12950. Springer, Cham. https://doi.org/10.1007/978-3-030-86960-1_5
DOI: https://doi.org/10.1007/978-3-030-86960-1_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86959-5
Online ISBN: 978-3-030-86960-1