Abstract
Locally-linear embedding (LLE) is a prominent dimension-reduction method that exploits the local symmetries of linear reconstructions. Recently, auto-encoders have achieved great success in learning data representations via deep neural networks (DNNs). It is therefore appealing to combine the best of both worlds by implementing LLE with a DNN. To this end, we introduce an extra fully-connected layer whose weight matrix serves as the reconstruction coefficients, i.e., it encodes the relations among samples. Consequently, the latent representation preserves the neighborhood structure of the data. Experiments on dimension reduction and classification validate the superiority of the proposed method.
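Since the abstract only sketches the architecture, the following is a minimal PyTorch sketch of the stated idea, not the authors' implementation: an extra fully-connected layer with an n-by-n weight matrix C sits between encoder and decoder, and each latent code is rebuilt as a weighted combination of the others, in the spirit of LLE's reconstruction coefficients. The class and function names (GLLEAutoencoder, glle_loss), the layer widths, and the loss weights lam and gamma are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GLLEAutoencoder(nn.Module):
    """Sketch: autoencoder with an extra fully-connected layer whose
    weight matrix C acts as LLE-style reconstruction coefficients
    among the n training samples."""

    def __init__(self, n_samples, input_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # The extra layer: an n x n coefficient matrix with no bias,
        # so each latent code is a weighted sum of the other codes.
        self.coef = nn.Linear(n_samples, n_samples, bias=False)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        # Assumes full-batch training: x holds all n_samples rows,
        # since C encodes pairwise relations among all samples.
        z = self.encoder(x)                               # (n, latent_dim)
        n = z.shape[0]
        # Zero the diagonal so no sample reconstructs itself, as in LLE.
        C = self.coef.weight * (1.0 - torch.eye(n, device=z.device))
        z_hat = C @ z                                     # z_i ~ sum_j c_ij z_j
        return self.decoder(z_hat), z, z_hat, C

def glle_loss(x, x_hat, z, z_hat, C, lam=1.0, gamma=0.1):
    rec = ((x - x_hat) ** 2).mean()     # autoencoder reconstruction
    nbr = ((z - z_hat) ** 2).mean()     # neighborhood-preserving term
    reg = (C ** 2).sum()                # regularizer on the coefficients
    return rec + lam * nbr + gamma * reg
```

Under these assumptions, training would proceed full-batch with a standard optimizer such as Adam; after training, z provides the low-dimensional embedding and the learned C could plausibly serve as an affinity graph over the samples.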
Acknowledgments
This work was supported in part by grants from the Natural Science Foundation of China (No. 61806045), the National Key R&D Program of China (No. 2018YFC0807500), the Fundamental Research Fund for the Central Universities under Project ZYGX2019Z015, the Sichuan Science and Technology Program (Nos. 2020YFS0057, 2019YFG0202), and the Ministry of Science and Technology of Sichuan Province Program (Nos. 2018GZDZX0048, 20ZDYF0343, 2018GZDZX0014, 2018GZDZX0034).
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
Cite this paper
Lu, X., Kang, Z., Tang, J., Xie, S., Su, Y. (2020). Generalized Locally-Linear Embedding: A Neural Network Implementation. In: Zhang, H., Zhang, Z., Wu, Z., Hao, T. (eds) Neural Computing for Advanced Applications. NCAA 2020. Communications in Computer and Information Science, vol 1265. Springer, Singapore. https://doi.org/10.1007/978-981-15-7670-6_9
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-7669-0
Online ISBN: 978-981-15-7670-6