Abstract
In this paper, we propose a novel manifold learning method named topology learning embedding (TLE). The key issue in manifold learning is modeling the structure of the data. Instead of blindly computing the relations between every pair of available data points, TLE learns the data's internal structure in a smarter way: it rapidly and incrementally constructs a topology-preserving network from online input data; then, with an Isomap-based embedding strategy, it embeds out-of-sample data efficiently. Experiments on synthetic data and real-world handwritten digit data demonstrate that TLE is a promising method for dimensionality reduction.
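The pipeline described above can be sketched in two stages: an incremental, competitive-Hebbian construction of a topology-preserving graph from a data stream, followed by an Isomap-style embedding (geodesic distances on the graph plus classical MDS). The sketch below is an illustrative reconstruction under those general ideas, not the authors' actual algorithm; the insertion threshold, the spiral data, and all function names are assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def build_topology_network(stream, threshold):
    """Incrementally build a topology-preserving graph with a competitive
    Hebbian rule: link the nearest existing node to each input, and insert
    the input as a new node when it is far from the whole network."""
    nodes = [stream[0], stream[1]]
    edges = set()
    for x in stream[2:]:
        d = [np.linalg.norm(x - n) for n in nodes]
        i1, i2 = np.argsort(d)[:2]
        if d[i1] > threshold:                     # input far from network:
            nodes.append(x)                       # grow with a new node
            edges.add((i1, len(nodes) - 1))       # link it to its winner
        else:
            edges.add((min(i1, i2), max(i1, i2))) # Hebbian edge between winners
    return np.array(nodes), edges

def isomap_embed(nodes, edges, dim=2):
    """Embed network nodes via geodesic distances + classical MDS (Isomap)."""
    n = len(nodes)
    W = np.full((n, n), np.inf)                   # inf = no edge (dense input)
    np.fill_diagonal(W, 0.0)
    for i, j in edges:
        W[i, j] = W[j, i] = np.linalg.norm(nodes[i] - nodes[j])
    G = shortest_path(W, method="D")              # geodesic distance matrix
    G[np.isinf(G)] = G[np.isfinite(G)].max()      # patch disconnected parts
    # classical MDS: double-center squared distances, take top eigenvectors
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Usage on a 3-D spiral stream: the network condenses 200 inputs into far
# fewer nodes, which are then unrolled into 2-D.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3 * np.pi, 200))
stream = np.c_[np.cos(t), np.sin(t), t]
nodes, edges = build_topology_network(stream, threshold=0.5)
Y = isomap_embed(nodes, edges, dim=2)
```

Because nodes are inserted only when an input lies farther than the threshold from the current network, the graph's size is governed by the data's geometry rather than the stream length, which is what makes the subsequent shortest-path and eigendecomposition steps cheap enough for out-of-sample use.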
Acknowledgements
This work is supported in part by the National Natural Science Foundation of China under Grant Nos. 61373130, 61375064, and 61373001, and by Jiangsu NSF Grant BK20141319.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Zhu, T., Shen, F., Zhao, J., Liang, Y. (2017). Topology Learning Embedding: A Fast and Incremental Method for Manifold Learning. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol. 10634. Springer, Cham. https://doi.org/10.1007/978-3-319-70087-8_5
Print ISBN: 978-3-319-70086-1
Online ISBN: 978-3-319-70087-8