Unsupervised Local Linear Preserving Manifold Reduction with Uncertainty Pretraining for Image Recognition

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10528)

Abstract

Manifold learning is an efficient dimensionality reduction technique, but in real applications it is difficult to learn its parameters from a limited number of supervised samples. Our proposed algorithm builds a sparse representation on a locality-preserving linear manifold dimensionality reduction method and can solve the problem of unsupervised clustering. Labeled data are used only in the final classifier rather than in the manifold reduction itself, which yields an unsupervised manifold reduction algorithm. As a further remedy for limited data, we propose a novel pretraining scheme that uses Bayesian networks to construct the initial parameters for manifold learning and is robust to uncertain perturbations of the data. We then validate the algorithm in experiments and finally apply it to real-world data, where it performs better on noisy input with limited labeled data.
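
To make the locality-preserving reduction step concrete, the following is a minimal NumPy/SciPy sketch of a generic locality-preserving projection (LPP): a heat-kernel weighted k-nearest-neighbour graph is built over the data, and the linear projection directions come from the associated generalized eigenvalue problem. This is an illustration of the underlying technique only, not the authors' algorithm; the sparse representation and the Bayesian-network pretraining described above are omitted, and the function name lpp and the hyperparameters (k, sigma, n_components) are assumptions made for this sketch.

    # Illustrative sketch only: generic locality-preserving projection (LPP),
    # not the paper's full method (no sparse coding, no Bayesian pretraining).
    import numpy as np
    from scipy.linalg import eigh
    from scipy.spatial.distance import cdist

    def lpp(X, n_components=2, k=5, sigma=1.0):
        n = X.shape[0]
        dists = cdist(X, X)                          # pairwise Euclidean distances
        W = np.zeros((n, n))
        for i in range(n):                           # heat-kernel weights on the k-NN graph
            idx = np.argsort(dists[i])[1:k + 1]      # k nearest neighbours, skipping the point itself
            W[i, idx] = np.exp(-dists[i, idx] ** 2 / (2.0 * sigma ** 2))
        W = np.maximum(W, W.T)                       # symmetrise the adjacency graph
        D = np.diag(W.sum(axis=1))                   # degree matrix
        L = D - W                                    # graph Laplacian
        A = X.T @ L @ X                              # generalized eigenproblem A a = lambda B a
        B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # small ridge keeps B positive definite
        _, vecs = eigh(A, B)                         # eigenvalues returned in ascending order
        return vecs[:, :n_components]                # smallest eigenvectors = projection matrix

    # Usage: embed unlabeled (possibly noisy) data into two dimensions.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    Y = X @ lpp(X, n_components=2, k=8)              # low-dimensional representation

Because both the neighbourhood graph and the eigenproblem are built from the data alone, the projection itself needs no labels, which matches the unsupervised setting described in the abstract.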

Author information

Corresponding author

Correspondence to Qianwen Yang.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Yang, Q., Sun, F. (2017). Unsupervised Local Linear Preserving Manifold Reduction with Uncertainty Pretraining for Image Recognition. In: Liu, M., Chen, H., Vincze, M. (eds) Computer Vision Systems. ICVS 2017. Lecture Notes in Computer Science, vol 10528. Springer, Cham. https://doi.org/10.1007/978-3-319-68345-4_47

  • DOI: https://doi.org/10.1007/978-3-319-68345-4_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-68344-7

  • Online ISBN: 978-3-319-68345-4

  • eBook Packages: Computer Science (R0)
