Graph Regularized Sparse Autoencoders with Nonnegativity Constraints

Abstract

Unsupervised feature learning with deep networks has been widely studied in recent years. Among these networks, deep autoencoders have shown good performance in discovering the hidden geometric structure of the original data. Both nonnegativity and graph constraints have proven effective for representing the intrinsic structure of data in a high-dimensional ambient space. This paper combines nonnegativity and graph constraints to recover the geometrical information intrinsic to high-dimensional data and preserve it in a dimensionality-reduced space. In the experiments, we test the proposed networks on several standard image data sets, and the results demonstrate that they outperform existing methods.
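As a rough illustration of the kind of objective the abstract describes, the sketch below combines a sparse autoencoder reconstruction loss with a quadratic penalty on negative weights and a graph-Laplacian term on the hidden codes. It is a minimal sketch under stated assumptions, not the authors' exact formulation: the weighting symbols (alpha, beta, lam, rho), the k-NN graph construction, and all function names are illustrative choices.

```python
# Minimal sketch: sparse autoencoder loss + nonnegativity penalty + graph regularizer.
# All hyperparameter names and the k-NN affinity are assumptions for illustration only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian L = D - W from a binary k-NN affinity."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]        # skip the point itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                       # symmetrize the affinity
    return np.diag(W.sum(axis=1)) - W

def gsae_nn_loss(X, W1, b1, W2, b2, L, alpha=1e-3, beta=1e-2, lam=1e-1, rho=0.05):
    """Reconstruction + KL sparsity + nonnegativity penalty + graph regularizer."""
    H = sigmoid(X @ W1 + b1)                     # hidden codes, one row per sample
    Xhat = sigmoid(H @ W2 + b2)                  # reconstruction of the input
    recon = 0.5 * np.mean(np.sum((Xhat - X) ** 2, axis=1))
    # KL-divergence sparsity on the mean hidden activation
    rho_hat = np.clip(H.mean(axis=0), 1e-8, 1 - 1e-8)
    sparsity = np.sum(rho * np.log(rho / rho_hat)
                      + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    # quadratic penalty on the negative parts of the weights only
    nonneg = 0.5 * (np.sum(np.minimum(W1, 0) ** 2) + np.sum(np.minimum(W2, 0) ** 2))
    # graph term tr(H^T L H): pulls codes of neighboring samples together
    graph = np.trace(H.T @ L @ H) / X.shape[0]
    return recon + beta * sparsity + alpha * nonneg + lam * graph

# Toy usage on random data, just to show the pieces fit together.
rng = np.random.default_rng(0)
X = rng.random((20, 8))
W1, b1 = 0.1 * rng.standard_normal((8, 4)), np.zeros(4)
W2, b2 = 0.1 * rng.standard_normal((4, 8)), np.zeros(8)
print(gsae_nn_loss(X, W1, b1, W2, b2, knn_laplacian(X)))
```

In practice such an objective would be minimized with a gradient-based optimizer, with the nonnegativity penalty driving the learned weights toward a parts-based representation and the Laplacian term preserving local neighborhood structure in the reduced space.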



Acknowledgements

This work was supported by the National Natural Science Foundation of China (81671773), the Fundamental Research Funds for the Central Universities (N171903004), and the Natural Science Foundation of Liaoning Province of China (20170540321).

Author information

Corresponding author

Correspondence to Yueyang Teng.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Teng, Y., Liu, Y., Yang, J. et al. Graph Regularized Sparse Autoencoders with Nonnegativity Constraints. Neural Process Lett 50, 247–262 (2019). https://doi.org/10.1007/s11063-019-10039-3

