Abstract
Unsupervised feature learning with deep networks has been widely studied in recent years. Among these networks, deep autoencoders have shown solid performance in discovering the hidden geometric structure of the original data. Both nonnegativity and graph constraints have proven effective at capturing the intrinsic structure of data in a high-dimensional ambient space. This paper combines nonnegativity and graph constraints so that the geometric information intrinsic to high-dimensional data is preserved in a dimensionality-reduced space. In the experiments, we test the proposed networks on several standard image data sets, and the results demonstrate that they outperform existing methods.
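To make the combination of constraints concrete, the following is a minimal sketch of an autoencoder objective that adds the three ingredients the abstract names: a KL-divergence sparsity penalty, a quadratic penalty on negative weight entries, and a graph Laplacian regularizer on the hidden codes. The function name, hyperparameter names, and default values are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def combined_loss(X, W1, b1, W2, b2, L, beta=3.0, rho=0.05, alpha=0.003, lam=0.1):
    """Hypothetical sketch: sparse autoencoder loss with nonnegativity and
    graph regularization. X is (features, samples); L is the graph Laplacian
    of a neighborhood affinity graph over the samples (assumed precomputed)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    H = sigmoid(W1 @ X + b1)       # hidden representation, one column per sample
    X_hat = sigmoid(W2 @ H + b2)   # reconstruction of the input

    n = X.shape[1]
    recon = 0.5 * np.sum((X_hat - X) ** 2) / n

    # KL-divergence sparsity penalty on the mean hidden activation
    rho_hat = np.clip(H.mean(axis=1), 1e-8, 1 - 1e-8)
    sparsity = np.sum(rho * np.log(rho / rho_hat)
                      + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

    # quadratic penalty on negative weight entries, pushing toward part-based filters
    nonneg = 0.5 * (np.sum(np.minimum(W1, 0) ** 2) + np.sum(np.minimum(W2, 0) ** 2))

    # graph regularizer Tr(H L H^T): nearby inputs should receive nearby codes
    graph = np.trace(H @ L @ H.T) / n

    return recon + beta * sparsity + alpha * nonneg + lam * graph
```

In such a sketch, the Laplacian L would typically be built from a k-nearest-neighbor affinity matrix over the training samples, and the weighted sum of terms would be minimized with a gradient-based optimizer.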
Acknowledgements
This work was supported by the National Natural Science Foundation of China (81671773), the Fundamental Research Funds for the Central Universities (N171903004), and the Natural Science Foundation of Liaoning Province of China (20170540321).
Cite this article
Teng, Y., Liu, Y., Yang, J. et al. Graph Regularized Sparse Autoencoders with Nonnegativity Constraints. Neural Process Lett 50, 247–262 (2019). https://doi.org/10.1007/s11063-019-10039-3