
Research on denoising sparse autoencoder

Original Article · International Journal of Machine Learning and Cybernetics

Abstract

Autoencoders can adaptively learn the structure of data and represent it efficiently. These properties make them well suited to large volumes and varieties of data, and they avoid both the high cost of hand-designed features and the poor generalization such features often exhibit. Moreover, using autoencoders for feature extraction in deep learning can improve classification accuracy. However, autoencoders suffer from poor robustness and are prone to overfitting. To extract useful features while improving robustness and overcoming overfitting, we study the denoising sparse autoencoder, obtained by adding a corrupting operation and a sparsity constraint to the traditional autoencoder. The results suggest that the autoencoders discussed in this paper are closely related and that the studied model extracts interesting features that reconstruct the original data well. All results also point to the proposed autoencoder as a promising building block for deep models.
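
To make the two modifications named above concrete, here is a minimal NumPy sketch of a denoising sparse autoencoder: the input is corrupted with masking noise before encoding, the reconstruction is scored against the clean input, and a KL-divergence penalty drives the average hidden activation toward a small target. The class name, hyperparameter values, and the choice of masking noise are our illustrative assumptions, not settings taken from the paper.

```python
# Minimal sketch of a denoising sparse autoencoder (illustrative, not the
# paper's implementation). Assumes masking-noise corruption, sigmoid units,
# a KL-divergence sparsity penalty, and L2 weight decay.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingSparseAE:
    def __init__(self, n_vis, n_hid, rho=0.05, beta=3.0, lam=1e-4, noise=0.3):
        self.W1 = rng.normal(0, 0.01, (n_vis, n_hid))  # encoder weights
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0, 0.01, (n_hid, n_vis))  # decoder weights
        self.b2 = np.zeros(n_vis)
        self.rho, self.beta, self.lam, self.noise = rho, beta, lam, noise

    def step(self, x, lr=0.1):
        m = x.shape[0]
        # Corrupting operation: randomly zero a fraction `noise` of the inputs.
        x_tilde = x * (rng.random(x.shape) > self.noise)
        h = sigmoid(x_tilde @ self.W1 + self.b1)   # hidden code
        y = sigmoid(h @ self.W2 + self.b2)         # reconstruction
        rho_hat = h.mean(axis=0)                   # mean activation per hidden unit

        # Loss: squared error against the *clean* input, plus KL sparsity
        # penalty pushing rho_hat toward rho, plus weight decay.
        kl = np.sum(self.rho * np.log(self.rho / rho_hat)
                    + (1 - self.rho) * np.log((1 - self.rho) / (1 - rho_hat)))
        loss = (0.5 / m) * np.sum((y - x) ** 2) + self.beta * kl \
               + 0.5 * self.lam * (np.sum(self.W1**2) + np.sum(self.W2**2))

        # Backpropagation through the sigmoid layers.
        dy = (y - x) / m * y * (1 - y)
        dh = dy @ self.W2.T
        dh += self.beta / m * (-(self.rho / rho_hat)
                               + (1 - self.rho) / (1 - rho_hat))
        dh *= h * (1 - h)

        self.W2 -= lr * (h.T @ dy + self.lam * self.W2)
        self.b2 -= lr * dy.sum(axis=0)
        self.W1 -= lr * (x_tilde.T @ dh + self.lam * self.W1)
        self.b1 -= lr * dh.sum(axis=0)
        return loss

# Toy usage on random binary "images".
x = (rng.random((256, 64)) > 0.5).astype(float)
ae = DenoisingSparseAE(n_vis=64, n_hid=32)
for t in range(200):
    loss = ae.step(x)
print("final loss:", loss)
```

Setting `noise=0` and `beta=0` recovers a plain autoencoder, while keeping only one of the two terms yields a denoising or a sparse autoencoder respectively, which illustrates the close relation among the variants the abstract mentions.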


Notes

  1. MNIST-Rotation dataset: http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/MnistVariations.

  2. Schmidt M (2005) minFunc: unconstrained differentiable multivariate optimization in Matlab. http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html (a Python analogue is sketched below).
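
Note 2's minFunc is a MATLAB package for minimizing a differentiable cost given a function that returns the loss and its gradient over a flattened parameter vector. For readers reproducing this kind of training setup without MATLAB, a rough analogue (our substitution, not the paper's) is batch L-BFGS via scipy.optimize.minimize; the toy quadratic objective below is only a placeholder for an autoencoder cost.

```python
# Hypothetical stand-in for minFunc: batch L-BFGS on a flattened parameter
# vector. In practice the objective would be the autoencoder's loss and
# gradient over flattened W1, b1, W2, b2.
import numpy as np
from scipy.optimize import minimize

def cost_and_grad(theta):
    # Toy differentiable objective: a quadratic bowl centered at 1.
    grad = 2.0 * (theta - 1.0)
    return np.sum((theta - 1.0) ** 2), grad

theta0 = np.zeros(10)
res = minimize(cost_and_grad, theta0, jac=True, method="L-BFGS-B",
               options={"maxiter": 400})
print(res.x)  # converges to all ones
```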


Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 61379101), the Priority Academic Program Development of Jiangsu Higher Education Institutions, and the Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology.

Author information


Corresponding author

Correspondence to Shifei Ding.


About this article


Cite this article

Meng, L., Ding, S. & Xue, Y. Research on denoising sparse autoencoder. Int. J. Mach. Learn. & Cyber. 8, 1719–1729 (2017). https://doi.org/10.1007/s13042-016-0550-y

