
Visualizing and Understanding Nonnegativity Constrained Sparse Autoencoder in Deep Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9692)

Abstract

In this paper, we demonstrate how complex deep learning structures can be made understandable to humans by decomposing them into isolated but interpretable concepts, using the architecture of the Nonnegativity Constrained Autoencoder (NCAE). We show that constraining most of the network weights to be nonnegative, via both \(L_1\) and \(L_2\) nonnegativity penalization, yields a more interpretable structure with only a minor deterioration in classification accuracy. The proposed approach also produces sparser feature extraction and additional sparsification of the output layer. The concept is illustrated using the MNIST and NORB datasets.
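The nonnegativity penalization described above can be sketched as a regularization term that charges only the negative entries of a weight matrix, combining an \(L_1\) and an \(L_2\) component. The following is a minimal illustrative sketch, not the authors' implementation; the coefficient values `alpha` and `beta` and the function name are assumptions for demonstration only.

```python
import numpy as np

def nonneg_penalty(W, alpha=0.003, beta=0.003):
    """Hypothetical L1 + L2 nonnegativity penalty on a weight matrix.

    Positive weights incur no cost; negative weights are penalized,
    so gradient descent pushes them toward zero or positive values.
    """
    neg = np.minimum(W, 0.0)                    # keep only negative entries
    l1 = alpha * np.abs(neg).sum()              # L1 term on negative weights
    l2 = 0.5 * beta * np.square(neg).sum()      # L2 term on negative weights
    return l1 + l2

# A toy weight matrix: only the negative entries contribute to the penalty.
W = np.array([[0.5, -0.2],
              [-0.1, 0.3]])
print(nonneg_penalty(W))
```

In training, such a term would be added to the autoencoder's reconstruction-plus-sparsity objective, so that most learned weights end up nonnegative and the features become part-based and easier to interpret.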



Author information

Corresponding author: Jacek M. Zurada.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Ayinde, B.O., Hosseini-Asl, E., Zurada, J.M. (2016). Visualizing and Understanding Nonnegativity Constrained Sparse Autoencoder in Deep Learning. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L., Zurada, J. (eds) Artificial Intelligence and Soft Computing. ICAISC 2016. Lecture Notes in Computer Science, vol 9692. Springer, Cham. https://doi.org/10.1007/978-3-319-39378-0_1


  • DOI: https://doi.org/10.1007/978-3-319-39378-0_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-39377-3

  • Online ISBN: 978-3-319-39378-0

  • eBook Packages: Computer Science (R0)
