Abstract
Sparsity has been a concept of interest in machine learning for many years. In deep learning, sparse solutions play a crucial role in obtaining robust and discriminative features. In this paper, we study a new regularization term for sparse hidden unit activation in the context of the Restricted Boltzmann Machine (RBM). Our proposition is based on the symmetric Kullback-Leibler divergence, applied to compare the actual and the desired distribution over the active hidden units. We compare our method against two other sparsity-enforcing regularization terms by evaluating the empirical classification error on two datasets: (i) MNIST for image classification and (ii) 20-newsgroups for document classification.
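The idea of a symmetric KL sparsity penalty can be sketched as follows. This is a hypothetical illustration, not the paper's exact formulation: we treat each hidden unit's mean activation over a mini-batch as a Bernoulli parameter and penalize its symmetric KL divergence from a desired sparsity level (the function name, the per-unit Bernoulli assumption, and the `target` parameter are assumptions for this sketch).

```python
import numpy as np

def symmetric_kl_sparsity(hidden_probs, target=0.1, eps=1e-8):
    """Symmetric KL penalty between each hidden unit's mean activation
    (treated as a Bernoulli parameter) and a desired sparsity level.

    hidden_probs: array of shape (batch_size, num_hidden) holding
    P(h_j = 1 | v) for each example; target: desired activation rate.
    """
    # Mean activation per hidden unit, clipped away from {0, 1}
    # so the logarithms below stay finite.
    q = np.clip(hidden_probs.mean(axis=0), eps, 1.0 - eps)
    p = target
    # KL(p || q) and KL(q || p) for Bernoulli distributions.
    kl_pq = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
    kl_qp = q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))
    # Symmetric KL: sum of both directions, summed over hidden units.
    return np.sum(kl_pq + kl_qp)
```

The penalty is zero when every unit's mean activation equals `target` and grows as activations drift away in either direction; its gradient with respect to the RBM parameters would be added to the usual contrastive-divergence update, scaled by a regularization coefficient.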
Notes
1. In [5] such an approach is called selectivity.
2. A_{i·} denotes the i-th row of matrix A, A_{·j} denotes the j-th column of matrix A, and A_{ij} is the (i, j)-th element of matrix A.
3.
4. In the experiments we used the small version of the original dataset: http://www.cs.nyu.edu/~roweis/data.html.
References
Bengio, Y.: Learning Deep Architectures for AI. Foundations and Trends® in Machine Learning 2(1):1-127. (2009).
Bishop, C.M.: Pattern Recognition and Machine Learning. Springer New York. (2006).
Cho, K., Ilin, A., & Raiko, T.: Tikhonov-Type regularization for Restricted Boltzmann Machines. In Artificial Neural Networks and Machine Learning (ICANN 2012). pp. 81-88. Springer Berlin Heidelberg. (2012).
Glorot, X., Bordes, A., & Bengio, Y.: Deep Sparse Rectifier Networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS 2011). Journal of Machine Learning Research Workshop & Conference Proceedings 15:315-323. (2011).
Goh, H., Thome, N., & Cord, M.: Biasing restricted Boltzmann machines to manipulate latent selectivity and sparsity. In NIPS workshop on deep learning and unsupervised feature learning. (2010).
Hinton, G. E.: Training products of experts by minimizing contrastive divergence. Neural Comput 14:1771-1800. (2002)
Hinton, G. E.: A practical guide to training Restricted Boltzmann Machines. In Neural Networks: Tricks of the Trade. pp. 599-619. Springer Berlin Heidelberg. (2012).
Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R.R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580. (2012).
Larochelle, H., & Bengio, Y.: Classification using discriminative restricted Boltzmann machines. In Proceedings of the 25th International Conference on Machine learning (ICML 2008). pp. 536-543. (2008, July).
Lee, H., Ekanadham, C., & Ng, A.: Sparse deep belief net model for visual area V2. In Advances in Neural Information Processing Systems (NIPS 2007). pp. 873-880. (2007).
Le, Q.V., Ngiam, J., Coates, A., Lahiri, A., Prochnow, B., & Ng, A.: On optimization methods for deep learning. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011). pp. 265-272. (2011).
Le Roux, N., & Bengio, Y.: Representational power of Restricted Boltzmann Machines and deep belief networks. Neural Computation 20(6):1631-1649. (2008).
Lewicki, M. S., & Sejnowski, T. J.: Learning overcomplete representations. Neural Computation 12(2):337-365. (2000).
Marlin, B.M., Swersky, K., Chen, B., & Freitas, N.D.: Inductive principles for restricted Boltzmann machine learning. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS 2010). pp. 509-516. (2010).
Martens, J., Chattopadhya, A., Pitassi, T., & Zemel, R.: On the Expressive Power of Restricted Boltzmann Machines. In Advances in Neural Information Processing Systems (NIPS 2013). pp. 2877-2885. (2013).
Nair, V., & Hinton, G. E.: 3D object recognition with deep belief nets. In Advances in Neural Information Processing Systems (NIPS 2009). pp. 1339-1347. (2009).
Nair, V., & Hinton, G. E.: Rectified linear units improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning (ICML 2010). pp. 807-814. (2010).
Smolensky, P.: Information processing in dynamical systems: foundations of harmony theory. In: Parallel distributed processing: explorations in the microstructure of cognition, Vol. 1: Foundations, pp. 194-281. MIT Press, Cambridge, MA, USA. (1986).
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
Tomczak, J.M., Gonczarek, A. (2015). Sparse hidden units activation in Restricted Boltzmann Machine. In: Selvaraj, H., Zydek, D., Chmaj, G. (eds) Progress in Systems Engineering. Advances in Intelligent Systems and Computing, vol 366. Springer, Cham. https://doi.org/10.1007/978-3-319-08422-0_27
Print ISBN: 978-3-319-08421-3
Online ISBN: 978-3-319-08422-0