Abstract
Density models are fundamental in machine learning and find widespread application in practical cognitive modeling tasks and learning problems. In this work, we introduce a novel deep density model, referred to as deep mixtures of factor analyzers with common loadings (DMCFA), together with an efficient greedy layer-wise unsupervised learning algorithm. In each layer, the model employs a mixture of factor analyzers whose components share a common loading matrix. The common loading can be regarded as a feature selection or reduction matrix, which makes the new model more physically interpretable. Importantly, sharing the loading across components substantially reduces both the number of free parameters and the computational complexity. Consequently, DMCFA performs inference and learning on a dramatically more succinct model, while retaining flexibility in estimating the data density by placing Gaussian priors on the latent factors. Our model is evaluated on five real datasets against three competitive baselines, namely mixtures of factor analyzers (MFA), MFA with common loadings (MCFA), and deep mixtures of factor analyzers (DMFA), as well as their collapsed counterparts. The results demonstrate the superiority of the proposed model in density estimation, clustering, and generation tasks.
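To make the shared-loading construction concrete, the following is a minimal sketch of the density of a single common-loading layer (an MCFA), in which every component \(c\) reuses one loading matrix \(A\): component \(c\) is Gaussian with mean \(A\xi _c\) and covariance \(A{\Omega }_c A^{\top } + \Psi\). All parameter names and shapes here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def mcfa_log_density(X, pi, A, xi, Omega, Psi):
    """Log-density under a mixture of factor analyzers whose C components
    share one common loading matrix A (hypothetical sketch).

    X     : (n, p)    observations
    pi    : (C,)      mixing weights
    A     : (p, q)    common loading (feature selection/reduction matrix)
    xi    : (C, q)    latent component means
    Omega : (C, q, q) latent component covariances
    Psi   : (p,)      diagonal noise variances
    """
    n, C = X.shape[0], len(pi)
    comp = np.empty((n, C))
    for c in range(C):
        mean = A @ xi[c]                          # component mean in data space
        cov = A @ Omega[c] @ A.T + np.diag(Psi)   # low-rank + diagonal covariance
        comp[:, c] = multivariate_normal(mean, cov).logpdf(X)
    # log p(x) = logsumexp_c [ log pi_c + log N(x | A xi_c, A Omega_c A^T + Psi) ]
    return logsumexp(comp + np.log(pi), axis=1)
```

Because the \(p \times q\) loading is stored once rather than once per component, the loading parameters drop from roughly \(Cpq\) entries to \(pq\), which is the parameter saving the abstract refers to.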




Notes
The greedy layer-wise algorithm trains a generative model with many layers of hidden variables.
Each component of the first layer can be divided into \(M_c\) sub-components. The number of sub-components need not be the same for every first-layer component.
The superscript indicates the layer to which a variable belongs. Since, in the second layer, the sub-components corresponding to a first-layer component share a common loading and a common variance of the independent noise, \(\mathbf {A}_{c}^{(2)}\) and \(\mathbf {{\Psi }}_{c}^{(2)}\) are marked with the subscript \(c\). \(d\) denotes the subspace dimensionality of the second layer, where \(d < q\).
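As a concrete illustration of this two-layer structure, the sketch below performs ancestral sampling from a hypothetical two-layer common-loading model: one loading \(\mathbf {A}^{(1)}\) is shared by all first-layer components, while \(\mathbf {A}_{c}^{(2)}\) and \(\mathbf {{\Psi }}_{c}^{(2)}\) are shared by the \(M_c\) sub-components of component \(c\). All sizes and parameter values are illustrative assumptions, and the second-layer latent covariances are taken as identity for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): p observed dims, q first-layer factors,
# d second-layer factors with d < q, C components, Mc sub-components each.
p, q, d, C, Mc = 20, 8, 3, 4, 2

# Layer 1: one loading A1 shared by all C components (the "common loading").
A1   = rng.standard_normal((p, q))
Psi1 = np.full(p, 0.1)                      # diagonal noise variances
pi1  = np.full(C, 1.0 / C)                  # component weights

# Layer 2: loading A2[c] and noise Psi2[c] shared by the Mc sub-components of c.
A2   = rng.standard_normal((C, q, d))
Psi2 = np.full((C, q), 0.1)
pi2  = np.full((C, Mc), 1.0 / Mc)           # sub-component weights within c
xi2  = rng.standard_normal((C, Mc, d))      # sub-component latent means

def sample(n):
    """Draw n observations by ancestral sampling through both layers."""
    X = np.empty((n, p))
    for i in range(n):
        c = rng.choice(C, p=pi1)                     # pick first-layer component
        s = rng.choice(Mc, p=pi2[c])                 # pick sub-component of c
        z2 = xi2[c, s] + rng.standard_normal(d)      # layer-2 factors (identity cov)
        z1 = A2[c] @ z2 + rng.standard_normal(q) * np.sqrt(Psi2[c])
        X[i] = A1 @ z1 + rng.standard_normal(p) * np.sqrt(Psi1)
    return X

X = sample(500)   # 500 synthetic draws from the two-layer model
```

Note that \(d < q < p\), so each successive layer models the data in a progressively lower-dimensional subspace.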
Funding
The work reported in this paper was partially supported by the following: National Natural Science Foundation of China (NSFC) under grant no. 61473236; Natural Science Fund for Colleges and Universities in Jiangsu Province under grant no. 17KJD520010; Suzhou Science and Technology Program under grant nos. SYG201712 and SZS201613; Jiangsu University Natural Science Research Programme under grant no. 17KJB520041; and Key Program Special Fund in XJTLU under grant no. KSF-A-01.
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Ethical Approval
This article does not contain any studies with human participants performed by any of the authors.
Cite this article
Yang, X., Huang, K., Zhang, R. et al. A Novel Deep Density Model for Unsupervised Learning. Cogn Comput 11, 778–788 (2019). https://doi.org/10.1007/s12559-018-9566-9