Abstract
Nonnegative matrix factorization (NMF) is a widely used paradigm for feature representation and dimensionality reduction. However, the performance of the NMF model is limited by two critical and challenging problems. One is that the original NMF does not consider the distribution information of data and parameters, resulting in inaccurate representations. The other is the high computational complexity of online processing. Bayesian approaches have been proposed to address the former problem. However, most existing Bayesian NMF models use an exponential prior, which guarantees only the nonnegativity of the parameters without fully exploiting their prior information. We therefore construct a new Bayesian NMF model based on a Gaussian likelihood and a truncated Gaussian prior, called the truncated Gaussian-based NMF (TG-NMF) model, in which the truncated Gaussian prior prevents overfitting while ensuring nonnegativity. Furthermore, Bayesian inference-based incremental learning is introduced to reduce the high computational complexity of TG-NMF; this model is called TG-INMF. We adopt variational Bayesian inference to estimate all parameters of TG-NMF and TG-INMF. Experiments on genetic data-based tumor recognition demonstrate that our models are competitive with existing methods for classification problems.
Acknowledgment
The authors would like to thank the journal editor and anonymous reviewers for their constructive comments. This work was supported by the National Natural Science Foundation of China (grant numbers 11701144, 1182002), the Open Fund of the Key Laboratory of Intelligence Perception and Image Understanding of the Ministry of Education, and the Program for Science and Technology Development of Henan Province (grant number 212102310305).
Ethics declarations
Conflict of Interests
The authors declare that they have no conflicts of interest.
Appendix
First, the optimization problem of the TG-NMF model is solved by the variational Bayesian inference algorithm. The variational posterior distributions are derived from (15), and the update rules are then obtained from those distributions. The derivations for Wir, Hrj, and τ are given below. For a random variable x and a function f of x, we write \(\widetilde {f(x)}\) as shorthand for \(E_{q}[f(x)]\).
Following (15), the variational posterior distribution of Wir is a truncated Gaussian distribution, i.e., \({W_{ir}} \sim TG\left ({\left .{{W_{ir}}} \right |\mu _{ir}^{W},(\tau _{ir}^{W})^{-1},0,+\infty }\right )\).
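Since the update rules involve expectations under this truncated Gaussian, it may help to recall the standard moments of \(TG\left (x|\mu ,\tau ^{-1},0,+\infty \right )\). Writing \(\sigma = \tau ^{-1/2}\) and \(\alpha = -\mu /\sigma\), the classical results for the lower-truncated normal give

\[
\lambda (\alpha ) = \frac {\phi (\alpha )}{1-{\Phi} (\alpha )}, \qquad
E[x] = \mu + \sigma \lambda (\alpha ), \qquad
E[x^{2}] = \sigma ^{2}\left (1+\alpha \lambda (\alpha )-\lambda (\alpha )^{2}\right ) + E[x]^{2},
\]

where ϕ and Φ denote the standard normal density and distribution function. These expectations supply \(\widetilde {W_{ir}}\) and \(\widetilde {W_{ir}^{2}}\) (and likewise for Hrj) in the updates below.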
In the above formula, A(Wir,Hrj,τ) is defined in (25).
The parameters \(\tau _{ir}^{W}\) and \(\mu _{ir}^{W}\) of the variational posterior distribution of Wir are given in (26) and (27).
In the same way, the variational posterior distribution of Hrj is also a truncated Gaussian distribution, that is, \({H_{rj}} \sim TG\left ({\left .{{H_{rj}}} \right |\mu _{rj}^{H},(\tau _{rj}^{H})^{-1},0,+\infty }\right )\).
The term B(Wir,Hrj,τ) appearing in (28) is defined in (29).
The parameters \(\tau _{rj}^{H}\) and \(\mu _{rj}^{H}\) of the variational posterior distribution of Hrj are given in (30) and (31).
For the parameter τ, the variational posterior distribution takes the same form as the prior distribution, i.e., \(\tau \sim Gamma(\alpha _{\tau }^{*},\beta _{\tau }^{*})\),
where the parameters \(\alpha _{\tau }^{*},\beta _{\tau }^{*}\) corresponding to the variational posterior distribution of τ are shown as follows.
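For concreteness, the mean-field updates above can be sketched numerically. The following is a minimal illustrative sketch, not the authors' implementation: it assumes a Gaussian likelihood with precision τ, zero-mean truncated Gaussian priors of precision `prior_prec` on W and H, and a Gamma(a0, b0) prior on τ; all function and variable names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def tg_moments(mu, tau):
    # First and second moments of TG(mu, 1/tau, 0, +inf),
    # i.e. the standard lower-truncated-normal results.
    sigma = 1.0 / np.sqrt(tau)
    alpha = -mu / sigma
    lam = norm.pdf(alpha) / norm.sf(alpha)
    m1 = mu + sigma * lam
    m2 = sigma ** 2 * (1.0 + alpha * lam - lam ** 2) + m1 ** 2
    return m1, m2

def tg_nmf_vb(X, r, n_iter=200, prior_prec=1.0, a0=1.0, b0=1.0, seed=0):
    # Mean-field VB for X ~ N(WH, 1/tau) with truncated Gaussian
    # priors on W, H and a Gamma(a0, b0) prior on tau.
    rng = np.random.default_rng(seed)
    I, J = X.shape
    EW = rng.random((I, r)); EW2 = EW ** 2
    EH = rng.random((r, J)); EH2 = EH ** 2
    Etau = a0 / b0
    for _ in range(n_iter):
        for k in range(r):
            # Residual with component k removed.
            R = X - EW @ EH + np.outer(EW[:, k], EH[k])
            # q(W_:k) is truncated Gaussian; precision and mean updates.
            tau_w = prior_prec + Etau * EH2[k].sum()
            mu_w = Etau * (R @ EH[k]) / tau_w
            EW[:, k], EW2[:, k] = tg_moments(mu_w, tau_w)
            # q(H_k:) given the refreshed statistics of W.
            R = X - EW @ EH + np.outer(EW[:, k], EH[k])
            tau_h = prior_prec + Etau * EW2[:, k].sum()
            mu_h = Etau * (EW[:, k] @ R) / tau_h
            EH[k], EH2[k] = tg_moments(mu_h, tau_h)
        # q(tau) = Gamma(a0 + IJ/2, b0 + E||X - WH||^2 / 2), where the
        # expected squared error is expanded via the second moments.
        E_err = ((X - EW @ EH) ** 2).sum() \
            + (EW2 @ EH2 - (EW ** 2) @ (EH ** 2)).sum()
        Etau = (a0 + 0.5 * I * J) / (b0 + 0.5 * E_err)
    return EW, EH, Etau
```

The sketch cycles through the factors coordinate-block by coordinate-block, each time replacing a factor by the mean of its truncated Gaussian variational posterior, and finally refreshing the expected noise precision from its Gamma posterior.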
The above is the detailed optimization process of the TG-NMF model. In the TG-INMF model, the prior distribution of the current samples equals the posterior distribution of the previous samples, which is obtained from the TG-NMF model. Therefore, based on the optimal estimates of the TG-NMF model, we next give the optimization process of the TG-INMF model.
The parameters to be optimized in the TG-INMF model can be summarized as \(\theta ^{\prime }=\{W_{ir}^{k+1},h_{r}^{k+1},\tau ^{k+1}\}\). According to (15), the variational posterior distribution of \(W_{ir}^{k+1}\) is derived first, and \(W_{ir}^{k+1}\) obeys a truncated Gaussian distribution with parameters \(\tau _{ir}^{W^{k+1}}\) and \(\mu _{ir}^{W^{k+1}}\).
The parameters \(\tau _{ir}^{W^{k+1}}\) and \(\mu _{ir}^{W^{k+1}}\) of the variational posterior distribution of \(W_{ir}^{k+1}\) are given in (36) and (37).
Second, (38) shows the optimization of the variational posterior distribution corresponding to \(h_{r}^{k+1}\),
where the parameters \(\tau _{r}^{h^{k+1}}\), \(\mu _{r}^{h^{k+1}}\) in the variational posterior distribution of \(h_{r}^{k+1}\) are shown below.
Finally, the variational posterior distribution of \(\tau ^{k+1}\) is given in (41), where the parameters \(\alpha ^{k+1}\) and \(\beta ^{k+1}\) of the new variational posterior distribution are written as (42) and (43).
After obtaining the variational posterior distributions of the parameters, we can derive the optimal updates of \(W_{ir}^{k+1}\), \(h_{r}^{k+1}\), and \(\tau ^{k+1}\).
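As an illustration of the incremental step, the sketch below folds a single new sample \(x^{k+1}\) into the model: the previous variational posterior over W (its mean and precision) serves as the prior, as described above. This is a hypothetical minimal sketch, not the authors' implementation; for brevity the noise precision τ is held fixed rather than updated through (41)–(43), and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def tg_moments(mu, tau):
    # Moments of TG(mu, 1/tau, 0, +inf) (lower-truncated normal).
    sigma = 1.0 / np.sqrt(tau)
    alpha = -mu / sigma
    lam = norm.pdf(alpha) / norm.sf(alpha)
    m1 = mu + sigma * lam
    m2 = sigma ** 2 * (1.0 + alpha * lam - lam ** 2) + m1 ** 2
    return m1, m2

def tg_inmf_step(x, W_mu, W_tau, Etau, prior_prec=1e-3, n_inner=50):
    # Incremental update for one new sample x: the previous posterior
    # TG(W_mu, 1/W_tau, 0, inf) over W acts as the prior.
    I, r = W_mu.shape
    prior_mu, prior_tau = W_mu.copy(), W_tau.copy()
    EW, EW2 = tg_moments(W_mu, W_tau)
    Eh = np.full(r, 0.5)
    Eh2 = Eh ** 2
    for _ in range(n_inner):
        for k in range(r):
            # q(h_k): coefficient of the new sample on basis column k.
            resid = x - EW @ Eh + EW[:, k] * Eh[k]
            tau_h = prior_prec + Etau * EW2[:, k].sum()
            mu_h = Etau * (EW[:, k] @ resid) / tau_h
            Eh[k], Eh2[k] = tg_moments(mu_h, tau_h)
            # q(W_:k): fold the new evidence into the old posterior,
            # which plays the role of the prior for this sample.
            resid = x - EW @ Eh + EW[:, k] * Eh[k]
            new_tau = prior_tau[:, k] + Etau * Eh2[k]
            new_mu = (prior_tau[:, k] * prior_mu[:, k]
                      + Etau * Eh[k] * resid) / new_tau
            W_mu[:, k], W_tau[:, k] = new_mu, new_tau
            EW[:, k], EW2[:, k] = tg_moments(new_mu, new_tau)
    return W_mu, W_tau, Eh
```

Because only the new sample and the running posterior statistics of W are touched, the per-sample cost is independent of the number of previously processed samples, which is the point of the incremental formulation.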
Cite this article
Yang, L., Yan, L., Yang, X. et al. Bayesian nonnegative matrix factorization in an incremental manner for data representation. Appl Intell 53, 9580–9597 (2023). https://doi.org/10.1007/s10489-022-03522-3