Abstract
In this paper we discuss a wide class of loss (cost) functions for non-negative matrix factorization (NMF) and derive several novel algorithms with improved efficiency and robustness to noise and outliers. We review several approaches that allow us to obtain generalized forms of multiplicative NMF algorithms and to unify some existing algorithms. We also give flexible and relaxed forms of the NMF algorithms that increase convergence speed and impose desired constraints such as sparsity and smoothness of the components. Moreover, the effects of various regularization terms and constraints are clearly shown. The scope of these results is broad, since the proposed generalized divergence functions include a large number of useful loss functions, such as the squared Euclidean distance, the Kullback-Leibler divergence, and the Itakura-Saito, Hellinger, Pearson’s chi-square, and Neyman’s chi-square distances. We have successfully applied the developed algorithms to blind (or semi-blind) source separation (BSS), where the sources may in general be statistically dependent, provided they satisfy additional conditions or constraints such as nonnegativity, sparsity and/or smoothness.
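As a concrete reference point for the multiplicative NMF algorithms discussed in the abstract, the following is a minimal NumPy sketch of the two classical update rules (squared Euclidean distance and generalized Kullback-Leibler divergence) that the generalized Csiszár-divergence algorithms of the paper extend. The function name, its parameters, and the fixed iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nmf_multiplicative(Y, rank, cost="euclidean", n_iter=200, eps=1e-9, seed=0):
    """Illustrative sketch: factorize Y ≈ A @ X with A, X >= 0 via multiplicative updates.

    Only the two classical costs are shown (squared Euclidean and generalized
    Kullback-Leibler); the paper's generalized Csiszár-divergence updates are
    not reproduced here.
    """
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    A = rng.random((m, rank)) + eps
    X = rng.random((rank, n)) + eps

    for _ in range(n_iter):
        if cost == "euclidean":
            # Lee-Seung updates minimizing ||Y - A X||_F^2
            X *= (A.T @ Y) / (A.T @ A @ X + eps)
            A *= (Y @ X.T) / (A @ X @ X.T + eps)
        elif cost == "kl":
            # Lee-Seung updates minimizing the generalized Kullback-Leibler (I-)divergence
            AX = A @ X + eps
            X *= (A.T @ (Y / AX)) / (A.sum(axis=0)[:, None] + eps)
            AX = A @ X + eps
            A *= ((Y / AX) @ X.T) / (X.sum(axis=1)[None, :] + eps)
        else:
            raise ValueError("cost must be 'euclidean' or 'kl'")
    return A, X

# Illustrative usage on random nonnegative data
Y = np.abs(np.random.default_rng(1).normal(size=(50, 40)))
A, X = nmf_multiplicative(Y, rank=5, cost="kl")
print(np.linalg.norm(Y - A @ X))
```

The small constant `eps` in the denominators is a common numerical safeguard; the regularized and constrained variants described in the paper (sparsity, smoothness) would add further terms to these updates.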
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Cichocki, A., Zdunek, R., Amari, Si. (2006). Csiszár’s Divergences for Non-negative Matrix Factorization: Family of New Algorithms. In: Rosca, J., Erdogmus, D., Príncipe, J.C., Haykin, S. (eds) Independent Component Analysis and Blind Signal Separation. ICA 2006. Lecture Notes in Computer Science, vol 3889. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11679363_5
DOI: https://doi.org/10.1007/11679363_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-32630-4
Online ISBN: 978-3-540-32631-1
eBook Packages: Computer Science (R0)