Abstract
One of the principles used to transform an original signal space into a space of lower dimension is factor analysis, which is based on the assumption that signals are random combinations of latent factors. The goal of factor analysis is to find the representation of the factors in the signal space (factor loadings) and the contributions of the factors to the original signals (factor scores). Recently, in [10], we proposed a general method for Boolean factor analysis based on a Hopfield-like neural network. Due to the Hebbian learning rule, the neurons of a factor become more tightly connected than other neurons, and hence factors can be revealed as attractors of the network dynamics by random search. A peculiarity of using the Hopfield-like network for Boolean factor analysis is the appearance of two global spurious attractors. They become dominant and therefore prevent a successful search for factors. To eliminate these attractors we propose a special unlearning procedure. A second unlearning procedure suppresses the factors with the largest attraction basins, which dominate after the suppression of the global spurious attractors and prevent the recall of other factors. The origin of the global spurious attractors and the efficiency of the unlearning procedures are investigated in the present paper.
References
Barlow, H.: Single units and sensation: a neuron doctrine for perceptual psychology? Perception 1, 371–394 (1972)
Barlow, H.B.: Possible principles underlying the transformations of sensory messages. In: Rosenblith, W.A. (ed.) Sensory communication, pp. 217–234. MIT Press, Cambridge (1961)
Barlow, H.B.: Cerebral cortex as model builder. In: Rose, D., Dodson, V.G. (eds.) Models of the visual cortex, pp. 37–46. Wiley, Chichester (1985)
Belohlavek, R., Vychodil, V.: Formal concepts as optimal factors in Boolean factor analysis: implications and experiments? In: Fifth International Conference on Concept Lattices and Their Applications (2007)
Buckingham, J., Willshaw, D.: On setting unit thresholds in an incompletely connected associative net. Network 4, 441–459 (1993)
Crick, F., Mitchison, G.: The function of dream sleep. Nature 304(5922), 111–114 (1983)
Foldiak, P.: Forming sparse representations by local anti-Hebbian learning. Biological Cybernetics 64, 165–170 (1990)
Frolov, A.A., Husek, D., Muraviev, I.P.: Informational capacity and recall quality in sparsely encoded Hopfield-like neural network: Analytical approaches and computer simulation. Neural Networks 10, 845–855 (1997)
Frolov, A.A., Husek, D., Muraviev, I.P.: Informational efficiency of sparsely encoded Hopfield-like autoassociative memory. Optical Memory and Neural Networks 12(3), 177–197 (2003)
Frolov, A.A., Husek, D., Muraviev, I.P., Polyakov, P.Y.: Boolean factor analysis by attractor neural network. IEEE Transactions on Neural Networks 18(3), 698–707 (2007)
Georgiev, P., Theis, F., Cichocki, A.: Sparse component analysis and blind source separation of underdetermined mixtures. IEEE Transactions on Neural Networks 16(4), 992–996 (2005)
Goles-Chacc, E., Fogelman-Soulie, F., Pellegrin, D.: Decreasing energy functions as a tool for studying threshold networks. Discrete Applied Mathematics 12, 261–277 (1985)
Jankovic, M.V.: Modulated Hebb-Oja learning rule - a method for principal subspace analysis. IEEE Transactions on Neural Networks 17(2), 345–356 (2006)
Karhunen, J.: Nonlinear independent component analysis. In: Roberts, S., Everson, R. (eds.) Independent Component Analysis: Principles and Practice, pp. 113–134. Cambridge University Press, Cambridge (2001)
Karhunen, J., Joutsensalo, J.: Representation and separation of signals using nonlinear PCA type learning. Neural Networks 7, 113–127 (1994)
Leeuw, J.D.: Principal component analysis of binary data: application to roll-call analysis (2003), http://gifi.stat.ucla.edu
Li, Y., Amari, S., Cichocki, A., Ho, D.C., Xie, S.: Underdetermined blind source separation based on sparse representation. IEEE Trans. Signal Process. 54(2), 423–437 (2006)
Li, Y., Cichocki, A., Amari, S.: Blind estimation of channel parameters and source components for EEG signals: A sparse factorization approach. IEEE Transactions on Neural Networks 17(2), 419–431 (2006)
Liu, W., Zheng, N.: Non-negative matrix factorization based methods for object recognition. Pattern Recognition Letters 25(8), 893–897 (2004)
Moller, R., Konig, A.: Coupled principal component analysis. IEEE Transactions on Neural Networks 15(1), 214–222 (2006)
Spratling, M.W.: Learning image components for object recognition. Journal of Machine Learning Research 7, 793–815 (2006)
Thurstone, L.L.: Multiple factor analysis. Psychological Review 38, 406–427 (1931)
Tichavsky, P., Koldovsky, Z., Oja, E.: Performance analysis of the FastICA algorithm and Cramér-Rao bounds for linear independent component analysis. IEEE Transactions on Signal Processing 54(4), 1189–1203 (2006)
Watanabe, S.: Pattern recognition: human and mechanical. Wiley, New York (1985)
Yi, Z., Ye, M., Lv, J.C., Tan, K.K.: Convergence analysis of a deterministic discrete time system of Oja's PCA learning algorithm. IEEE Transactions on Neural Networks 16(6), 1318–1328 (2005)
Zafeiriou, S., Tefas, A., Buciu, I., Pitas, I.: Exploiting discriminant information in nonnegative matrix factorization with application to frontal face verification. IEEE Transactions on Neural Networks 17(3), 683–695 (2006)
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this chapter
Frolov, A.A., Húsek, D., Muraviev, I.P., Polyakov, P.Y. (2010). Learning and Unlearning in Hopfield-Like Neural Network Performing Boolean Factor Analysis. In: Koronacki, J., Raś, Z.W., Wierzchoń, S.T., Kacprzyk, J. (eds) Advances in Machine Learning I. Studies in Computational Intelligence, vol 262. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-05177-7_26
Print ISBN: 978-3-642-05176-0
Online ISBN: 978-3-642-05177-7