Abstract
In this paper, we show that a simple collection of competitive units can exhibit an emergent property, namely improved generalization performance. We have so far defined information-theoretic competitive learning with respect to individual competitive units: as information increases, a single competitive unit tends to win the competition, so competitive learning can be described as a process of information maximization. In living systems, however, large numbers of neurons behave collectively, and it is therefore necessary to introduce collective properties into information-theoretic competitive learning. To this end, we treat several competitive units as a single collective unit and maximize information content not in individual competitive units but in these collective units. We applied the method to artificial data and to cabinet approval rating estimation, and in both cases demonstrated that improved generalization could be obtained.
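The central computation is mutual information between input patterns and competitive units, measured over groups of units rather than over single units. The following is a minimal NumPy sketch, not the paper's implementation, under stated assumptions: a Gaussian competitive activation, equiprobable input patterns, and a fixed partition of units into collective units. The names firing_probs and collective_mutual_information and the parameter sigma are illustrative.

```python
import numpy as np

def firing_probs(X, W, sigma=1.0):
    # p(j|s): probability that unit j wins for input s, here modeled
    # as a normalized Gaussian of the input-to-weight distance
    # (an assumption; the paper's activation may differ).
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)  # S x M squared distances
    v = np.exp(-d2 / (2.0 * sigma ** 2))                     # unit activations
    return v / v.sum(axis=1, keepdims=True)                  # normalize over units

def collective_mutual_information(X, W, groups, sigma=1.0):
    # Pool per-unit firing probabilities within each collective unit,
    # then compute I = H(c) - E_s[H(c|s)] over the collective units.
    p_js = firing_probs(X, W, sigma)                                   # S x M
    p_cs = np.stack([p_js[:, g].sum(axis=1) for g in groups], axis=1)  # S x C
    p_c = p_cs.mean(axis=0)                          # p(c), equiprobable inputs assumed
    h_c = -(p_c * np.log(p_c + 1e-12)).sum()                           # H(c)
    h_c_given_s = -(p_cs * np.log(p_cs + 1e-12)).sum(axis=1).mean()    # E_s[H(c|s)]
    return h_c - h_c_given_s

# Toy usage: six competitive units grouped into two collective units of three neurons.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # 100 two-dimensional inputs
W = rng.normal(size=(6, 2))                 # weights of six competitive units
groups = [np.arange(0, 3), np.arange(3, 6)]
print(collective_mutual_information(X, W, groups))
```

Maximizing this quantity with respect to W drives one collective unit, rather than one individual unit, toward winning the competition for each input.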
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Kamimura, R., Yoshida, F., Kitajima, R. (2006). Collective Information-Theoretic Competitive Learning: Emergency of Improved Performance by Collectively Treated Neurons. In: King, I., Wang, J., Chan, L.-W., Wang, D. (eds) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol 4232. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893028_70
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-46479-2
Online ISBN: 978-3-540-46480-8