Collective Information-Theoretic Competitive Learning: Emergence of Improved Performance by Collectively Treated Neurons

  • Conference paper
Neural Information Processing (ICONIP 2006)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4232)


Abstract

In this paper, we show that a simple collection of competitive units can exhibit an emergent property, namely improved generalization performance. We have so far defined information-theoretic competitive learning with respect to individual competitive units: as information is increased, a single competitive unit tends to win the competition, which means that competitive learning can be described as a process of information maximization. In living systems, however, large numbers of neurons behave collectively, so it is necessary to introduce collective properties into information-theoretic competitive learning. In this context, we treat several competitive units as one unit, that is, one collective unit, and maximize information content not in individual competitive units but in collective competitive units. We applied the method to an artificial data set and to cabinet approval rating estimation. In all cases, we demonstrated that improved generalization could be obtained.
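The idea in the abstract can be illustrated with a minimal sketch: competitive units fire with probabilities derived from their distances to an input, mutual information between inputs and winning units is computed, and several units are merged into one collective unit by summing their firing probabilities before measuring information. All function and variable names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def unit_probabilities(X, W, sigma=1.0):
    """Firing probabilities p(j|s) of M competitive units for S inputs,
    using a Gaussian function of squared Euclidean distance (a common
    choice in information-theoretic competitive learning)."""
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)  # (S, M)
    a = np.exp(-d2 / (2.0 * sigma ** 2))
    return a / a.sum(axis=1, keepdims=True)

def mutual_information(P):
    """I = H(p(j)) - mean_s H(p(j|s)): high when, for each input,
    one (collective) unit clearly wins, yet all units are used overall."""
    eps = 1e-12
    pj = P.mean(axis=0)                                   # marginal p(j)
    h_marginal = -(pj * np.log(pj + eps)).sum()
    h_conditional = -(P * np.log(P + eps)).sum(axis=1).mean()
    return h_marginal - h_conditional

def collective_probabilities(P, groups):
    """Treat several competitive units as one collective unit by summing
    p(j|s) over each group of unit indices."""
    return np.stack([P[:, g].sum(axis=1) for g in groups], axis=1)

# Example: six individual units treated as three collective units of two.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))          # artificial input data
W = rng.normal(size=(6, 2))            # competitive-unit weight vectors
P = unit_probabilities(X, W)
Pc = collective_probabilities(P, groups=[[0, 1], [2, 3], [4, 5]])
I_individual = mutual_information(P)   # information in individual units
I_collective = mutual_information(Pc)  # information in collective units
```

Maximizing `I_collective` instead of `I_individual` (e.g., by gradient ascent on `W`) is the shift the paper proposes: competition is enforced between groups of neurons rather than between single neurons.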





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kamimura, R., Yoshida, F., Kitajima, R. (2006). Collective Information-Theoretic Competitive Learning: Emergency of Improved Performance by Collectively Treated Neurons. In: King, I., Wang, J., Chan, LW., Wang, D. (eds) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol 4232. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893028_70

  • DOI: https://doi.org/10.1007/11893028_70

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-46479-2

  • Online ISBN: 978-3-540-46480-8

  • eBook Packages: Computer Science (R0)
