Abstract
In this paper, we introduce costs into the framework of information maximization and try to maximize the ratio of information to its associated cost. We have shown that competitive learning can be realized by maximizing the mutual information between input patterns and competitive units. One shortcoming of this method is that maximizing information does not necessarily produce representations faithful to the input patterns: information maximization focuses primarily on those parts of the input patterns that serve to distinguish one pattern from another. We therefore introduce the ratio of information to a cost defined as the distance between input patterns and connection weights. By maximizing this ratio, the final connection weights come to reflect the input patterns well. We applied unsupervised information maximization to a voting-attitude problem and supervised learning to a chemical data analysis. Experimental results confirmed that maximizing the ratio decreases the cost while yielding better generalization performance.
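The abstract describes the optimized quantity only in words. The following is a minimal NumPy sketch of how such a ratio could be computed, assuming Gaussian competitive activations, a uniform distribution over input patterns, and a squared Euclidean input-weight distance as the cost; the function name, the beta parameter, and these modelling choices are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def ratio_of_information_to_cost(X, W, beta=1.0, eps=1e-12):
    """Illustrative sketch: mutual information between input patterns and
    competitive units, divided by an expected input-weight distance cost.

    X : (S, d) array of input patterns
    W : (M, d) array of connection weights, one row per competitive unit
    beta : assumed softness of the Gaussian competitive activation
    """
    # Squared Euclidean distances between every input and every weight vector.
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)      # shape (S, M)

    # Competitive-unit firing probabilities p(j|s) from a Gaussian of distance.
    act = np.exp(-beta * d2)
    p_j_given_s = act / act.sum(axis=1, keepdims=True)           # shape (S, M)

    p_s = np.full(X.shape[0], 1.0 / X.shape[0])                  # uniform p(s)
    p_j = p_s @ p_j_given_s                                      # marginal p(j)

    # Mutual information I = sum_s p(s) sum_j p(j|s) log[ p(j|s) / p(j) ].
    info = np.sum(p_s[:, None] * p_j_given_s *
                  (np.log(p_j_given_s + eps) - np.log(p_j[None, :] + eps)))

    # Cost: expected squared distance between inputs and connection weights.
    cost = np.sum(p_s[:, None] * p_j_given_s * d2)

    return info / (cost + eps)
```

In a full learning procedure, the connection weights would then be updated so as to increase this ratio (for example by gradient ascent with respect to W); the paper's actual update rules are not reproduced here.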
An erratum to this chapter can be found at http://dx.doi.org/10.1007/11550907_163.
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Kamimura, R., Aida-Hyugaji, S. (2005). Maximizing the Ratio of Information to Its Cost in Information Theoretic Competitive Learning. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds) Artificial Neural Networks: Formal Models and Their Applications – ICANN 2005. ICANN 2005. Lecture Notes in Computer Science, vol 3697. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11550907_35
DOI: https://doi.org/10.1007/11550907_35
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28755-1
Online ISBN: 978-3-540-28756-8