Abstract
This paper addresses the problem of vanishing information in potential mutual information maximization. We previously developed an information-theoretic method called "potential learning," which aims to extract the most important features through simplified information maximization. A major problem, however, is that the potential effect diminishes considerably in the course of learning, until the potentiality can no longer be taken into account. To solve this problem, we introduce repeated information maximization: whenever the potentiality becomes ineffective, the method forces it to be assimilated into learning again, thereby reinforcing the information-maximization process. The method was applied to the online news popularity data set to estimate the popularity of articles. To demonstrate its effectiveness, the number of hidden neurons was made excessively large and set to 50. The results show that potential mutual information maximization could increase mutual information even with 50 hidden neurons and improve generalization performance. In addition, simplified representations could be obtained, aiding both interpretation and generalization.
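The quantity being maximized can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the common formulation from Kamimura's earlier potential-learning work, in which each hidden neuron's firing probability for an input pattern is its normalized activation, and mutual information is measured between equiprobable input patterns and hidden neurons. The function name and normalization scheme are the author's-sketch assumptions.

```python
import numpy as np

def hidden_mutual_information(activations):
    """Mutual information between input patterns and hidden neurons.

    activations: (S, M) array of non-negative hidden outputs,
    one row per input pattern. Firing probabilities p(j|s) are taken
    as row-normalized activations (an assumption; the paper's exact
    definition may differ).
    """
    S, M = activations.shape
    # p(j|s): normalized activation of neuron j for pattern s
    p_j_given_s = activations / activations.sum(axis=1, keepdims=True)
    # p(j): firing probability averaged over equiprobable patterns
    p_j = p_j_given_s.mean(axis=0)
    # I = (1/S) * sum_s sum_j p(j|s) log(p(j|s) / p(j)),
    # with 0 log 0 treated as 0
    ratio = np.where(p_j_given_s > 0, p_j_given_s / p_j, 1.0)
    return float((p_j_given_s * np.log(ratio)).sum() / S)
```

Under this formulation, uniform activations give zero mutual information, while perfectly selective (one-hot) firing over M neurons gives the maximum log M; the vanishing-information problem the paper targets corresponds to this quantity failing to grow as the number of hidden neurons increases.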
Notes
1. The first variable, "timedelta," was deleted from the experiment.
Copyright information
© 2016 Springer International Publishing AG
Cite this paper
Kamimura, R. (2016). Solving the Vanishing Information Problem with Repeated Potential Mutual Information Maximization. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science(), vol 9950. Springer, Cham. https://doi.org/10.1007/978-3-319-46681-1_53
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-46680-4
Online ISBN: 978-3-319-46681-1