Abstract
This paper interprets the final representations obtained by neural networks by maximizing the mutual information between neurons and data sets. Because direct maximization of mutual information requires complex procedures, the present method simplifies the computation as much as possible: mutual information maximization is realized indirectly by focusing on the potentiality of neurons. The method was applied to restaurant data on which ordinary regression analysis performed poorly. For this problem, we interpreted the final representations and obtained improved generalization performance. The results revealed a simple configuration in which just a single important feature was extracted that explicitly explained the motivation to visit the restaurant.
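The abstract sketches the method only at a high level, so the following Python snippet is a minimal illustrative sketch rather than the paper's actual implementation. It computes the standard mutual information between hidden neurons and input patterns, I = Σ_s Σ_j p(s) p(j|s) log(p(j|s)/p(j)), together with a variance-based score standing in for neuron "potentiality"; the uniform p(s), the normalization of activations into firing probabilities, and the variance proxy are all assumptions, not details taken from the paper.

```python
import numpy as np

def neuron_data_mutual_information(outputs):
    """Mutual information between hidden neurons and input patterns.

    outputs: (S, M) array of non-negative activations for S input
    patterns and M neurons. Activations are normalized into firing
    probabilities p(j|s); p(s) is assumed uniform (an assumption,
    not a detail taken from the paper).
    """
    p_j_given_s = outputs / outputs.sum(axis=1, keepdims=True)  # p(j|s)
    p_s = 1.0 / outputs.shape[0]                                # uniform p(s)
    p_j = p_j_given_s.mean(axis=0)                              # p(j) = sum_s p(s) p(j|s)
    eps = 1e-12                                                 # avoid log(0)
    return float(np.sum(p_s * p_j_given_s *
                        np.log((p_j_given_s + eps) / (p_j + eps))))

def potentiality(outputs):
    """Hypothetical potentiality proxy: each neuron's output variance
    over the data set, normalized by the largest variance, so the most
    variable (most 'potential') neuron scores 1."""
    v = outputs.var(axis=0)
    return v / (v.max() + 1e-12)

# Usage on random stand-in activations (100 patterns, 5 hidden neurons).
rng = np.random.default_rng(0)
acts = rng.random((100, 5))
print(neuron_data_mutual_information(acts))  # near zero for random activations
print(potentiality(acts))
```

Under this reading, driving the potentiality scores toward a sparse profile (one neuron near 1, the rest near 0) would indirectly increase the mutual information without computing it at every step, which is consistent with the single-important-feature configuration reported in the abstract.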
Cite this paper
Kamimura, R. (2016). Simple and Stable Internal Representation by Potential Mutual Information Maximization. In: Jayne, C., Iliadis, L. (eds.) Engineering Applications of Neural Networks. EANN 2016. Communications in Computer and Information Science, vol. 629. Springer, Cham. https://doi.org/10.1007/978-3-319-44188-7_23
© 2016 Springer International Publishing Switzerland