
Simple and Stable Internal Representation by Potential Mutual Information Maximization

  • Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 629)

Abstract

This paper aims to interpret the final representations obtained by neural networks by maximizing the mutual information between neurons and data sets. Because direct maximization of mutual information requires complex procedures, the present method simplifies the computation as much as possible. The simplification lies in realizing mutual information maximization indirectly, by focusing on the potentiality of neurons. The method was applied to restaurant data on which ordinary regression analysis performed poorly. For this problem, we aimed to interpret the final representations while also improving generalization performance. The results revealed a simple configuration in which a single important feature was extracted, explicitly explaining customers' motivation to visit the restaurant.
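The abstract only outlines the idea, so the following is a minimal illustrative sketch, not the paper's actual procedure. It computes the mutual information between hidden neurons and input patterns under the normalized-firing formulation common in information-theoretic neural network work (p(j|s) taken as a neuron's relative activation for pattern s), together with a hypothetical variance-based stand-in for the neurons' "potentiality". All function names and definitions here are assumptions.

```python
import numpy as np

def neuron_data_mutual_information(activations):
    """Mutual information between hidden neurons and input patterns.

    activations: (n_patterns, n_neurons) array of non-negative hidden
    outputs v[s, j].  Assumed formulation (not spelled out in the
    abstract): p(j|s) = v[s, j] / sum_m v[s, m], p(s) uniform, and
    p(j) = sum_s p(s) p(j|s).
    """
    v = np.asarray(activations, dtype=float)
    p_j_given_s = v / v.sum(axis=1, keepdims=True)   # p(j|s)
    p_s = 1.0 / v.shape[0]                           # uniform p(s)
    p_j = p_j_given_s.mean(axis=0)                   # marginal p(j)
    # MI = sum_s p(s) sum_j p(j|s) log( p(j|s) / p(j) )
    return float((p_s * p_j_given_s
                  * np.log(p_j_given_s / p_j + 1e-12)).sum())

def potentiality(activations):
    """Hypothetical variance-based potentiality proxy, scaled to [0, 1].

    Neurons whose outputs vary strongly across patterns are treated as
    having high potential to carry information about the data.
    """
    var = np.var(np.asarray(activations, dtype=float), axis=0)
    return var / var.max()

# Toy usage: 10 input patterns, 5 hidden neurons.
rng = np.random.default_rng(0)
acts = rng.random((10, 5))
print(neuron_data_mutual_information(acts))
print(potentiality(acts))
```

Under this reading, raising the potentiality of a few selected neurons makes their responses vary distinctively across patterns, which tends to increase the mutual information without computing it or its gradient directly; this is one plausible interpretation of the indirect, simplified maximization the abstract describes.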



Author information

Correspondence to Ryotaro Kamimura.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Kamimura, R. (2016). Simple and Stable Internal Representation by Potential Mutual Information Maximization. In: Jayne, C., Iliadis, L. (eds) Engineering Applications of Neural Networks. EANN 2016. Communications in Computer and Information Science, vol 629. Springer, Cham. https://doi.org/10.1007/978-3-319-44188-7_23

  • DOI: https://doi.org/10.1007/978-3-319-44188-7_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-44187-0

  • Online ISBN: 978-3-319-44188-7

  • eBook Packages: Computer Science (R0)
