
Repeated Potentiality Augmentation for Multi-layered Neural Networks

  • Conference paper
  • First Online:
Advances in Information and Communication (FICC 2023)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 652)


Abstract

This paper proposes a new method to augment the potentiality of components in neural networks. The basic hypothesis is that all components should have equal potentiality (equi-potentiality) to be used for learning, and this equi-potentiality has implicitly played a critical role in improving multi-layered neural networks. We introduce the total potentiality and the relative potentiality of each hidden layer, and we force networks to increase the potentiality as much as possible so as to realize equi-potentiality. In addition, the potentiality augmentation is repeated whenever the potentiality tends to decrease, which gives every component as equal a chance as possible to be used. We applied the method to the bankruptcy data set. By keeping the equi-potentiality of components through the repeated process of potentiality augmentation and reduction, we observed improved generalization. Finally, by examining all the representations produced by the repeated potentiality augmentation, we can interpret which inputs contribute to the final performance of the networks.
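The abstract does not spell out the formal definitions, so the sketch below is only one plausible reading, under assumptions of our own rather than the paper's equations: a hidden unit's potentiality is taken to be its mean absolute activation over a batch, its relative potentiality is its normalized share within the layer, and the layer's total potentiality is the normalized entropy of those shares, which peaks exactly when all units contribute equally (equi-potentiality). The augmentation step is re-applied whenever the total potentiality declines, mirroring the repetition described above; the penalty weight lam, the network shape, and the toy data are all hypothetical.

```python
# A minimal sketch, NOT the paper's exact formulation. Assumptions (ours):
# potentiality of a hidden unit = its mean absolute activation over a batch;
# relative potentiality = that unit's normalized share within the layer;
# total potentiality = normalized entropy of the shares (1.0 under
# equi-potentiality, lower when a few units dominate).
import torch
import torch.nn as nn


def layer_potentiality(h: torch.Tensor, eps: float = 1e-8):
    """h: (batch, units). Returns (relative, total) potentiality."""
    p = h.abs().mean(dim=0)                  # per-unit potentiality (assumed)
    r = p / (p.sum() + eps)                  # relative potentiality per unit
    n = torch.tensor(float(h.shape[1]))
    total = -(r * (r + eps).log()).sum() / n.log()  # entropy scaled to [0, 1]
    return r, total


class MLP(nn.Module):
    def __init__(self, d_in=10, d_hid=20, d_out=2):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hid)
        self.fc2 = nn.Linear(d_hid, d_out)

    def forward(self, x):
        h = torch.relu(self.fc1(x))          # hidden layer whose potentiality we track
        return self.fc2(h), h


model = MLP()
opt = torch.optim.SGD(model.parameters(), lr=0.05)
lam = 0.1                                    # hypothetical augmentation weight
prev_total = 0.0
for step in range(200):
    x = torch.randn(32, 10)                  # toy inputs standing in for real data
    y = (x.sum(dim=1) > 0).long()            # toy labels, for illustration only
    out, h = model(x)
    _, total = layer_potentiality(h)
    loss = nn.functional.cross_entropy(out, y)
    if total.item() < prev_total:            # potentiality declined: augment again
        loss = loss - lam * total            # reward higher total potentiality
    prev_total = total.item()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under this entropy reading, driving the total potentiality toward 1 spreads activation strength evenly across the hidden units, which is one way to give every component an equal chance of being used for learning.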



Author information

Corresponding author

Correspondence to Ryotaro Kamimura.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kamimura, R. (2023). Repeated Potentiality Augmentation for Multi-layered Neural Networks. In: Arai, K. (eds) Advances in Information and Communication. FICC 2023. Lecture Notes in Networks and Systems, vol 652. Springer, Cham. https://doi.org/10.1007/978-3-031-28073-3_9
