Selective Weight Update Rule for Hybrid Neural Network

  • Conference paper
Advances in Neural Networks – ISNN 2012

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7367)

Abstract

VSF-Network (Vibration Synchronizing Function Network) is a hybrid neural network that combines a chaos neural network (CNN) with a hierarchical network; it is a neural network model that learns symbols. In this paper, the two theoretical backgrounds of VSF-Network are described: the first is incremental learning by the CNN, and the second is ensemble learning. VSF-Network finds the unknown parts of input data by comparing them with learned patterns, and it learns those unknown parts using the unused part of the network. Ensemble learning explains the capability of VSF-Network to recognize combined patterns whose components were learned by its individual sub-networks. Through experiments, we show that VSF-Network can recognize combined patterns provided it has learned their component parts, and we identify the factors that affect learning performance.
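
The mechanism outlined in the abstract (detect the novel part of an input by comparison with learned patterns, then train only the still-unused part of the network) can be illustrated concretely. The Python fragment below is a minimal sketch under assumed simplifications: a single hidden layer, a per-unit usage mask, an error-norm novelty test, and a Hebbian-style update restricted to free units. All names and hyperparameters here are hypothetical; the paper's actual rule relies on synchronization among chaotic neurons rather than this simplified error test.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 16

W = rng.normal(scale=0.1, size=(n_hidden, n_in))  # input-to-hidden weights
used = np.zeros(n_hidden, dtype=bool)             # units committed to already-learned patterns
lr = 0.05                                         # assumed learning rate
novelty_threshold = 0.5                           # assumed novelty criterion

def selective_update(x, target):
    """Update only still-unused hidden units when the input looks novel."""
    global used
    h = np.tanh(W @ x)                # hidden-layer response to the input
    error = target - h
    # A large response error stands in for "unknown part of the input";
    # the paper instead detects novelty via chaotic-neuron synchronization.
    if np.linalg.norm(error) > novelty_threshold:
        free = ~used                  # weights of used units stay frozen
        W[free] += lr * np.outer(error[free], x)  # Hebbian-style update on free units only
        # Commit units that now respond strongly to the new pattern.
        used |= np.abs(np.tanh(W @ x)) > 0.9
    return h

# Example: present a pattern; only unused units adapt.
x = rng.normal(size=n_in)
selective_update(x, target=np.ones(n_hidden) * 0.5)
```

Freezing the used units is what preserves previously learned patterns while the free units absorb the novel ones, which is the incremental-learning property the abstract describes.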

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kakemoto, Y., Nakasuka, S. (2012). Selective Weight Update Rule for Hybrid Neural Network. In: Wang, J., Yen, G.G., Polycarpou, M.M. (eds) Advances in Neural Networks – ISNN 2012. ISNN 2012. Lecture Notes in Computer Science, vol 7367. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31346-2_56

  • DOI: https://doi.org/10.1007/978-3-642-31346-2_56

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-31345-5

  • Online ISBN: 978-3-642-31346-2

  • eBook Packages: Computer Science (R0)
