
On Acceleration of Incremental Learning in Chaotic Neural Network

  • Conference paper
  • Published in: Advances in Computational Intelligence (IWANN 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9095)

Abstract

Incremental learning is a method for composing an associative memory using a chaotic neural network; it provides larger capacity than correlative learning at the cost of a large amount of computation. A chaotic neuron performs a spatio-temporal sum of its inputs, and the temporal sum makes the learning robust against input noise. When the input contains no noise, the neuron may not need the temporal sum. In this paper, to reduce the computation, a simplified network without the temporal sum is introduced and investigated through computer simulations in comparison with the conventional network. It turns out that the simplified network is able to learn input patterns quickly when the learning parameter is varied.
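To make the contrast concrete, the following Python sketch (not the authors' code) compares a chaotic neuron update that keeps exponentially decayed temporal sums with a simplified update that keeps only the current spatial sum and refractoriness. All names and parameter values (k_f, k_r, alpha, the sigmoid steepness eps) are illustrative assumptions, and the incremental learning rule itself is omitted.

```python
import numpy as np

def output(u, eps=0.015):
    # Sigmoid output function; the steepness eps is an assumed value.
    return 1.0 / (1.0 + np.exp(-u / eps))

class ChaoticNetwork:
    """Sketch of a chaotic associative-memory network.

    With temporal_sum=True, the internal states carry exponentially decayed
    sums of past inputs and refractoriness (Aihara-style chaotic neurons).
    With temporal_sum=False, only the current spatial sum and the current
    refractoriness are used, as in the simplified network discussed here.
    Parameter names and values are assumptions, not taken from the paper.
    """

    def __init__(self, n, k_f=0.5, k_r=0.8, alpha=10.0, temporal_sum=True):
        self.w = np.zeros((n, n))   # mutual connection weights
        self.y = np.zeros(n)        # decayed spatial-input (feedback) state
        self.z = np.zeros(n)        # decayed refractoriness state
        self.x = np.zeros(n)        # neuron outputs
        self.k_f, self.k_r, self.alpha = k_f, k_r, alpha
        self.temporal_sum = temporal_sum

    def step(self, ext):
        """One synchronous update of all neurons; ext is the external input vector."""
        if self.temporal_sum:
            # Temporal sum: past feedback and refractoriness decay with k_f, k_r.
            self.y = self.k_f * self.y + self.w @ self.x + ext
            self.z = self.k_r * self.z - self.alpha * self.x
        else:
            # Simplified neuron: spatial sum only, no temporal decay terms.
            self.y = self.w @ self.x + ext
            self.z = -self.alpha * self.x
        self.x = output(self.y + self.z)
        return self.x
```

The sketch only contrasts the two update rules: the full model must maintain and decay the internal state vectors every step, while the simplified model computes each output from the current spatial sum and refractoriness alone, which is where the reduction in computation is expected to come from.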



Author information

Corresponding author

Correspondence to Toshinori Deguchi.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Deguchi, T., Takahashi, T., Ishii, N. (2015). On Acceleration of Incremental Learning in Chaotic Neural Network. In: Rojas, I., Joya, G., Catala, A. (eds) Advances in Computational Intelligence. IWANN 2015. Lecture Notes in Computer Science, vol 9095. Springer, Cham. https://doi.org/10.1007/978-3-319-19222-2_31

  • DOI: https://doi.org/10.1007/978-3-319-19222-2_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-19221-5

  • Online ISBN: 978-3-319-19222-2

  • eBook Packages: Computer Science, Computer Science (R0)
