
On Capacity with Incremental Learning by Simplified Chaotic Neural Network

  • Conference paper

Theory and Practice of Natural Computing (TPNC 2018)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11324)


Abstract

Chaotic behaviors are often observed in biological brains and are strongly related to memory storage and learning in chaotic neural networks. Incremental learning is a method of composing an associative memory with a chaotic neural network; it provides larger capacity than the Hebbian rule at the cost of additional computation. In earlier works, patterns were generated randomly so that half of the elements were +1 and the other half were −1. With finely tuned parameters, the network learned these patterns well, but this result could be attributed to over-learning of the pattern statistics. We therefore proposed pattern-generation methods to avoid over-learning and tested patterns in which the ratio of +1 to −1 elements differs from 1:1. In this paper, our simulations investigate the capacity of the usual chaotic neural network and that of the simplified chaotic neural network on these patterns, to ensure that no over-learning occurs.
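The pattern-generation scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `generate_pattern` and its `ratio` parameter are hypothetical names chosen here for clarity.

```python
import random

def generate_pattern(n, ratio):
    """Generate a bipolar pattern of length n in which round(n * ratio)
    elements are +1 and the rest are -1, placed in random positions."""
    num_plus = round(n * ratio)
    pattern = [1] * num_plus + [-1] * (n - num_plus)
    random.shuffle(pattern)
    return pattern

# With ratio 0.5 (the setting used in earlier works), half the elements
# are +1; other ratios give the unbalanced patterns tested in this paper.
p = generate_pattern(100, 0.5)
assert p.count(1) == 50 and p.count(-1) == 50
```

Varying `ratio` away from 0.5 produces the unbalanced patterns whose capacity is compared against the balanced case.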




Author information

Correspondence to Naohiro Ishii.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Deguchi, T., Ishii, N. (2018). On Capacity with Incremental Learning by Simplified Chaotic Neural Network. In: Fagan, D., Martín-Vide, C., O'Neill, M., Vega-Rodríguez, M.A. (eds.) Theory and Practice of Natural Computing. TPNC 2018. Lecture Notes in Computer Science, vol. 11324. Springer, Cham. https://doi.org/10.1007/978-3-030-04070-3_29


  • DOI: https://doi.org/10.1007/978-3-030-04070-3_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04069-7

  • Online ISBN: 978-3-030-04070-3

  • eBook Packages: Computer Science (R0)
