
Injecting Chaos in Feedforward Neural Networks


Abstract

Chaos appears in many natural and artificial systems; accordingly, we propose a method that injects chaos into a supervised feedforward neural network (NN). The chaos is injected simultaneously into the learnable temperature coefficient of the sigmoid activation function and into the weights of the NN. This is functionally different from noise injection (NI), which is relatively distant from biological realism. We investigate whether chaos injection is more efficient than standard backpropagation, the adaptive neuron model, and NI algorithms by applying these techniques to benchmark classification problems such as heart disease, glass, breast cancer, and diabetes identification, as well as to time series prediction. In each case chaos injection is superior to the standard approaches in terms of generalization ability and convergence rate. The performance of the proposed method is also statistically different from that of noise injection.
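The abstract describes the mechanism only at a high level. The sketch below illustrates one plausible reading of it: a one-hidden-layer network whose sigmoid units carry a learnable temperature coefficient, with a chaotic sequence added to both the weight and the temperature updates during training. Everything beyond the abstract is an assumption made for illustration: the logistic map as the chaos source, one chaotic state per parameter, the zero-centered scaling, the linear annealing schedule, and the XOR toy data.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_map(x):
    # Fully chaotic logistic map, x_{t+1} = 4 x_t (1 - x_t); values stay in (0, 1).
    return 4.0 * x * (1.0 - x)

def sigmoid(z, T):
    # Sigmoid with a learnable temperature coefficient T.
    return 1.0 / (1.0 + np.exp(-T * z))

# Toy data (XOR), just to make the sketch runnable end to end.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hid, n_out = 2, 4, 1
W1 = rng.normal(0.0, 0.5, (n_in, n_hid));  b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)
T1 = np.ones(n_hid)   # hidden-layer temperatures, learned alongside the weights
T2 = np.ones(n_out)   # output-layer temperatures

# One chaotic state per learnable parameter, iterated with the logistic map.
cW1 = rng.uniform(0.1, 0.9, W1.shape); cW2 = rng.uniform(0.1, 0.9, W2.shape)
cT1 = rng.uniform(0.1, 0.9, T1.shape); cT2 = rng.uniform(0.1, 0.9, T2.shape)

eta, epochs, amp0 = 0.7, 5000, 0.05

for epoch in range(epochs):
    # Forward pass.
    z1 = X @ W1 + b1; h = sigmoid(z1, T1)
    z2 = h @ W2 + b2; o = sigmoid(z2, T2)

    # Backward pass for squared error; d(sigmoid)/dz = T*s*(1-s) and
    # d(sigmoid)/dT = z*s*(1-s), so the temperatures receive gradients too.
    e = o - y
    s2 = o * (1.0 - o)
    dz2 = e * s2 * T2
    dT2 = (e * s2 * z2).sum(axis=0)
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    s1 = h * (1.0 - h)
    dz1 = dh * s1 * T1
    dT1 = (dh * s1 * z1).sum(axis=0)
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)

    # Chaos injection: advance every chaotic state and add a small, zero-centered,
    # annealed chaotic term to the weight and temperature updates.
    cW1 = logistic_map(cW1); cW2 = logistic_map(cW2)
    cT1 = logistic_map(cT1); cT2 = logistic_map(cT2)
    amp = amp0 * (1.0 - epoch / epochs)   # linear annealing (an assumption)

    W1 -= eta * dW1 - amp * (2.0 * cW1 - 1.0)
    W2 -= eta * dW2 - amp * (2.0 * cW2 - 1.0)
    T1 -= eta * dT1 - amp * (2.0 * cT1 - 1.0)
    T2 -= eta * dT2 - amp * (2.0 * cT2 - 1.0)
    b1 -= eta * db1
    b2 -= eta * db2

print("final MSE:", float(np.mean((o - y) ** 2)))
```

Annealing the chaotic amplitude lets the injected dynamics drive early exploration while leaving late training close to plain gradient descent; unlike NI, the perturbation here is deterministic, which is the functional distinction the abstract draws.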



Author information

Corresponding author

Correspondence to Sultan Uddin Ahmed.


Cite this article

Ahmed, S.U., Shahjahan, M. & Murase, K. Injecting Chaos in Feedforward Neural Networks. Neural Process Lett 34, 87–100 (2011). https://doi.org/10.1007/s11063-011-9185-x
