Abstract
Chaos appears in many natural and artificial systems; motivated by this, we propose a method that injects chaos into a supervised feedforward neural network (NN). Chaos is injected simultaneously into the learnable temperature coefficient of the sigmoid activation function and into the weights of the NN. This is functionally different from noise injection (NI), which is comparatively distant from biological realism. We investigate whether chaos injection is more efficient than standard backpropagation, the adaptive neuron model, and NI algorithms by applying these techniques to benchmark classification problems (heart disease, glass, breast cancer, and diabetes identification) and to time series prediction. In each case chaos injection outperforms the standard approaches in generalization ability and convergence rate, and its performance is statistically significantly different from that of noise injection.
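The paper's exact update rules are not reproduced on this page, so the following is only a minimal illustrative sketch of the general idea: a deterministic chaotic sequence (here the fully chaotic logistic map, a common choice in the chaotic-neural-network literature) perturbs both the weights and the sigmoid temperature coefficient during backpropagation. All names, amplitudes, network sizes, and the precise form of the injection are assumptions made for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_map(x):
    """Fully chaotic logistic map (r = 4); orbits stay in (0, 1)."""
    return 4.0 * x * (1.0 - x)

def sigmoid(net, c):
    """Sigmoid with temperature coefficient c scaling its steepness."""
    return 1.0 / (1.0 + np.exp(-c * net))

# Toy data: XOR, a standard nonlinearly separable benchmark.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units; weights, biases, and a shared temperature c.
W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)
c = 1.0                 # learnable temperature of the sigmoid
eta, eps = 0.5, 0.01    # learning rate and chaos amplitude (both assumed)
z = 0.3                 # chaotic state, any seed in (0, 1) off the fixed points

losses = []
for epoch in range(2000):
    # Forward pass through temperature-scaled sigmoids.
    h = sigmoid(X @ W1 + b1, c)
    net2 = h @ W2 + b2
    out = sigmoid(net2, c)
    err = out - y
    losses.append(float(np.mean(err ** 2)))

    # Backward pass: MSE gradients through sigmoid(c * net).
    d_out = err * c * out * (1.0 - out)
    d_h = (d_out @ W2.T) * c * h * (1.0 - h)

    # Chaos injection: a small logistic-map term, centred on zero,
    # perturbs the weights and the temperature at every update.
    z = logistic_map(z)
    chaos = eps * (2.0 * z - 1.0)

    W2 = W2 - eta * (h.T @ d_out) + chaos * W2
    b2 = b2 - eta * d_out.sum(axis=0)
    W1 = W1 - eta * (X.T @ d_h) + chaos * W1
    b1 = b1 - eta * d_h.sum(axis=0)

    # Temperature update: output-layer gradient term only (for brevity),
    # plus the same chaotic perturbation; clipped to a sane range.
    c = c - eta * 0.1 * float(np.sum(err * out * (1.0 - out) * net2)) + chaos
    c = float(np.clip(c, 0.1, 5.0))
```

The key contrast with noise injection is that the perturbation here is deterministic: replacing `logistic_map` with `rng.normal` would recover an NI-style scheme, whereas the chaotic sequence is fully reproducible from its seed.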
Cite this article
Ahmed, S.U., Shahjahan, M. & Murase, K. Injecting Chaos in Feedforward Neural Networks. Neural Process Lett 34, 87–100 (2011). https://doi.org/10.1007/s11063-011-9185-x