
Adaptive internal activation functions and their effect on learning in feed forward networks


Abstract

Selecting the correct activation function for each neuron allows greater representational power, and thus smaller, more efficient networks. This paper presents a method of dynamically modifying the activation function within each neuron during training, allowing the network designer to use complex activation functions without having to assign the correct one to each individual neuron.
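The idea described in the abstract — letting each neuron adapt its own activation during training — can be illustrated as a neuron whose output is a trainable mixture of candidate activation functions, with the mixture coefficients treated as ordinary parameters for gradient descent. The sketch below is illustrative only, under that assumption; it is not the paper's actual formulation, and names such as `candidates` and `alpha` are invented here.

```python
import numpy as np

def candidates(z):
    """Return candidate activations f_i(z) and their derivatives f_i'(z)."""
    sig = 1.0 / (1.0 + np.exp(-z))
    th = np.tanh(z)
    fs = np.array([th, sig, z])                        # tanh, sigmoid, identity
    dfs = np.array([1.0 - th ** 2, sig * (1.0 - sig), 1.0])
    return fs, dfs

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=2)   # input weights
alpha = np.ones(3) / 3.0            # adaptive activation-mixture coefficients
lr = 0.05

# Toy task: the target is tanh(x0 + x1), so the tanh component suffices.
X = rng.normal(size=(200, 2))
y = np.tanh(X @ np.array([1.0, 1.0]))

for _ in range(300):
    for x, t in zip(X, y):
        z = w @ x
        fs, dfs = candidates(z)
        err = (alpha @ fs) - t
        w -= lr * err * (alpha @ dfs) * x   # gradient step for input weights
        alpha -= lr * err * fs              # same step for the activation mix

pred = np.array([alpha @ candidates(w @ x)[0] for x in X])
mse = float(np.mean((pred - y) ** 2))
```

Because the output is linear in `alpha`, the mixture coefficients receive a simple gradient signal and the neuron settles on whichever blend of activations fits the data, without the designer pre-assigning a function to it.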




Cite this article

Fletcher, G.P. Adaptive internal activation functions and their effect on learning in feed forward networks. Neural Process Lett 4, 29–38 (1996). https://doi.org/10.1007/BF00454843
