Abstract
Selecting the correct activation function for each neuron gives a network greater representational power, and thus allows smaller, more efficient networks. This paper presents a method of dynamically modifying the activation function within each neuron during training, allowing the network designer to use complex activation functions without having to assign the correct one to each individual neuron.
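The idea the abstract describes — letting training itself choose each neuron's activation function — can be sketched as a neuron whose output is a trainable mixture of candidate functions. The following is a minimal illustrative sketch, not the paper's exact formulation: the softmax-weighted candidate set, the rectifier target, and the learning rate are all assumptions made for the demo.

```python
import numpy as np

def candidates(z):
    """Candidate activation functions evaluated at pre-activation z."""
    return np.stack([np.tanh(z),                 # sigmoidal
                     1.0 / (1.0 + np.exp(-z)),   # logistic
                     np.maximum(0.0, z)])        # rectifier

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

class AdaptiveNeuron:
    """A neuron whose activation is a softmax-weighted mixture of the
    candidates; the mixing logits `a` are trained like any other weight."""
    def __init__(self, n_in, rng):
        self.w = rng.normal(scale=0.5, size=n_in)
        self.b = 0.0
        self.a = np.zeros(3)  # mixing logits, start uniform

    def forward(self, x):
        z = float(self.w @ x) + self.b
        return softmax(self.a) @ candidates(z)

# Demo: with fixed input weights, learn only the mixing logits so the
# neuron matches a rectifier target; gradient descent should then
# "select" the rectifier candidate for this neuron.
rng = np.random.default_rng(0)
neuron = AdaptiveNeuron(1, rng)
neuron.w[:] = 1.0                       # fix w so z == x, for clarity
xs = np.linspace(-2.0, 2.0, 41)
targets = np.maximum(0.0, xs)

for _ in range(300):
    grad = np.zeros(3)
    for x, t in zip(xs, targets):
        c = candidates(x)               # z == x here
        p = softmax(neuron.a)
        y = p @ c
        # d/da_i of (y - t)^2, using dp_j/da_i = p_j * (delta_ij - p_i)
        grad += 2.0 * (y - t) * p * (c - y)
    neuron.a -= 0.1 * grad / len(xs)

print(np.argmax(neuron.a))  # index of the winning candidate
```

In the full scheme the paper proposes, the per-neuron activation parameters would be updated during ordinary back-propagation alongside the connection weights, so each neuron converges to whichever function suits its role in the network.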
Cite this article
Fletcher, G.P. Adaptive internal activation functions and their effect on learning in feed forward networks. Neural Process Lett 4, 29–38 (1996). https://doi.org/10.1007/BF00454843