Evolving transfer functions for artificial neural networks

  • Original Article
  • Neural Computing & Applications

Abstract

This paper describes an evolutionary programming methodology for constructing transfer functions for the hidden layer of a back-propagation network. The method can construct functions of almost any mathematical form. It is tested on four benchmark classification problems from the well-known machine learning repository maintained by the University of California, Irvine. Functions other than the commonly used sigmoid were found to perform well as hidden-layer transfer functions, and three of the four problems showed improved test results when the evolved functions were used.
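As a rough illustration of the approach the abstract outlines, the sketch below evolves a hidden-layer transfer function for a small back-propagation network with an evolutionary-programming loop (mutation only, no crossover), scoring each candidate by how well the trained network fits the data. The primitive set, the composition form f(a * g(b * z)), the numeric derivative, and the toy XOR task are assumptions made for this sketch; they are not taken from the paper, which evolves far more general mathematical forms and evaluates them on the UCI benchmarks.

```python
"""
A minimal sketch, NOT the authors' implementation: evolve a hidden-layer
transfer function for a back-propagation network with an evolutionary-
programming loop (mutation only, no crossover). The primitive set, the
composition form f(a * g(b * z)), the numeric derivative, and the toy XOR
task are assumptions made for this illustration.
"""
import numpy as np

rng = np.random.default_rng(0)

# Small primitive set; the paper allows "almost any mathematical form".
PRIMITIVES = {
    "tanh":     np.tanh,
    "sin":      np.sin,
    "gauss":    lambda z: np.exp(-np.clip(z, -10.0, 10.0) ** 2),
    "softplus": lambda z: np.log1p(np.exp(np.clip(z, -30.0, 30.0))),
    "identity": lambda z: z,
}

def random_candidate():
    """A candidate transfer function f(a * g(b * z)), encoded as a dict."""
    names = list(PRIMITIVES)
    return {"outer": rng.choice(names), "inner": rng.choice(names),
            "a": rng.normal(), "b": rng.normal()}

def mutate(cand):
    """EP-style mutation: perturb parameters, occasionally swap a primitive."""
    child = dict(cand)
    child["a"] += 0.3 * rng.normal()
    child["b"] += 0.3 * rng.normal()
    if rng.random() < 0.2:
        child["outer"] = rng.choice(list(PRIMITIVES))
    if rng.random() < 0.2:
        child["inner"] = rng.choice(list(PRIMITIVES))
    return child

def transfer(cand, z):
    f, g = PRIMITIVES[cand["outer"]], PRIMITIVES[cand["inner"]]
    return f(cand["a"] * g(cand["b"] * z))

def fitness(cand, X, y, hidden=4, epochs=300, lr=0.5, eps=1e-4):
    """Train a tiny two-layer back-prop network whose hidden units use the
    evolved transfer function; return negative mean squared error."""
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(epochs):
        Z = X @ W1
        H = transfer(cand, Z)
        out = 1.0 / (1.0 + np.exp(-(H @ W2)))      # sigmoid output unit
        d_out = (out - y) * out * (1.0 - out)      # MSE delta at the output
        # Central difference supplies the hidden-layer derivative, so
        # arbitrary evolved forms need no hand-coded gradient.
        dZ = (d_out @ W2.T) * (transfer(cand, Z + eps)
                               - transfer(cand, Z - eps)) / (2 * eps)
        W2 -= lr * H.T @ d_out / len(X)
        W1 -= lr * X.T @ dZ / len(X)
    return -float(np.mean((out - y) ** 2))

# Toy stand-in for the UCI benchmarks used in the paper: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

population = [random_candidate() for _ in range(10)]
for _ in range(15):
    ranked = sorted(population, key=lambda c: fitness(c, X, y), reverse=True)
    parents = ranked[:5]
    population = parents + [mutate(p) for p in parents]

best = max(population, key=lambda c: fitness(c, X, y))
print("best evolved transfer function: "
      f"{best['outer']}({best['a']:.2f} * {best['inner']}({best['b']:.2f} * z))")
```

The numeric derivative is purely a convenience for the sketch: it lets back-propagation train the network with any evolved transfer function without deriving a gradient by hand for each candidate form.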



References

  1. Minsky M, Papert S (1969) Perceptrons. MIT Press, Cambridge, MA

  2. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning internal representations by error propagation. In: Rumelhart DE, McClelland JL (eds) Parallel distributed processing: explorations in the microstructure of cognition, vol 1: Foundations. MIT Press, Cambridge, MA

  3. Koza JR, Rice JP (1991) Genetic generation of both the weights and architecture for a neural network. In: Proceedings of the International Joint Conference on Neural Networks, Seattle, WA, July 1991

  4. Friedrich CM, Moraga C (1996) An evolutionary method to find good building-blocks for architectures of artificial neural networks. In: Proceedings of the Sixth International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, Granada, Spain, 1–6 July 1996

  5. Hussain TS, Browse RA (1998) Network generating attribute grammar encoding. In: Proceedings of the IEEE International Joint Conference on Neural Networks, Anchorage, AK, 5–9 May 1998

  6. Pujol JCF, Poli R (1998) Evolving neural networks using a dual representation with a combined crossover operator. In: IEEE International Conference on Evolutionary Computation, Anchorage, AK, 5–9 May 1998

  7. Pujol JCF, Poli R (1998) Evolving the topology and the weights of neural networks using a dual representation. Appl Intell 8(1):73–84

  8. Pujol JCF, Poli R (1998) Efficient evolution of asymmetric recurrent neural networks using a PDGP-inspired two-dimensional representation. In: Proceedings of the First European Workshop on Genetic Programming (EuroGP '98), Paris, France, April 1998

  9. Poli R (1997) Discovery of symbolic, neuro-symbolic, and neural networks with parallel distributed genetic programming. In: Proceedings of the Third International Conference on Artificial Neural Networks and Genetic Algorithms (ICANNGA '97), Norwich, UK, April 1997

  10. Blake CL, Merz CJ (1998) UCI repository of machine learning databases. University of California, Department of Information and Computer Science, Irvine, CA

  11. Koza JR, Bennett FH III, Andre D, Keane MA (1999) The design of analog circuits by means of genetic programming. In: Bentley PJ (ed) Evolutionary design by computers. Wiley, London

Author information

Corresponding author

Correspondence to Marijke F. Augusteijn.

About this article

Cite this article

Augusteijn, M.F., Harrington, T.P. Evolving transfer functions for artificial neural networks. Neural Comput & Applic 13, 38–46 (2004). https://doi.org/10.1007/s00521-003-0393-9

