
Sigmoidal Function Classes for Feedforward Artificial Neural Networks


Abstract

The role of activation functions in feedforward artificial neural networks has not been investigated to the extent it deserves. The commonly used sigmoidal functions appear only as isolated points in the space of sigmoidal functions, which makes comparison among them difficult. Moreover, these functions can be interpreted as the (suitably scaled) integral of some probability density function, generally taken to be symmetric and bell-shaped. Two parameterization methods are proposed that allow classes of sigmoidal functions to be constructed from any given sigmoidal function. The suitability of the members of the proposed classes is investigated, and it is demonstrated that all members satisfy the requirements for acting as an activation function in feedforward artificial neural networks.
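To make the abstract's two ideas concrete, the sketch below illustrates them in Python. It is a minimal illustration under stated assumptions, not the paper's actual construction: `sigmoid_from_density` recovers the logistic sigmoid as the integral (the cumulative distribution function) of the symmetric, bell-shaped logistic density, while `power_family` and `slope_family` are two hypothetical one-parameter classes built from a given base sigmoid. All function names and both parameterizations are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad


def logistic_density(t):
    """Symmetric, bell-shaped logistic density: the derivative of the
    logistic sigmoid sigma(x) = 1 / (1 + exp(-x))."""
    s = 1.0 / (1.0 + np.exp(-t))
    return s * (1.0 - s)


def sigmoid_from_density(x, density=logistic_density):
    """Recover a sigmoidal function as the integral of a density from
    -infinity to x, i.e. its cumulative distribution function."""
    value, _ = quad(density, -np.inf, x)
    return value


def power_family(sigma, p):
    """Hypothetical one-parameter class built from a base sigmoid:
    sigma_p(x) = sigma(x)**p.  For p > 0 each member is still bounded,
    monotone increasing, and has limits 0 and 1, so it remains a valid
    activation function.  (Illustrative, not the paper's scheme.)"""
    return lambda x: sigma(x) ** p


def slope_family(sigma, a):
    """Second hypothetical parameterization: horizontal scaling
    sigma_a(x) = sigma(a * x), which varies the slope at the origin."""
    return lambda x: sigma(a * x)


if __name__ == "__main__":
    sigma = lambda x: 1.0 / (1.0 + np.exp(-x))
    x = 0.7
    # Integrating the logistic density reproduces the logistic sigmoid.
    print(sigmoid_from_density(x), sigma(x))  # both ~0.668
    # Sweeping the parameter traces out a continuum of sigmoids.
    for p in (0.5, 1.0, 2.0):
        print(p, power_family(sigma, p)(x))
```

Sweeping p (or a) traces out a continuum of sigmoidal functions rather than the isolated points mentioned in the abstract, which is what makes comparison across the class tractable.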





Cite this article

Chandra, P. Sigmoidal Function Classes for Feedforward Artificial Neural Networks. Neural Processing Letters 18, 205–215 (2003). https://doi.org/10.1023/B:NEPL.0000011137.04221.96
