
A New Multi-output Neural Model with Tunable Activation Function and its Applications

Published in Neural Processing Letters

Abstract

In this paper, a new multi-output neural model with a tunable activation function (TAF) and its general form are presented. The model combines the traditional neural model with the TAF neural model. A recursive least squares (RLS) algorithm is used to train a multilayer feedforward neural network built from the new multi-output tunable-activation-function (MO-TAF) neurons. Simulation results show that the MO-TAF-based multilayer feedforward network outperforms both the traditional multilayer feedforward network and the feedforward network with tunable activation functions: it significantly simplifies the network architecture, improves accuracy, and speeds up convergence.
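This page does not reproduce the model's equations, so the following is only a minimal sketch, assuming the usual TAF formulation: the neuron's activation is a linear combination of fixed basis functions of its net input, the combination coefficients (one row per output) are the tunable parameters, and they are adapted with a textbook recursive least squares update. The class name, basis set, and all hyperparameters below are illustrative assumptions, not the paper's notation.

import numpy as np

# Sketch of a single multi-output neuron with a tunable activation
# function (MO-TAF). Only the activation coefficients C are adapted
# here, via standard RLS; the input weights are held fixed.
class MoTafNeuron:
    def __init__(self, n_in, n_out, n_basis=3, lam=0.99):
        rng = np.random.default_rng(0)
        self.w = rng.normal(scale=0.1, size=n_in)  # input weights (fixed in this sketch)
        self.b = 0.0                               # bias
        self.C = np.zeros((n_out, n_basis))        # tunable activation coefficients
        self.P = np.eye(n_basis) * 1e3             # RLS inverse-correlation matrix
        self.lam = lam                             # RLS forgetting factor

    def basis(self, s):
        # Fixed basis set; the paper's general form admits other choices.
        return np.array([np.tanh(s), s, s * s])

    def forward(self, x):
        phi = self.basis(self.w @ x + self.b)
        return self.C @ phi, phi

    def rls_step(self, x, target):
        # One RLS update of C; the gain vector is shared by all outputs
        # because every output uses the same regressor phi.
        y, phi = self.forward(x)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.C += np.outer(target - y, k)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam

# Toy run of the update: two outputs tracking two targets of a scalar input.
neuron = MoTafNeuron(n_in=1, n_out=2)
rng = np.random.default_rng(1)
for _ in range(200):
    x = rng.uniform(-1.0, 1.0, size=1)
    t = np.array([np.sin(2.0 * x[0]), x[0] ** 2])
    neuron.rls_step(x, t)

Because the outputs are linear in the coefficients C for a fixed net input, RLS applies exactly to this layer, which is consistent with the abstract's claim that RLS-style training converges faster than plain gradient descent.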



Cite this article

Shen, Y., Wang, B., Chen, F. et al. A New Multi-output Neural Model with Tunable Activation Function and its Applications. Neural Processing Letters 20, 85–104 (2004). https://doi.org/10.1007/s11063-004-0637-4
