Abstract
A novel learning algorithm for feedforward neural networks, called BPWA, is presented; it adjusts the weights during both the forward phase and the backward phase. In the forward pass it computes the minimum-norm least-squares solution for the weights between the hidden layer and the output layer, while the backward pass adjusts the weights connecting the input layer to the hidden layer by error gradient descent. The algorithm is compared with the Extreme Learning Machine (ELM), the BP algorithm, and the LMBP algorithm on function approximation and classification tasks. The experimental results demonstrate that the proposed algorithm performs well.
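To make the two-phase idea concrete, the following is a minimal NumPy sketch of such a hybrid scheme, assuming a single sigmoid hidden layer, a linear output layer, and a squared-error loss. The function and parameter names (train_bpwa, lr, n_hidden, epochs) are illustrative, not taken from the paper, and the pseudoinverse step stands in for the minimum-norm least-squares solution the abstract describes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bpwa(X, T, n_hidden=20, lr=0.01, epochs=100, seed=None):
    """Sketch of a BPWA-style loop. X: (n_samples, n_inputs), T: (n_samples, n_outputs).
    Details (activation, loss, update order) are assumptions, not the paper's spec."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((X.shape[1], n_hidden)) * 0.1  # input -> hidden weights

    for _ in range(epochs):
        # Forward phase: compute hidden activations, then take the
        # hidden -> output weights as the minimum-norm least-squares
        # solution of H @ W2 ~= T via the Moore-Penrose pseudoinverse.
        H = sigmoid(X @ W1)
        W2 = np.linalg.pinv(H) @ T

        # Backward phase: one gradient-descent step on the input -> hidden
        # weights for the squared error, with W2 held fixed.
        Y = H @ W2
        E = Y - T                                # output-layer error
        delta_H = (E @ W2.T) * H * (1.0 - H)     # backprop through the sigmoid
        W1 -= lr * (X.T @ delta_H)

    return W1, W2
```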
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Chen, H., Jin, F. (2006). A Novel Learning Algorithm for Feedforward Neural Networks. In: Wang, J., Yi, Z., Zurada, J.M., Lu, B.-L., Yin, H. (eds) Advances in Neural Networks - ISNN 2006. ISNN 2006. Lecture Notes in Computer Science, vol 3971. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11759966_76
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-34439-1
Online ISBN: 978-3-540-34440-7