Abstract
In this paper, a new learning algorithm that encodes a priori information into feedforward neural networks is proposed for the function approximation problem. The algorithm derives two kinds of constraints from a priori knowledge of the problem: architectural constraints and connection weight constraints. On one hand, the activation functions of the hidden neurons are specific polynomial functions; on the other hand, the connection weight constraints are obtained from the first-order derivative of the approximated function. Theoretical justification and experimental results show that the new algorithm achieves better generalization performance and a faster convergence rate than comparable algorithms.
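The abstract's two constraint types can be illustrated with a minimal sketch. This is not the paper's exact formulation (the full algorithm is not reproduced in this excerpt); it assumes hidden neurons with polynomial activations `phi_k(x) = x**k` (the architectural constraint) and encodes known first-derivative values of the target as additional least-squares conditions on the output-layer weights (the connection weight constraint):

```python
import numpy as np

# Hedged sketch, not the authors' algorithm: a one-input network whose
# hidden neurons compute phi_k(x) = x**k, with a priori knowledge of the
# target's first derivative added as extra least-squares constraints.

def fit_constrained_poly_net(x, y, dy, degree=5, lam=1.0):
    """Fit output weights c so that sum_k c_k * x**k matches the samples y
    while its derivative sum_k c_k * k * x**(k-1) matches dy (weight lam)."""
    n = len(x)
    # Function-value design matrix: Phi[i, k] = x_i**k
    Phi = np.vander(x, degree + 1, increasing=True)
    # Derivative design matrix: dPhi[i, k] = k * x_i**(k-1) (zero column for k = 0)
    k = np.arange(1, degree + 1)
    dPhi = np.hstack([np.zeros((n, 1)),
                      k * np.vander(x, degree, increasing=True)])
    # Stack both constraint sets and solve a single linear least-squares problem
    A = np.vstack([Phi, np.sqrt(lam) * dPhi])
    b = np.concatenate([y, np.sqrt(lam) * dy])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c
```

For example, approximating `sin(x)` on [0, π] with its a priori known derivative `cos(x)` supplied as `dy` fits both the function values and the slope information in one solve; the derivative term plays the role the abstract assigns to the connection weight constraints.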
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Nos. 60472111, 30570368 and 60405002).
Cite this article
Han, F., Huang, DS. A new constrained learning algorithm for function approximation by encoding a priori information into feedforward neural networks. Neural Comput & Applic 17, 433–439 (2008). https://doi.org/10.1007/s00521-007-0135-5