
A new constrained learning algorithm for function approximation by encoding a priori information into feedforward neural networks

  • Original Article
  • Published in Neural Computing and Applications

Abstract

In this paper, a new learning algorithm that encodes a priori information into feedforward neural networks is proposed for the function approximation problem. The algorithm derives two kinds of constraints from a priori information about the problem: architectural constraints and connection weight constraints. On one hand, the activation functions of the hidden neurons are chosen to be specific polynomial functions. On the other hand, the connection weight constraints are obtained from the first-order derivative of the function being approximated. Theoretical analysis and experimental results show that the new algorithm achieves better generalization performance and a faster convergence rate than comparable algorithms.
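The abstract's two ideas can be illustrated with a minimal sketch. The details below are assumptions, not the paper's algorithm: the hidden activations are taken to be the monomial basis phi_j(x) = x^j, and the a priori derivative information enters as a quadratic penalty tying the model's derivative to the known derivative of the target. With linear output weights, both the value-fit and derivative-fit terms are quadratic, so the constrained fit reduces to one augmented least-squares solve.

```python
import numpy as np

def hidden(x, degree):
    """Polynomial hidden-layer outputs: phi_j(x) = x**j, j = 1..degree.

    An assumed concrete choice of 'specific polynomial activation
    functions'; the paper's exact architecture is not reproduced here.
    """
    return np.vstack([x**j for j in range(1, degree + 1)]).T  # shape (n, degree)

def hidden_deriv(x, degree):
    """Analytic derivatives of the hidden activations: j * x**(j-1)."""
    return np.vstack([j * x**(j - 1) for j in range(1, degree + 1)]).T

def fit_constrained(x, y, dy, degree=5, lam=0.5):
    """Fit output weights w to match both values y and derivatives dy.

    Minimizes ||Phi w - y||^2 + lam * ||dPhi w - dy||^2, which is solved
    as a single stacked least-squares problem (a hypothetical stand-in
    for the paper's constrained learning procedure).
    """
    Phi = hidden(x, degree)
    dPhi = hidden_deriv(x, degree)
    A = np.vstack([Phi, np.sqrt(lam) * dPhi])
    b = np.concatenate([y, np.sqrt(lam) * dy])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Example: approximate sin(x) on [-1, 1], using its known first
# derivative cos(x) as the a priori information.
x = np.linspace(-1.0, 1.0, 50)
w = fit_constrained(x, np.sin(x), np.cos(x))
x_test = np.linspace(-1.0, 1.0, 200)
max_err = np.max(np.abs(hidden(x_test, 5) @ w - np.sin(x_test)))
```

The derivative penalty is one simple way to inject first-order a priori information; because the hidden basis is fixed, only the output weights are learned, which is what makes the problem linear here. The actual algorithm in the paper trains a full feedforward network under these constraints.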


Figs. 1–3 (images not reproduced in this version)



Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 60472111, 30570368 and 60405002).

Author information

Corresponding author

Correspondence to Fei Han.


About this article

Cite this article

Han, F., Huang, DS. A new constrained learning algorithm for function approximation by encoding a priori information into feedforward neural networks. Neural Comput & Applic 17, 433–439 (2008). https://doi.org/10.1007/s00521-007-0135-5

