
A Constrained Approximation Algorithm by Encoding Second-Order Derivative Information into Feedforward Neural Networks

  • Conference paper
Emerging Intelligent Computing Technology and Applications. With Aspects of Artificial Intelligence (ICIC 2009)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 5755))


Abstract

In this paper, a constrained learning algorithm is proposed for function approximation. The algorithm incorporates constraints, derived from a priori information about the approximated function, into single-hidden-layer feedforward neural networks. The activation functions of the hidden neurons are specific polynomial functions based on Taylor series expansions, and the connection-weight constraints are obtained from the second-order derivative information of the approximated function. Experimental results show that the new algorithm achieves better generalization performance than traditional learning algorithms.
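The abstract's idea can be illustrated with a minimal sketch. All names, the cubic activation, the penalty weight, and the training setup below are my own assumptions, not the paper's actual algorithm: a single-hidden-layer network uses a truncated Taylor-polynomial activation, and the training loss adds a penalty tying the network's second derivative to the known second derivative of the target function.

```python
# Hypothetical sketch (not the paper's method): fit f(x) = sin(x) with a
# single-hidden-layer network whose activation is a cubic Taylor polynomial,
# while penalizing mismatch with the known second derivative f''(x) = -sin(x).
import numpy as np

rng = np.random.default_rng(0)

def phi(z):
    # truncated Taylor-polynomial activation (cubic)
    return z + z**2 / 2 + z**3 / 6

def phi_dd(z):
    # exact second derivative of phi
    return 1.0 + z

def unpack(theta, h):
    # theta packs hidden weights w, biases b, output weights v
    return theta[:h], theta[h:2 * h], theta[2 * h:]

def net(theta, x, h):
    w, b, v = unpack(theta, h)
    return phi(np.outer(x, w) + b) @ v

def net_dd(theta, x, h):
    # d^2/dx^2 of the network output: sum_j v_j * w_j^2 * phi''(w_j x + b_j)
    w, b, v = unpack(theta, h)
    return phi_dd(np.outer(x, w) + b) @ (v * w**2)

def loss(theta, x, y, ydd, lam, h):
    fit = np.mean((net(theta, x, h) - y) ** 2)
    con = np.mean((net_dd(theta, x, h) - ydd) ** 2)  # derivative constraint
    return fit + lam * con

x = np.linspace(-1.0, 1.0, 40)
y, ydd = np.sin(x), -np.sin(x)   # target and its known second derivative

h, lam, lr, eps = 6, 0.1, 0.02, 1e-6
theta = rng.normal(scale=0.3, size=3 * h)
init_loss = loss(theta, x, y, ydd, lam, h)

for _ in range(2000):
    # plain gradient descent; gradients by central finite differences
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        grad[i] = (loss(theta + d, x, y, ydd, lam, h)
                   - loss(theta - d, x, y, ydd, lam, h)) / (2 * eps)
    theta -= lr * grad

final_loss = loss(theta, x, y, ydd, lam, h)
print(f"loss: {init_loss:.4f} -> {final_loss:.4f}")
```

The penalty weight `lam` trades off data fit against agreement with the a priori second-derivative information; the paper derives its constraints analytically rather than via a soft penalty, so this is only a conceptual analogue.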





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ling, Q.H., Han, F. (2009). A Constrained Approximation Algorithm by Encoding Second-Order Derivative Information into Feedforward Neural Networks. In: Huang, D.S., Jo, K.H., Lee, H.H., Kang, H.J., Bevilacqua, V. (eds) Emerging Intelligent Computing Technology and Applications. With Aspects of Artificial Intelligence. ICIC 2009. Lecture Notes in Computer Science, vol 5755. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04020-7_100

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-04020-7_100

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04019-1

  • Online ISBN: 978-3-642-04020-7

  • eBook Packages: Computer Science, Computer Science (R0)
