New Training Method and Optimal Structure of Backpropagation Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3610)

Abstract

A new algorithm was devised to speed up the convergence of backpropagation networks, and the Bayesian Information Criterion was employed to obtain the optimal network structure. The nonlinear neural network training problem can be partitioned into a part that is nonlinear in the weights of the hidden layers and a part that is linear in the weights of the output layer. The proposed algorithm speeds up convergence by applying the conjugate gradient method to the nonlinear part and the Kalman filter algorithm to the linear part. Simulation experiments with daily stock price data from the Thai market showed that the algorithm and the Bayesian Information Criterion performed satisfactorily.
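
The hybrid scheme described in the abstract can be sketched concretely. Below is a minimal, illustrative implementation, not the authors' code, for a one-hidden-layer network: the output-layer weights, which enter the model linearly, are re-estimated with a Kalman-filter (recursive least squares) pass, while the hidden-layer weights are moved along a Polak-Ribiere conjugate-gradient direction. The network size, step length, and toy data are assumptions made for the example; a standard form of the criterion, n log(SSE/n) + k log n, is then used to compare candidate structures.

    # A minimal sketch, not the authors' implementation: hybrid training of a
    # one-hidden-layer network.  Hidden weights (the nonlinear part) take a
    # Polak-Ribiere conjugate-gradient step; output weights (the linear part)
    # are re-estimated by a Kalman-filter / recursive-least-squares pass.
    # Network size, step length, and the toy data below are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def train_hybrid(X, y, n_hidden=5, epochs=200, step=0.05):
        n, d = X.shape
        W = rng.normal(scale=0.5, size=(d, n_hidden))  # nonlinear part
        v = np.zeros(n_hidden)                         # linear part
        g_prev = dir_prev = None
        for _ in range(epochs):
            H = np.tanh(X @ W)                         # hidden activations

            # Linear part: one Kalman-filter (RLS) sweep over the samples.
            P = 1e3 * np.eye(n_hidden)                 # initial covariance
            for h, t in zip(H, y):
                k = P @ h / (1.0 + h @ P @ h)          # Kalman gain
                v = v + k * (t - h @ v)                # innovation update
                P = P - np.outer(k, h @ P)             # covariance update

            # Nonlinear part: conjugate-gradient step on hidden weights.
            e = H @ v - y                              # residuals
            g = X.T @ (e[:, None] * (1.0 - H**2) * v) / n   # dE/dW
            if g_prev is None:
                direction = -g
            else:                                      # Polak-Ribiere beta
                beta = max(0.0, np.sum(g * (g - g_prev)) / np.sum(g_prev**2))
                direction = -g + beta * dir_prev
            W = W + step * direction                   # fixed step, no line search
            g_prev, dir_prev = g, direction
        return W, v

    def bic(X, y, W, v):
        # Standard BIC for Gaussian residuals: n*log(SSE/n) + k*log(n),
        # with k the number of free weights in the candidate structure.
        n = len(y)
        sse = np.sum((np.tanh(X @ W) @ v - y) ** 2)
        return n * np.log(sse / n) + (W.size + v.size) * np.log(n)

    # Structure selection: pick the hidden-layer size with the lowest BIC.
    X = rng.normal(size=(200, 3))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    scores = {m: bic(X, y, *train_hybrid(X, y, n_hidden=m)) for m in (2, 4, 8)}
    best = min(scores, key=scores.get)

Because the output weights are optimally re-fitted at every epoch, the conjugate-gradient search effectively operates on a reduced error surface over the hidden weights alone, which is the intuition behind the reported speed-up.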

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sureerattanan, S., Sureerattanan, N. (2005). New Training Method and Optimal Structure of Backpropagation Networks. In: Wang, L., Chen, K., Ong, Y.S. (eds) Advances in Natural Computation. ICNC 2005. Lecture Notes in Computer Science, vol 3610. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11539087_18

  • DOI: https://doi.org/10.1007/11539087_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28323-2

  • Online ISBN: 978-3-540-31853-8

  • eBook Packages: Computer Science (R0)
