
An Efficient Gradient Forecasting Search Method Utilizing the Discrete Difference Equation Prediction Model


Abstract

Optimization theory and methods profoundly influence numerous engineering designs and applications. The gradient descent method is simpler, and more widely used for solving optimization problems, than other search methods. However, it is easily trapped in local minima and converges slowly. This work presents a Gradient Forecasting Search Method (GFSM) that enhances the performance of the gradient descent method in resolving optimization problems.
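For concreteness, here is a minimal sketch of plain gradient descent on an illustrative one-dimensional objective (the function, starting point, and step size are our own choices, not the paper's). Started on the shallow side, it settles into a local minimum rather than the global one, which is precisely the failure mode GFSM targets:

```python
def gradient_descent(grad, x0, lr=0.01, tol=1e-8, max_iter=100_000):
    """Plain gradient descent: step against the gradient until it vanishes."""
    x = float(x0)
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:   # stationary point reached
            break
        x -= lr * g        # fixed-step descent update
    return x

# Illustrative non-convex objective: f(x) = x^4 - 3x^2 + x has a local
# minimum near x = 1.13 and its global minimum near x = -1.30.
df = lambda x: 4 * x**3 - 6 * x + 1   # f'(x)

print(gradient_descent(df, x0=2.0))   # ~1.13: trapped in the local minimum
```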

GFSM is based on the gradient descent method and on the universal Discrete Difference Equation Prediction Model (DDEPM) proposed herein, whose concept is derived from the grey prediction model. The original grey prediction model uses a mathematical hypothesis and approximation to transform a continuous differential equation into a discrete difference equation. This approach is not sound, because the sequence data being forecast are invariably discrete. To construct a more precise prediction model, this work therefore adopts a discrete difference equation directly. Via the universal DDEPM, GFSM can accurately predict the search direction and trend of the gradient descent method, and it adjusts the prediction steps dynamically using the golden section search algorithm.
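The universal DDEPM itself is defined in the full text. As a rough intuition for the idea of fitting a discrete difference equation to sequence data and extrapolating it, the sketch below implements the simpler discrete first-order grey-style predictor: fit x1(k+1) = b1*x1(k) + b2 on the accumulated sequence by least squares, then invert the accumulation. The function name ddepm_forecast and the sample sequence are our own illustrations, not the authors' formulation:

```python
import numpy as np

def ddepm_forecast(seq, steps=1):
    """Fit the discrete difference equation x1[k+1] = b1*x1[k] + b2 to the
    accumulated (cumulative-sum) sequence by least squares, then extrapolate
    and difference back to the original scale."""
    x0 = np.asarray(seq, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated sequence
    A = np.column_stack([x1[:-1], np.ones(len(x1) - 1)])
    b1, b2 = np.linalg.lstsq(A, x1[1:], rcond=None)[0]  # least-squares fit
    preds, last = [], x1[-1]
    for _ in range(steps):
        nxt = b1 * last + b2        # one-step difference-equation forecast
        preds.append(nxt - last)    # undo the accumulation
        last = nxt
    return preds

# e.g. extrapolate the trend of a (made-up) decaying gradient-norm sequence
print(ddepm_forecast([10.0, 7.9, 6.3, 5.0, 4.0], steps=2))
```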

Experimental results indicate that the proposed method accelerates the search speed of the gradient descent method and helps it escape from local minima. Our results further demonstrate that applying the golden section search method to set the prediction steps of the DDEPM dynamically is an efficient strategy for this search algorithm.
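The golden section search mentioned above is a classical derivative-free method for minimizing a unimodal function over an interval; GFSM uses it to tune the DDEPM prediction step. Below is a minimal standalone sketch (the interval, tolerance, and example objective are illustrative; the exact coupling to the prediction step is described in the full paper):

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search: shrink [a, b] by the golden ratio each
    iteration, reusing one interior function evaluation per step."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                          # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

# e.g. choose a step length minimizing an illustrative 1-D objective
print(golden_section(lambda s: (s - 1.3) ** 2, 0.0, 4.0))  # ~1.3
```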




Cite this article

Chen, CM., Lee, HM. An Efficient Gradient Forecasting Search Method Utilizing the Discrete Difference Equation Prediction Model. Applied Intelligence 16, 43–58 (2002). https://doi.org/10.1023/A:1012817410590
