A Gradient-based Continuous Method for Large-scale Optimization Problems

Published in: Journal of Global Optimization

Abstract

In this paper, we study a gradient-based continuous method for large-scale optimization problems. By converting the optimization problem into an ODE, we show that the solution trajectory of this ODE converges to the set of stationary points of the original optimization problem. We test the continuous method on large-scale problems available in the literature. The simulation results are promising.
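
As a rough illustration of the idea summarized above (not the authors' formulation, solver, or test set), the sketch below integrates the gradient-flow ODE dx/dt = -grad f(x) for an extended Rosenbrock objective using SciPy's LSODA integrator. The objective, dimension, starting point, time span, and tolerances are all assumptions made for this sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x):
    # Extended Rosenbrock function, a common large-scale test problem
    # (an assumption here; the paper's own test set is not reproduced).
    return np.sum(100.0 * (x[1::2] - x[::2] ** 2) ** 2 + (1.0 - x[::2]) ** 2)

def grad_f(x):
    g = np.zeros_like(x)
    t = x[1::2] - x[::2] ** 2
    g[::2] = -400.0 * x[::2] * t - 2.0 * (1.0 - x[::2])
    g[1::2] = 200.0 * t
    return g

def rhs(t, x):
    # Gradient-flow ODE: dx/dt = -grad f(x); its equilibria are exactly
    # the stationary points of f.
    return -grad_f(x)

n = 200                                  # dimension kept modest for the sketch
x0 = np.full(n, -1.2)
x0[1::2] = 1.0                           # standard Rosenbrock starting point

# Follow the solution trajectory with a stiff/non-stiff switching integrator
# and check how close the end point is to a stationary point.
sol = solve_ivp(rhs, (0.0, 1.0e3), x0, method="LSODA", rtol=1e-8, atol=1e-10)
x_end = sol.y[:, -1]
print("f(x_end) =", f(x_end), "||grad f(x_end)|| =", np.linalg.norm(grad_f(x_end)))
```

Because stationary points of f are equilibria of this ODE, following the trajectory long enough drives the gradient norm toward zero, which is the behavior the convergence result in the paper formalizes.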

Author information

Corresponding author

Correspondence to Liqun Qi.

Additional information

This research was supported in part by Grants FRG/99-00/II-23 and FRG/00-01/II-63 of Hong Kong Baptist University and the Research Grants Council of Hong Kong.

About this article

Cite this article

Liao, LZ., Qi, L. & Tam, H.W. A Gradient-based Continuous Method for Large-scale Optimization Problems. J Glob Optim 31, 271–286 (2005). https://doi.org/10.1007/s10898-004-5700-1
