
A rapidly convergent five-point algorithm for univariate minimization

Published in: Mathematical Programming

Abstract

This paper presents an algorithm for minimizing a function of one variable that uses function values, but not derivative values, at five points to generate each iterate. It employs quadratic and polyhedral approximations together with a safeguard. The basic method without the safeguard exhibits a type of better than linear convergence for certain piecewise twice continuously differentiable functions. The safeguard guarantees convergence to a stationary point for very general functions and preserves the better than linear convergence of the basic method.



Additional information

This paper is dedicated to Phil Wolfe on the occasion of his 65th birthday.

Research sponsored by the Institut National de Recherche en Informatique et Automatique, Rocquencourt, France, and by the Air Force Office of Scientific Research, Air Force Systems Command, USAF, under Grant Number AFOSR-83-0210.

Research sponsored, in part, by the Institut National de Recherche en Informatique et Automatique, Rocquencourt, France.


Cite this article

Mifflin, R., Strodiot, J.J. A rapidly convergent five-point algorithm for univariate minimization. Mathematical Programming 62, 299–319 (1993). https://doi.org/10.1007/BF01585172
