
A bracketing technique to ensure desirable convergence in univariate minimization

  • Published in: Mathematical Programming

Abstract

This paper gives a general safeguarded bracketing technique for minimizing a function of a single variable. In certain cases the technique guarantees convergence to a stationary point and, when combined with sequential polynomial and/or polyhedral fitting algorithms, preserves rapid convergence. Each bracket has an interior point whose function value does not exceed those of the two bracket endpoints. The safeguarding technique consists of replacing the fitting algorithm's candidate iterate by a nearby point whose distance from each of the three bracket points exceeds a positive multiple of the square of the bracket length. It is shown that a particular safeguarded quadratic fitting algorithm converges better than linearly, in terms of the bracket endpoints, for a strongly convex, twice continuously differentiable function.
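
To make the idea concrete, here is a minimal Python sketch, not the authors' algorithm, of the two ingredients the abstract mentions: a three-point quadratic fit that produces a candidate iterate, and the safeguard that keeps that candidate at least a positive multiple of the squared bracket length away from the three bracket points. The function names, the parameter mu, and the fallback rules are illustrative assumptions; the paper's actual method involves further conditions not reproduced here.

```python
def quadratic_fit_candidate(a, c, b, fa, fc, fb):
    """Minimizer of the quadratic interpolating (a, fa), (c, fc), (b, fb).

    Standard three-point quadratic interpolation in Newton form; falls back
    to the bracket midpoint if the interpolant is not strictly convex.
    """
    d1 = (fc - fa) / (c - a)           # first divided difference on [a, c]
    d2 = (fb - fc) / (b - c)           # first divided difference on [c, b]
    curv = (d2 - d1) / (b - a)         # second divided difference (curvature)
    if curv <= 0.0:                    # interpolant has no interior minimizer
        return 0.5 * (a + b)
    return 0.5 * (a + c) - d1 / (2.0 * curv)


def safeguard(x, a, c, b, mu=0.1):
    """Replace a candidate iterate x lying too close to any of the bracket
    points a < c < b by a nearby point whose distance from that point is at
    least delta = mu * (b - a)**2, the rule sketched in the abstract.

    This is a simplified sketch: a complete implementation would also handle
    the case where the shifted point collides with another bracket point or
    leaves the bracket.
    """
    delta = mu * (b - a) ** 2
    for p in (a, c, b):
        if abs(x - p) < delta:
            # Shift x away from p, toward the interior of the bracket.
            x = p + delta if p < 0.5 * (a + b) else p - delta
    return x


if __name__ == "__main__":
    f = lambda t: (t - 0.3) ** 2 + 1.0   # simple strongly convex test function
    a, c, b = 0.0, 0.5, 1.0              # bracket with f(c) <= f(a) and f(c) <= f(b)
    x = quadratic_fit_candidate(a, c, b, f(a), f(c), f(b))
    x = safeguard(x, a, c, b)
    print(x)                             # 0.3: the fit is exact for a quadratic f
```

In this hypothetical example the quadratic fit already lands on the minimizer, so the safeguard leaves it unchanged; its role is to prevent candidates from clustering too near a bracket point, which is what can stall pure interpolation schemes.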





Additional information

Research sponsored by the Institut National de Recherche en Informatique et en Automatique, Rocquencourt, France, and by the Air Force Office of Scientific Research, Air Force Systems Command, USAF, under Grant Number AFOSR-83-0210. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.

Research sponsored, in part, by the Institut National de Recherche en Informatique et en Automatique, Rocquencourt, France.



Cite this article

Mifflin, R., Strodiot, J.J. A bracketing technique to ensure desirable convergence in univariate minimization. Mathematical Programming 43, 117–130 (1989). https://doi.org/10.1007/BF01582285


