
Steplength algorithms for minimizing a class of nondifferentiable functions


Abstract

Four steplength algorithms are presented for minimizing a class of nondifferentiable functions which includes functions arising from l₁ and l∞ approximation problems and penalty functions arising from constrained optimization problems. Two algorithms are given for the case when derivatives are available wherever they exist and two for the case when they are not available. We take the view that although a simple steplength algorithm may be all that is required to meet convergence criteria for the overall algorithm, from the point of view of efficiency it is important that the step achieve as large a reduction in the function value as possible, given a certain limit on the effort to be expended. The algorithms include the facility for varying this limit, producing anything from an algorithm requiring a single function evaluation to one doing an exact linear search. They are based on univariate minimization algorithms which we present first. These are normally at least quadratically convergent when derivatives are used and superlinearly convergent otherwise, regardless of whether or not the function is differentiable at the minimum.
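The paper itself is behind the paywall, but the "limit on effort" idea from the abstract is easy to illustrate. The sketch below is not the authors' method: it uses plain golden-section search, which is derivative-free and only linearly convergent, rather than the safeguarded interpolation schemes the paper develops. The function, bracket, and the budget parameter max_evals are all hypothetical; the point is only that capping the number of function evaluations moves the search smoothly between a cheap approximate step and a near-exact linear search, and that such a method tolerates a kink at the minimizer of the kind an l₁ penalty term produces along a search line.

import math

def golden_section_step(f, lo, hi, max_evals=8):
    """Derivative-free search for a minimizer of f on [lo, hi].

    f is assumed unimodal on [lo, hi] but need not be differentiable.
    max_evals (>= 2) caps the number of function evaluations: a small
    budget yields a cheap, crude step, a large budget approaches an
    exact linear search.
    """
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/golden ratio, ~0.618
    # Two interior trial points, placed so one can be reused each step.
    x1 = hi - invphi * (hi - lo)
    x2 = lo + invphi * (hi - lo)
    f1, f2 = f(x1), f(x2)
    evals = 2
    while evals < max_evals:
        if f1 <= f2:
            # Minimizer lies in [lo, x2]; old x1 becomes the new x2.
            hi, x2, f2 = x2, x1, f1
            x1 = hi - invphi * (hi - lo)
            f1 = f(x1)
        else:
            # Minimizer lies in [x1, hi]; old x2 becomes the new x1.
            lo, x1, f1 = x1, x2, f2
            x2 = lo + invphi * (hi - lo)
            f2 = f(x2)
        evals += 1
    return x1 if f1 <= f2 else x2

# A function with a kink at its minimizer alpha = 0.3, the kind of
# nondifferentiability an l1 penalty function produces along a line.
f = lambda a: abs(a - 0.3) + 0.5 * (a - 0.3) ** 2

print(golden_section_step(f, 0.0, 1.0, max_evals=4))    # rough step
print(golden_section_step(f, 0.0, 1.0, max_evals=40))   # ~0.3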





Additional information

The work of this author was supported in part by the National Research Council of Canada and in part by the National Science Foundation Grant MCS 75-13497-A01.


Cite this article

Murray, W., Overton, M.L. Steplength algorithms for minimizing a class of nondifferentiable functions. Computing 23, 309–331 (1979). https://doi.org/10.1007/BF02254861
