
A projected conjugate gradient method for sparse minimax problems


Abstract

A new method for nonlinear minimax problems is presented. The method is of the trust-region type and is based on sequential linear programming. It is a first-order method: it uses only first derivatives and does not approximate Hessians. The method is well suited to large sparse problems, since it requires only software for sparse linear programming and a sparse symmetric positive definite equation solver. At each iteration a special linear/quadratic model of the function is minimized but, contrary to usual practice in trust-region methods, the quadratic model is defined only on a one-dimensional path from the current iterate to the boundary of the trust region. Conjugate gradients are used to define this path. One iteration involves one LP subproblem and requires three function evaluations and one gradient evaluation. Promising numerical results obtained with the method are presented; in fact, the number of iterations required is comparable to that of state-of-the-art quasi-Newton codes.
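To make the per-iteration structure concrete, the following Python sketch shows a plain SLP trust-region step for the minimax problem min_x max_i f_i(x): linearize the f_i, solve one LP inside an infinity-norm trust region, and accept or reject the step by comparing actual and predicted reduction. This is a deliberately simplified illustration under generic assumptions, not the paper's algorithm: the conjugate-gradient path on which the quadratic model is minimized is omitted, and the function names, acceptance threshold, and radius-update rules are assumptions rather than the paper's exact choices.

# A minimal illustrative sketch, NOT the paper's algorithm: one basic SLP
# trust-region loop for the minimax problem  min_x max_i f_i(x).  The
# conjugate-gradient path described in the abstract is omitted; the acceptance
# test and radius updates below are generic choices.
import numpy as np
from scipy.optimize import linprog

def slp_minimax(f, jac, x0, delta=1.0, tol=1e-8, max_iter=100):
    """f(x): vector of the f_i(x); jac(x): their Jacobian (rows = gradients)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        fx, J = f(x), jac(x)
        Fx = fx.max()                       # minimax objective F(x) = max_i f_i(x)
        m = fx.size

        # LP subproblem: minimize t subject to f_i(x) + g_i^T d <= t and the
        # infinity-norm trust region |d_j| <= delta (variables z = (d, t)).
        c = np.zeros(n + 1); c[-1] = 1.0
        A_ub = np.hstack([J, -np.ones((m, 1))])
        bounds = [(-delta, delta)] * n + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=-fx, bounds=bounds, method="highs")
        d, t = res.x[:n], res.x[-1]

        pred = Fx - t                       # reduction predicted by the linear model
        if pred <= tol * (1.0 + abs(Fx)):
            return x                        # linear model cannot improve: stop
        rho = (Fx - f(x + d).max()) / pred  # agreement of model and function

        if rho > 0.1:                       # accept the step
            x = x + d
        delta *= 2.0 if rho > 0.7 else (0.5 if rho < 0.25 else 1.0)
    return x

# Hypothetical usage: best Chebyshev fit of a + b*t to exp(-t) at three points.
ts = np.array([0.0, 0.5, 1.0]); ys = np.exp(-ts)
def f(x):
    r = x[0] + x[1] * ts - ys
    return np.concatenate([r, -r])          # max of +/- residuals = max |residual|
def jac(x):
    J = np.column_stack([np.ones_like(ts), ts])
    return np.vstack([J, -J])
print(slp_minimax(f, jac, np.zeros(2)))

The infinity-norm trust region is what keeps the subproblem a linear program, consistent with the abstract's reliance on sparse LP software; the paper's actual radius and acceptance rules, and the three function evaluations per iteration along the conjugate-gradient path, are not reproduced in this sketch.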



Additional information

Research supported by The Nordic Council of Ministers, The Icelandic Science Council, The University of Iceland Research Fund and The Danish Natural Science Research Council.

Cite this article

Jónasson, K. A projected conjugate gradient method for sparse minimax problems. Numer Algor 5, 309–323 (1993). https://doi.org/10.1007/BF02108465
