Abstract
A line search algorithm for minimizing nonconvex and/or nonsmooth objective functions is presented. The algorithm is a hybrid between a standard Broyden–Fletcher–Goldfarb–Shanno (BFGS) method and an adaptive gradient sampling (GS) method. The BFGS strategy is employed because it typically yields fast convergence to the vicinity of a stationary point, while the adaptive GS strategy ensures that convergence continues to such a point. Under suitable assumptions, the algorithm is proved to converge globally with probability one. The algorithm has been implemented in C++, and the results of numerical experiments illustrate the efficacy of the proposed approach.
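To picture the gradient sampling component, the following minimal Python sketch (not the authors' C++ implementation, and omitting the BFGS and adaptive-sampling machinery described in the paper) illustrates the basic GS idea of Burke, Lewis, and Overton: approximate a descent direction by the negative of the minimum-norm element of the convex hull of gradients sampled near the current iterate, then take a backtracking line-search step. The sampling radius `eps`, the sample size, and the helper names `gs_direction` and `gs_step` are illustrative assumptions, not quantities from the paper.

```python
# Minimal gradient-sampling sketch (illustrative only; no BFGS, no adaptive sampling).
import numpy as np
from scipy.optimize import minimize

def gs_direction(grad, x, eps=1e-4, m=None, rng=None):
    """Approximate steepest-descent direction for a nonsmooth f at x:
    the negative of the minimum-norm vector in the convex hull of gradients
    sampled at points near x (plus the gradient at x itself)."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    m = 2 * n if m is None else m                       # sample size; 2n is a common heuristic
    pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, n)) # points sampled near x
    G = np.vstack([grad(x)] + [grad(p) for p in pts])   # rows are sampled gradients
    k = G.shape[0]
    Q = G @ G.T
    # min_{lam in simplex} || G^T lam ||^2, solved here with SLSQP for simplicity
    res = minimize(lambda lam: float(lam @ Q @ lam),
                   np.full(k, 1.0 / k), method="SLSQP",
                   bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}])
    g = G.T @ res.x                                     # minimum-norm convex combination
    return -g

def gs_step(f, grad, x, eps=1e-4, c1=1e-4):
    """One step along the sampled direction with simple Armijo backtracking."""
    d = gs_direction(grad, x, eps=eps)
    t, f0 = 1.0, f(x)
    while f(x + t * d) > f0 - c1 * t * np.dot(d, d) and t > 1e-12:
        t *= 0.5
    return x + t * d

# Example on a simple nonsmooth test function f(x) = |x_1| + x_2^2.
if __name__ == "__main__":
    f = lambda x: abs(x[0]) + x[1] ** 2
    grad = lambda x: np.array([np.sign(x[0]), 2.0 * x[1]])
    x = np.array([1.0, -1.5])
    for _ in range(50):
        x = gs_step(f, grad, x)
    print(x, f(x))
```

In the paper's algorithm the sampled-gradient model is combined with BFGS curvature information and the sampling radius is adapted across iterations; the sketch above fixes the radius and uses the identity metric purely for brevity.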


Additional information
F. E. Curtis and X. Que were supported in part by National Science Foundation Grant DMS-1016291.
About this article
Cite this article
Curtis, F.E., Que, X. A quasi-Newton algorithm for nonconvex, nonsmooth optimization with global convergence guarantees. Math. Prog. Comp. 7, 399–428 (2015). https://doi.org/10.1007/s12532-015-0086-2
Keywords
- Nonsmooth optimization
- Nonconvex optimization
- Unconstrained optimization
- Quasi-Newton methods
- Gradient sampling
- Line search methods