Abstract
We propose new versions of accelerated first-order methods for convex composite optimization in which the prox parameter is allowed to increase from one iteration to the next. In particular, we show that a full backtracking strategy can be used within the FISTA and FALM algorithms while preserving their worst-case iteration complexities of \(O(\sqrt{L(f)/\epsilon })\). In the original versions of FISTA and FALM, the prox parameter value on each iteration must be bounded from above by its value on all prior iterations, and the complexity of the algorithm then depends on the smallest prox parameter value encountered. Full backtracking removes this restriction within the framework of accelerated first-order and alternating linearization methods, yielding better complexity estimates that depend on the "average" prox parameter value. Moreover, we show that for compressed sensing problems and the Lasso, the additional cost of the new backtracking strategy is negligible compared to the cost of an original FISTA iteration. Our computational results confirm the benefit of the new algorithm.
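To make the idea concrete, the following is a minimal sketch of FISTA for the Lasso with a "full" backtracking line search: the trial Lipschitz estimate is reduced at the start of every iteration, so the prox parameter \(1/L\) is allowed to increase from one iteration to the next, rather than being monotonically bounded by prior values. All function and parameter names here are illustrative, not taken from the paper's pseudocode.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_full_backtracking(A, b, lam, L0=1.0, eta=2.0, max_iter=200):
    """Illustrative sketch: min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    At each iteration L is first decreased (step 1/L grows), then
    increased by factor eta until the upper-quadratic model holds."""
    n = A.shape[1]
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    L = L0
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
    for _ in range(max_iter):
        # Full backtracking: allow the prox parameter 1/L to increase.
        L = max(L / eta, L0)
        grad = A.T @ (A @ y - b)
        while True:
            x_new = soft_threshold(y - grad / L, lam / L)
            d = x_new - y
            # Sufficient-decrease test against the quadratic upper model.
            if f(x_new) <= f(y) + grad @ d + 0.5 * L * (d @ d) + 1e-12:
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

For Lasso-type objectives the extra cost per backtracking trial is dominated by one function evaluation, which (as the paper notes for compressed sensing and Lasso) is cheap relative to the gradient computation of a full FISTA iteration.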




References
A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sciences, 2 (2009), pp. 183–202.
E. van den Berg, M. P. Friedlander, G. Hennenfent, F. Herrmann, R. Saab and Ö. Yılmaz, Sparco: a testing framework for sparse reconstruction, Tech. Report TR-2007-20, Dept. Computer Science, University of British Columbia, Vancouver, Oct 2007.
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning, 3 (2011), pp. 1–122.
E. Candès, Compressive sampling, Proc. International Congress of Mathematics, 3 (2006), pp. 1433–1452.
M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems, IEEE J. Sel. Top. Signal Process, 1 (2007).
A. Frank and A. Asuncion, UCI machine learning repository, 2010.
D. Goldfarb, S. Ma, and K. Scheinberg, Fast alternating linearization methods for minimizing the sum of two convex functions, Math. Program., 141 (2013), pp. 349–382.
A. Nemirovski and D. Yudin, Informational complexity and efficient methods for solution of convex extremal problems, Wiley, New York, 1983.
Y. E. Nesterov, Gradient methods for minimizing composite objective function. http://www.optimization-online.org.
Y. E. Nesterov, A method for unconstrained convex minimization problem with the rate of convergence \({\cal O}(1/k^{2})\), Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543–547.
Y. E. Nesterov, Introductory lectures on convex optimization: a basic course, vol. 87, Springer, 2004, pp. xviii+236.
Y. E. Nesterov, Smooth minimization of non-smooth functions, Math. Program. Ser. A, 103 (2005), pp. 127–152.
R. Tibshirani, Regression shrinkage and selection via the lasso, Journal Royal Statistical Society B, 58 (1996), pp. 267–288.
P. Tseng, On accelerated proximal gradient methods for convex–concave optimization, submitted to SIAM J. Optim., 2008.
M. Yuan and Y. Lin, Model selection and estimation in regression with grouped variables, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68 (2006), pp. 49–67.
Y. Zhang, Yall1: Your algorithms for \(\ell _1\). http://www.caam.rice.edu/optimization/L1/YALL1/ (2009).
Acknowledgments
The authors are grateful to A. d’Aspremont for helpful discussions on the average case behavior and to S. Ma for help with preparing the manuscript. We are also grateful to the anonymous referees for their very helpful comments. The research of K. Scheinberg on this work was supported in part by National Science Foundation (NSF) Grant DMS 10-16571, Air Force Office of Scientific Research (AFOSR) Grant FA9550-11-1-0239, and Defense Advanced Research Projects Agency (DARPA) Grant FA 9550-12-1-0406 negotiated by AFOSR. The research of D. Goldfarb on this work was supported in part by NSF Grant DMS 10-16571, Office of Naval Research (ONR) Grant N00014-08-1-1118, and Department of Energy (DOE) Grant DE-FG02-08ER25856.
Communicated by Yurii Nesterov.
Scheinberg, K., Goldfarb, D. & Bai, X. Fast First-Order Methods for Composite Convex Optimization with Backtracking. Found Comput Math 14, 389–417 (2014). https://doi.org/10.1007/s10208-014-9189-9