Abstract
This paper considers a special but broad class of convex programming problems whose feasible region is a simple compact convex set intersected with the inverse image of a closed convex cone under an affine transformation. It studies the computational complexity of quadratic-penalty-based methods for solving this class of problems. An iteration of these methods, which is simply an iteration of Nesterov’s optimal method (or one of its variants) for approximately solving a smooth penalization subproblem, consists of one or two projections onto the simple convex set. Iteration-complexity bounds, expressed in terms of such iterations, are derived for two quadratic-penalty-based variants: one which applies the quadratic penalty method directly to the original problem, and another which applies it to a perturbation of the original problem obtained by adding a small quadratic term to its objective function.
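To make the scheme concrete, the following is a minimal numerical sketch, not the paper’s algorithm: it assumes the cone is \(\{0\}\) (i.e., equality constraints \(Ax = b\)), takes the simple set to be a box, and uses a FISTA-style variant of Nesterov’s optimal method on the penalized subproblem \(\min\{f(x) + (\rho/2)\Vert Ax-b\Vert^2 : x \in X\}\). All names (`penalized_accel`, `proj_box`) are illustrative.

```python
import math

def penalized_accel(grad_f, L_f, A, b, rho, proj, x0, iters=2000):
    """Accelerated projected-gradient (FISTA-style) method applied to the
    quadratic-penalty subproblem  min { f(x) + (rho/2)*||A x - b||^2 : x in X },
    where X is a 'simple' set with an easy projection `proj`.
    Each iteration costs one projection onto X."""
    n = len(x0)
    mv = lambda M, v: [sum(row[j] * v[j] for j in range(len(v))) for row in M]
    At = [[A[i][j] for i in range(len(A))] for j in range(n)]  # transpose of A
    # Lipschitz constant of the penalized gradient: L_f + rho*||A||^2
    # (Frobenius norm used as a cheap upper bound on the spectral norm).
    L = L_f + rho * sum(aij * aij for row in A for aij in row)
    x_prev, y, t = list(x0), list(x0), 1.0
    for _ in range(iters):
        r = [ri - bi for ri, bi in zip(mv(A, y), b)]          # residual A y - b
        g = [gf + rho * gp for gf, gp in zip(grad_f(y), mv(At, r))]
        x = proj([yi - gi / L for yi, gi in zip(y, g)])       # projected gradient step
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = [xi + (t - 1.0) / t_next * (xi - xpi) for xi, xpi in zip(x, x_prev)]
        x_prev, t = x, t_next
    return x_prev

# Example: min (1/2)||x||^2 subject to x1 + x2 = 1 over the box [0,1]^2.
proj_box = lambda v: [min(1.0, max(0.0, vi)) for vi in v]
sol = penalized_accel(lambda z: list(z), 1.0, [[1.0, 1.0]], [1.0], 100.0, proj_box, [0.0, 0.0])
# sol is near (0.4975, 0.4975); the violation of x1 + x2 = 1 shrinks as rho grows.
```

For this instance the penalized minimizer is \(x_1 = x_2 = \rho/(1+2\rho)\), so the constraint violation decays like \(O(1/\rho)\), illustrating why the complexity analysis must balance penalty accuracy against the growth of the subproblem’s Lipschitz constant \(L_f + \rho\Vert A\Vert^2\).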
The work of the first author was partially supported by NSF Grants CCF-0430644 and CMMI-1000347. The work of the second author was partially supported by NSF Grants CCF-0430644, CCF-0808863 and CMMI-0900094 and ONR Grants N00014-08-1-0033 and N00014-11-1-0062.
Appendix
In this section, we prove Proposition 1.
Proof of Proposition 1
Define \(\mathcal{C} := \{(v, t) \in \mathbb{R}^m \times \mathbb{R} : \Vert v\Vert \le t\}\) and let \(\mathcal{C}^*\) denote the dual cone of \(\mathcal{C}\). It is easy to see that \(\mathcal{C}^* = \{(\tilde{v}, \tilde{t}) \in \mathbb{R}^m \times \mathbb{R} : \Vert \tilde{v}\Vert \le \tilde{t}\}\), i.e., \(\mathcal{C}\) is self-dual. By definition of \(d_{\mathcal{K}^*}\) and conic duality, we have
$$
d_{\mathcal{K}^*}(u) = \max\{\langle u, v \rangle : v \in (-\mathcal{K}) \cap B(0,1)\}, \quad \forall u \in \mathbb{R}^m.
$$
Statement (a) follows from the above identity and the definition of the support function of a set (see Sect. 1.1).
To show statement (b), let \(u \in \mathbb{R}^m\) and \(\lambda \in \mathcal{K}\) be given, and assume without loss of generality that \(\lambda \ne 0\). Noting that \(-\lambda/\Vert \lambda \Vert \in C := (-\mathcal{K}) \cap B(0,1)\), we conclude from the above identity that \(d_{\mathcal{K}^*}(u) \ge \langle u, -\lambda/\Vert \lambda \Vert \rangle\), or equivalently, \(\langle u, \lambda \rangle \ge - \Vert \lambda \Vert \, d_{\mathcal{K}^*}(u)\). \(\square\)
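Both statements can be sanity-checked numerically in a special case. The sketch below takes \(\mathcal{K} = \mathbb{R}^m_+\), which is self-dual, so \(d_{\mathcal{K}^*}(u) = \mathrm{dist}(u, \mathbb{R}^m_+) = \Vert \min(u,0)\Vert\); the identity of statement (a) is then attained at \(v^* = -u_-/\Vert u_-\Vert\), where \(u_- = \max(-u, 0)\). This is an illustrative check, not part of the proof.

```python
import math, random

def norm(v): return math.sqrt(sum(x * x for x in v))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def dist_dual_orthant(u):
    # K = R^m_+ is self-dual, so d_{K*}(u) = dist(u, R^m_+) = ||min(u, 0)||.
    return norm([min(ui, 0.0) for ui in u])

u = [-0.3, 0.8, -1.2, 0.5, 0.1]

# Statement (a): d_{K*}(u) = max{ <u, v> : v in (-K) ∩ B(0,1) }, the support
# function of (-K) ∩ B(0,1); the maximum is attained at v* = -u_- / ||u_-||.
u_minus = [max(-ui, 0.0) for ui in u]
v_star = [-x / norm(u_minus) for x in u_minus]
assert abs(dot(u, v_star) - dist_dual_orthant(u)) < 1e-9

# Statement (b): <u, lam> >= -||lam|| * d_{K*}(u) for every lam in K = R^m_+.
rng = random.Random(0)
for _ in range(1000):
    lam = [rng.uniform(0.0, 1.0) for _ in u]
    assert dot(u, lam) >= -norm(lam) * dist_dual_orthant(u) - 1e-12
```

The check of (b) mirrors the proof: each sampled \(\lambda \ge 0\) yields \(-\lambda/\Vert\lambda\Vert \in (-\mathcal{K}) \cap B(0,1)\), so the support-function identity immediately gives the inequality.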
Lan, G., Monteiro, R.D.C. Iteration-complexity of first-order penalty methods for convex programming. Math. Program. 138, 115–139 (2013). https://doi.org/10.1007/s10107-012-0588-x