
Dual Convergence for Penalty Algorithms in Convex Programming

Journal of Optimization Theory and Applications

Abstract

Algorithms for convex programming based on penalty methods can be designed to have good primal convergence properties even without uniqueness of optimal solutions. Taking primal convergence for granted, in this paper we investigate the asymptotic behavior of an appropriate dual sequence obtained directly from the primal iterates. First, under mild hypotheses, which include the standard Slater condition but neither strict complementarity nor second-order conditions, we show that this dual sequence is bounded and that each of its cluster points belongs to the set of Karush–Kuhn–Tucker multipliers. Then we identify a general condition on the behavior of the generated primal objective values that ensures full convergence of the dual sequence to a specific multiplier. This dual limit depends only on the particular penalty scheme used by the algorithm. Finally, we apply this approach to prove the first general dual convergence result of this kind for penalty-proximal algorithms in a nonlinear setting.
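For context, the dual sequence in penalty methods of this type is typically obtained by applying the derivative of the penalty function to the scaled constraint values at the primal iterates. The display below is a minimal sketch of this classical construction for the program min f(x) subject to g_i(x) ≤ 0, i = 1,…,m; the penalty function θ, the parameter sequence r_k, and the exponential example are illustrative assumptions and need not coincide with the exact scheme analyzed in the paper:

\[
x^k \in \operatorname*{argmin}_{x \in \mathbb{R}^n} \; f(x) + r_k \sum_{i=1}^{m} \theta\!\left(\frac{g_i(x)}{r_k}\right), \qquad r_k \downarrow 0,
\]
\[
\lambda_i^k := \theta'\!\left(\frac{g_i(x^k)}{r_k}\right), \qquad i = 1, \dots, m.
\]

For the exponential penalty θ(s) = e^s this gives λ_i^k = exp(g_i(x^k)/r_k); the results described above concern the boundedness of the sequence (λ^k) and its convergence to a specific Karush–Kuhn–Tucker multiplier.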

Author information

Corresponding author: Miguel Carrasco.

Additional information

Communicated by Jean-Pierre Crouzeix.

About this article

Cite this article

Alvarez, F., Carrasco, M. & Champion, T. Dual Convergence for Penalty Algorithms in Convex Programming. J Optim Theory Appl 153, 388–407 (2012). https://doi.org/10.1007/s10957-011-9967-3
