A class of convergent primal-dual subgradient algorithms for decomposable convex programs

  • Published in: Mathematical Programming

Abstract

In this paper we develop a primal-dual subgradient algorithm for preferably decomposable, generally nondifferentiable, convex programming problems under the usual regularity conditions. The algorithm employs a Lagrangian dual function along with a suitable penalty function satisfying a specified set of properties in order to generate a sequence of primal and dual iterates, some subsequence of which converges to a pair of primal-dual optimal solutions. Several classical types of penalty functions are shown to satisfy these properties. A geometric convergence rate is established for the algorithm under some additional assumptions. This approach has three principal advantages. First, both primal and dual solutions are available, which proves useful in several contexts. Second, the choice of step sizes, which plays an important role in subgradient optimization, is guided more determinately in this method via primal and dual information. Third, typical subgradient algorithms lack an appropriate stopping criterion, so the quality of the solution obtained after a finite number of steps is usually unknown. In contrast, by using the primal-dual gap, the proposed algorithm possesses a natural stopping criterion.
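The flavor of a primal-dual subgradient iteration with a duality-gap stopping rule can be illustrated on a small convex program. The sketch below is a minimal illustration under assumed choices (a toy quadratic problem, a fixed step size, and illustrative tolerances); it is not the authors' scheme, which additionally couples a penalty function to the Lagrangian dual.

```python
# Illustrative sketch only: primal-dual subgradient steps with a
# duality-gap stopping criterion, on the assumed toy problem
#
#   minimize  f(x) = x1^2 + x2^2
#   subject to g(x) = 2 - x1 - x2 <= 0    (optimum: x* = (1,1), lambda* = 2)

def primal_dual_subgradient(t=0.05, tol=1e-6, max_iter=5000):
    x = [0.0, 0.0]   # primal iterate
    lam = 0.0        # dual multiplier
    for _ in range(max_iter):
        # Primal step: subgradient of the Lagrangian in x
        # (f and g are smooth here, so this is a gradient).
        x = [xi - t * (2.0 * xi - lam) for xi in x]
        # Dual step: projected subgradient ascent; g(x) is a
        # subgradient of the dual function at lam.
        g = 2.0 - x[0] - x[1]
        lam = max(0.0, lam + t * g)
        # Duality gap: project x onto the halfspace x1 + x2 >= 2 to
        # obtain a feasible primal point; the dual function here has
        # the closed form q(lam) = 2*lam - lam^2 / 2.
        shift = max(0.0, g) / 2.0
        x_feas = [x[0] + shift, x[1] + shift]
        primal_val = x_feas[0] ** 2 + x_feas[1] ** 2
        dual_val = 2.0 * lam - lam ** 2 / 2.0
        if primal_val - dual_val <= tol:   # natural stopping criterion
            break
    return x, lam, primal_val - dual_val
```

On this example the iterates approach x = (1, 1) and lam = 2, and the primal-dual gap supplies the stopping test that plain subgradient methods lack.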

References

  • S. Agmon, "The relaxation method for linear inequalities", Canadian Journal of Mathematics 6 (1954) 382–392.

  • M.S. Bazaraa and H.D. Sherali, "On the choice of step size in subgradient optimization", European Journal of Operational Research 7 (1981) 380–388.

  • M.S. Bazaraa and C.M. Shetty, Nonlinear Programming: Theory and Algorithms (John Wiley and Sons, New York, New York, 1979).

  • D.P. Bertsekas, "Combined primal-dual and penalty methods for constrained minimization", SIAM Journal on Control 13 (1975) 521–544.

  • G. Bitran and A. Hax, "On the solution of convex knapsack problems with bounded variables", Proceedings of the IX International Symposium on Mathematical Programming, Budapest (1976) 357–367.

  • J.D. Buys, "Dual algorithms for constrained optimization problems", Unpublished Ph.D. Thesis, University of Leiden (The Netherlands, 1972).

  • G. Cohen and D.L. Zhu, "Decomposition coordination methods in large scale optimization problems: The nondifferentiable case and the use of augmented Lagrangians", in: J.B. Cruz, ed., Advances in Large Scale Systems 1 (JAI Press Inc., 1984) pp. 203–266.

  • M.L. Fisher, "Lagrangian relaxation methods for combinatorial optimization", Management Science 27 (1981) 1–18.

  • M. Fukushima, "A descent algorithm for nonsmooth convex optimization", Mathematical Programming 30 (2) (1984) 163–175.

  • A.M. Geoffrion, "Generalized Benders' decomposition", Journal of Optimization Theory and Applications 10 (4) (1972) 237–260.

  • P.E. Gill, W. Murray and M.H. Wright, Practical Optimization (Academic Press, New York, New York, 1981).

  • J.L. Goffin, "Convergence results on a class of variable metric subgradient methods", in: O. Mangasarian, R. Meyer and S. Robinson, eds., Nonlinear Programming 4 (1981) pp. 283–325.

  • E.G. Gol'shtein, "A generalized gradient method for finding saddlepoints", Matekon 10 (3) (1974) 36–52.

  • M. Held and R.M. Karp, "The traveling-salesman problem and minimum spanning trees: Part II", Mathematical Programming 1 (1971) 6–26.

  • M. Held, P. Wolfe and H.P. Crowder, "Validation of subgradient optimization", Mathematical Programming 6 (1974) 62–88.

  • L.G. Khacijan, "A polynomial algorithm in linear programming", Doklady Akademiia Nauk SSSR 244 (1979) 1093–1096; translated in Soviet Mathematics Doklady 20, 191–194.

  • K. Kiwiel, "An aggregate subgradient method for nonsmooth convex minimization", Mathematical Programming 27 (1983) 320–341.

  • G.M. Korpelevich, "The extragradient method for finding saddle points and other problems", Matekon 13 (4) (1977) 35–49.

  • C. Lemarechal, J. Strodiot and A. Bihain, "On a bundle algorithm for nonsmooth optimization", Nonlinear Programming Study No. 4 (Academic Press, New York, 1981) pp. 245–282.

  • D. Maistroskii, "Gradient methods for finding saddlepoints", Matekon 13 (1977) 3–22.

  • T. Motzkin and I.J. Schoenberg, "The relaxation method for linear inequalities", Canadian Journal of Mathematics 6 (1954) 393–404.

  • B.T. Poljak, "A general method of solving extremum problems", Soviet Mathematics Doklady 8 (3) (1967) 593–597.

  • B.T. Poljak, "Minimization of unsmooth functionals", USSR Computational Mathematics and Mathematical Physics 9 (1969) 14–29.

  • R.T. Rockafellar, "A dual approach to solving nonlinear programming problems by unconstrained optimization", Mathematical Programming 5 (1973a) 354–373.

  • R.T. Rockafellar, "The multiplier method of Hestenes and Powell applied to convex programming", Journal of Optimization Theory and Applications 12 (1973b) 555–562.

  • R.T. Rockafellar, "Augmented Lagrange multiplier functions and duality in nonconvex programming", SIAM Journal on Control and Optimization 12 (1974) 268–285.

  • S. Sen and D.S. Yakowitz, "A primal-dual subgradient algorithm for time staged capacity expansion planning", SIE Working Paper Series 84-002, Department of Systems and Industrial Engineering, The University of Arizona (Tucson, Arizona, 1984).

  • H.D. Sherali and D.C. Myers, "Algorithmic strategies for using subgradient optimization with Lagrangian relaxation in solving mixed-integer programming problems", Working Paper, Department of Industrial Engineering and Operations Research, Virginia Polytechnic Institute and State University (Blacksburg, Virginia, 1984).

  • N.Z. Shor, "Generalized gradient methods of non-differentiable optimization employing space dilatation operators", in: A. Bachem, M. Grotschel and B. Korte, eds., Mathematical Programming: The State of the Art (Bonn, W. Germany, 1983) pp. 501–529.

Cite this article

Sen, S., Sherali, H.D. A class of convergent primal-dual subgradient algorithms for decomposable convex programs. Mathematical Programming 35, 279–297 (1986). https://doi.org/10.1007/BF01580881
