
Two “well-known” properties of subgradient optimization

Full Length Paper · Mathematical Programming

Abstract

The subgradient method is both a heavily employed and widely studied algorithm for non-differentiable optimization. Nevertheless, there are some basic properties of subgradient optimization that, while “well known” to specialists, seem to be rather poorly known in the larger optimization community. This note concerns two such properties, both applicable to subgradient optimization using the divergent series steplength rule. The first involves convergence of the iterative process, and the second deals with the construction of primal estimates when subgradient optimization is applied to maximize the Lagrangian dual of a linear program. The two topics are related in that convergence of the iterates is required to prove correctness of the primal construction scheme.
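
To make the setting concrete, here is a minimal sketch (not the authors' code) of the two ingredients the abstract names: subgradient maximization of the Lagrangian dual of an LP under the divergent series steplength rule (alpha_k -> 0, sum_k alpha_k = infinity), and a primal estimate built as a steplength-weighted convex combination of the Lagrangian subproblem solutions. The toy LP min{c'x : Ax = b, 0 <= x <= 1}, the random data, and the choice alpha_k = 1/k are all hypothetical illustrations, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 3, 8
    A = rng.random((m, n))
    b = A @ rng.random(n)           # b chosen so Ax = b has a solution in [0,1]^n
    c = rng.random(n)

    u = np.zeros(m)                 # dual iterate
    x_bar = np.zeros(n)             # running primal estimate
    weight = 0.0                    # sum of steplengths used so far

    for k in range(1, 20001):
        # Lagrangian subproblem: min over 0 <= x <= 1 of (c - A'u)'x,
        # which separates and solves componentwise.
        reduced_cost = c - A.T @ u
        x_k = (reduced_cost < 0).astype(float)

        # b - A x_k is a supergradient of the concave dual function at u.
        g = b - A @ x_k

        # Divergent series steplength: alpha_k -> 0, sum_k alpha_k = infinity.
        alpha = 1.0 / k
        u = u + alpha * g           # ascent step on the dual

        # Primal estimate: steplength-weighted convex combination of the x_k.
        x_bar = (weight * x_bar + alpha * x_k) / (weight + alpha)
        weight += alpha

    dual_value = u @ b + np.minimum(c - A.T @ u, 0.0).sum()
    print("dual value      :", dual_value)
    print("||A x_bar - b|| :", np.linalg.norm(A @ x_bar - b))

The note's first property concerns convergence of the dual iterates u under this steplength rule; the shrinking residual ||A x_bar - b|| in the sketch illustrates the kind of primal-recovery behavior its second property addresses, whose correctness proof relies on that convergence.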



Author information

Correspondence to Kurt M. Anstreicher.

Additional information

Dedicated to B.T. Polyak on the occasion of his 70th birthday.


Cite this article

Anstreicher, K.M., Wolsey, L.A. Two “well-known” properties of subgradient optimization. Math. Program. 120, 213–220 (2009). https://doi.org/10.1007/s10107-007-0148-y

