
An entire space polynomial-time algorithm for linear programming

Journal of Global Optimization

Abstract

We propose an entire space polynomial-time algorithm for linear programming. First, we give a class of penalty functions defined on the entire space for linear programming, by which the dual of a linear program in standard form can be converted into an unconstrained optimization problem. The relevant properties of this unconstrained problem, such as duality, boundedness of the solution, and a path-following lemma, are proved. Second, a self-concordant function on the entire space that can serve as a penalty for linear programming is constructed. For this specific function, further results are obtained. In particular, we show that, by taking the penalty parameter large enough, the optimal solution of the unconstrained problem lies in the increasing interval of the self-concordant function, which ensures the feasibility of solutions. Then, by means of this self-concordant penalty function on the entire space, a path-following algorithm on the entire space for linear programming is presented. The number of Newton steps of the algorithm is no more than \(O(nL\log (nL/\varepsilon ))\), and in the short-step variant it is no more than \(O(\sqrt{n}\log (nL/\varepsilon ))\).
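
To fix ideas, the reformulation described in the abstract can be sketched for a generic smooth convex penalty \(p:\mathbb{R}\rightarrow \mathbb{R}\) defined on the whole real line (the classical exponential penalty \(p(t)=e^{t}\) is one such choice; the specific self-concordant penalty constructed in the paper is not reproduced here). Taking the standard-form primal \(\min \{c^{\mathsf T}x : Ax=b,\ x\ge 0\}\) with dual \(\max \{b^{\mathsf T}y : A^{\mathsf T}y\le c\}\), the penalized dual is the unconstrained problem

\[
\max_{y\in \mathbb{R}^{m}}\; b^{\mathsf T}y-\frac{1}{\mu }\sum_{i=1}^{n}p\bigl(\mu (a_{i}^{\mathsf T}y-c_{i})\bigr),
\]

where \(a_{i}\) denotes the \(i\)-th column of \(A\) and \(\mu >0\) is the penalty parameter. A path-following scheme of the kind analyzed in the paper tracks the maximizers of this family as \(\mu \) increases, taking one or a few Newton steps after each update of \(\mu \); self-concordance of the penalty is what allows the total number of Newton steps to be bounded polynomially.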


Author information

Corresponding author

Correspondence to Da Gang Tian.

About this article

Cite this article

Tian, D.G. An entire space polynomial-time algorithm for linear programming. J Glob Optim 58, 109–135 (2014). https://doi.org/10.1007/s10898-013-0048-z
