Continuous Optimization
Stabilization of Mehrotra's primal–dual algorithm and its implementation

https://doi.org/10.1016/j.ejor.2003.07.024

Abstract

In this paper we apply a stabilization procedure proposed by Kovačević-Vujčić and Ašić to Mehrotra's primal–dual interior-point algorithm for linear programming. Transformations of the dual problem corresponding to the stabilization procedure are considered. The stabilization procedure and Mehrotra's algorithm are implemented in the package MATHEMATICA. A number of highly degenerate test examples are used to compare the modified Mehrotra's method with the original one and with the codes PCx, LIPSOL and MOSEK. We also provide numerical results on some examples from the Netlib test set.

Introduction

Primal–dual algorithms are designed for linear programming problems in the standard form

  minimize    c^T x
  subject to  Ax = b,  x ⩾ 0,                                   (1.1)

where c, x ∈ Rn, b ∈ Rm, A is an m×n real matrix and c^T is the transpose of the vector c. The dual problem for (1.1) is

  maximize    b^T λ
  subject to  A^T λ + s = c,  s ⩾ 0,                            (1.2)

where λ ∈ Rm and s ∈ Rn, and b^T, A^T denote the transposes of the vector b and the matrix A, respectively. It is well known that the vector x ∈ Rn is a solution of (1.1) if and only if there exist vectors s ∈ Rn and λ ∈ Rm such that the following conditions hold:

  A^T λ + s = c,                                                (1.3a)
  Ax = b,                                                       (1.3b)
  x_i s_i = 0,  i = 1,…,n,                                      (1.3c)
  (x, s) ⩾ 0.                                                   (1.3d)
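As a concrete illustration (ours, not from the paper), the following NumPy sketch verifies the conditions (1.3a)–(1.3d) on a small standard-form problem with a known optimal primal–dual pair; the data are purely illustrative.

import numpy as np

# A tiny LP in standard form (1.1): minimize c^T x subject to Ax = b, x >= 0,
# together with a known optimal primal-dual pair (x, lambda, s).
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-3.0, -5.0, 0.0, 0.0])

x   = np.array([2.0, 2.0, 0.0, 0.0])   # optimal primal solution
lam = np.array([-1.0, -2.0])           # optimal dual variables lambda
s   = c - A.T @ lam                    # dual slacks: (0, 0, 1, 2)

assert np.allclose(A.T @ lam + s, c)        # (1.3a) dual feasibility
assert np.allclose(A @ x, b)                # (1.3b) primal feasibility
assert np.allclose(x * s, 0.0)              # (1.3c) complementarity
assert np.all(x >= 0) and np.all(s >= 0)    # (1.3d) nonnegativity
print(c @ x, b @ lam)                       # equal optimal values: -16.0 -16.0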

The feasible sets of (1.1) and (1.2) will be denoted by X and Y, respectively. Further, let X^0 = {x | Ax = b, x > 0}, Y^0 = {(λ, s) | A^T λ + s = c, s > 0}. As is usual in the context of interior-point methods, we shall assume that the sets X^0 and Y^0 are nonempty. This assumption implies that both sets X* and Y* are nonempty and bounded, where X* and Y* are the sets of optimal solutions to problems (1.1) and (1.2), respectively.

All primal–dual methods generate iterates (x^t, λ^t, s^t) that satisfy the bounds (1.3d) strictly, that is, x^t > 0 and s^t > 0, and instead of the condition (1.3c) deal with the condition x^t_i s^t_i = τ, i = 1,…,n, where τ → 0. The primal–dual strictly feasible set will be denoted by F^0, i.e. F^0 = {(x, λ, s) | Ax = b, A^T λ + s = c, (x, s) > 0}. It is well known that there exists at least one primal solution x* ∈ X* and one dual solution (λ*, s*) ∈ Y* such that x* + s* > 0; such solutions are called strictly complementary solutions. Note that if (x*, λ*, s*) and (x̄, λ̄, s̄) are two strictly complementary solution pairs, then x*_i > 0 if and only if x̄_i > 0, i = 1,…,n, and similarly, s*_i > 0 if and only if s̄_i > 0, i = 1,…,n, i.e. the sets of indices of positive coordinates are the same for all strictly complementary optimal pairs. Let

  B = {j | x*_j > 0},  N = {j | s*_j > 0}.

It is easy to see that B ∩ N = ∅. The partition of the set {1,…,n} into B and N induces the partition of A into A_B and A_N, where A_B (A_N) is the submatrix of A with columns whose indices are in B (N). Similar notation will be used for partitions of x, s and c. The sets X* and Y* can now be represented as

  X* = {x ∈ Rn | A_B x_B = b, x_B ⩾ 0, x_N = 0},
  Y* = {(λ, s) ∈ Rm × Rn | s_B = c_B − A_B^T λ = 0, s_N = c_N − A_N^T λ ⩾ 0}.

According to [3], [9] the optimal face X* is degenerate if rank(A_B) < m.
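The partition into B and N and the degeneracy test rank(A_B) < m can be made concrete on a small example. The following NumPy sketch (our own construction, not one of the test problems of [3], [9]) exhibits a strictly complementary optimal pair whose optimal face is degenerate.

import numpy as np

# minimize x3 subject to x1 + x2 + x3 = 1, x1 + x2 + 2*x3 = 1, x >= 0.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
b = np.array([1.0, 1.0])
c = np.array([0.0, 0.0, 1.0])

x   = np.array([0.5, 0.5, 0.0])   # optimal primal solution
lam = np.array([0.0, 0.0])        # optimal dual variables
s   = c - A.T @ lam               # dual slacks: (0, 0, 1); note x + s > 0

B = np.where(x > 0)[0]            # B = {1, 2}  (0-based: [0, 1])
N = np.where(s > 0)[0]            # N = {3}     (0-based: [2])
A_B = A[:, B]

m = A.shape[0]
print(np.linalg.matrix_rank(A_B), m)   # rank(A_B) = 1 < m = 2:
                                       # the optimal face X* is degenerate.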

Let us mention that interior-point methods require the solution, at the tth iteration, of a system of linear equations whose coefficient matrix has the form A D_t^2 A^T. One of the most serious sources of numerical difficulties is the fact that, when the optimal face X* is degenerate, the matrices involved tend to become extremely ill-conditioned as the optimal points are approached. Several papers address properties of the related system

  (A D_t^2 A^T) u = A D_t d.
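To see how degeneracy translates into ill-conditioning, the following NumPy sketch mimics iterates approaching the degenerate optimum of the previous example. The scaling D_t^2 = diag(x^t / s^t) is the usual primal–dual choice and is our assumption here, since the excerpt does not define D_t.

import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
x_star = np.array([0.5, 0.5, 0.0])   # degenerate optimum from the sketch above
s_star = np.array([0.0, 0.0, 1.0])
in_B = x_star > 0

for tau in [1e-2, 1e-4, 1e-6, 1e-8]:
    # Mimic iterates with x_i^t s_i^t = tau, x^t -> x*, s^t -> s*.
    d2 = np.empty_like(x_star)
    d2[in_B]  = x_star[in_B] ** 2 / tau    # i in B: x_i^t = x_i*, s_i^t = tau/x_i*
    d2[~in_B] = tau / s_star[~in_B] ** 2   # i in N: s_i^t = s_i*, x_i^t = tau/s_i*
    M = A @ np.diag(d2) @ A.T              # coefficient matrix A D_t^2 A^T
    print(f"tau = {tau:.0e}   cond(A D_t^2 A^T) = {np.linalg.cond(M):.1e}")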

Its solution u = (A D_t^2 A^T)^{-1} A D_t d has the interesting property that it is uniformly bounded in spite of the possible ill-conditioning of the matrix A D_t^2 A^T. The first result of that type was obtained by Dikin [6], who showed that

  ∥u∥ ⩽ max_{J∈Ω(A)} ∥(A_J^T)^{-1} d_J∥,

where Ω(A) is the set of column index sets J of all nonsingular m×m submatrices of A and A_J is the submatrix of A with columns whose indices are in J. This result has been rediscovered by Stewart [11] and by Ben-Tal and Teboulle [4]. Stewart moreover proves that ∥(A D_t^2 A^T)^{-1} A D_t∥ is uniformly bounded with respect to t and gives an explicit bound in terms of the singular values of A. Nevertheless, the fact that the solution is bounded does not imply that it can be computed in a stable way. The computational techniques most commonly used are Cholesky decomposition and preconditioned conjugate gradients. In the presence of degeneracy both approaches have numerical problems. In the case of conjugate gradients the problem is how to choose a good preconditioner. Cholesky decomposition is usually stabilized by the heuristic rule that pivots smaller than a threshold value ε should be skipped. Under suitable assumptions Wright [16] shows that this heuristic approach has theoretical grounds, i.e. the distance between the exact and the approximate solution is bounded by ε. Nevertheless, the papers [3], [9] show that in the case of degeneracy interior-point methods can be seriously affected by ill-conditioning, even in their most robust implementations, such as PCx [5], [10], [15] and HOPDM [7]. Also, in [3], [9] a stabilization technique based on Gaussian elimination is presented which reformulates the original problem in an equivalent way such that ill-conditioning is avoided. Our goal is to develop an implementation of this stabilization algorithm. The developed software solves a class of ill-conditioned test examples from [3], [9] which could not be solved by the codes PCx and HOPDM.
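The pivot-skipping heuristic mentioned above can be rendered, in one common form, as the following NumPy sketch; it is our own illustration and not the exact rule used in PCx, HOPDM or [16].

import numpy as np

def cholesky_skip_small_pivots(M, eps=1e-12, big=1e64):
    # Right-looking Cholesky of a symmetric positive semidefinite matrix M.
    # When a pivot drops below eps it is "skipped": here it is replaced by a
    # huge value, so the corresponding component of the computed solution is
    # driven to (essentially) zero.
    L = np.array(M, dtype=float)
    n = L.shape[0]
    for k in range(n):
        if L[k, k] < eps:
            L[k, k] = big          # neutralize the tiny pivot
            L[k + 1:, k] = 0.0     # and do not propagate it
        L[k, k] = np.sqrt(L[k, k])
        L[k + 1:, k] /= L[k, k]
        L[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])
    return np.tril(L)

# A nearly singular normal-equations matrix, as arises near a degenerate optimum.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
d2 = np.array([1e8, 1e8, 1e-8])                       # entries of D_t^2
M = A @ np.diag(d2) @ A.T
rhs = A @ (np.sqrt(d2) * np.array([1.0, 2.0, 3.0]))   # A D_t d for some d

L = cholesky_skip_small_pivots(M, eps=1e-4 * np.trace(M))
u = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
print(np.linalg.norm(M @ u - rhs) / np.linalg.norm(rhs))   # small relative residual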

The paper is organized as follows. In Section 2 we restate Mehrotra's primal–dual interior-point method [10] and the corresponding algorithm for the stabilization of Mehrotra's method, established in [3], [9]. We also investigate transformations of the dual problem corresponding to the stabilization procedure.

In Section 3 we illustrate the effects of the stabilization procedure on a set of highly degenerate test problems. We also report results of the developed software on some of the Netlib test problems.


Stabilization procedure for Mehrotra's algorithm

In the existing literature there are several variants of Mehrotra's algorithm, all quite similar. In this section we briefly describe a variant of Mehrotra's algorithm similar to the one used in the code PCx (see for example [5], [10], [15]).

Algorithm 2.1

  • Step 1.

    Generate the starting iterate (xt,λt,st),t=0.

  • Step 2.

    Calculate the residuals

      r_b = A x^t − b,   r_c = A^T λ^t + s^t − c

    and check the following stopping criterion [2]:

      |c^T x^t − b^T λ^t| / (1 + |c^T x^t|) ⩽ ϵ.

    If the stopping criterion is satisfied, return the output x^t; otherwise, go to Step 3 (a sketch of the Step 2 computations is given after the algorithm outline).

  • Step 3.
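The available excerpt breaks off before the body of Step 3. For Step 2 above, a minimal NumPy sketch of the residual computation and the stopping test is given here; the function and variable names are ours, not the paper's.

import numpy as np

def step2_residuals_and_test(A, b, c, x, lam, s, eps=1e-8):
    # Step 2 of Algorithm 2.1 (sketch): residuals r_b, r_c and the relative
    # duality-gap stopping criterion from [2].
    r_b = A @ x - b                    # primal residual
    r_c = A.T @ lam + s - c            # dual residual
    stop = abs(c @ x - b @ lam) / (1.0 + abs(c @ x)) <= eps
    return r_b, r_c, stop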

Numerical experiences

In this section we report computational experiments with our implementation of Mehrotra's primal–dual algorithm. Two variants of the algorithm have been implemented in the package MATHEMATICA [13], [14]. The first version is a direct implementation of Algorithm 2.1. The modified version combines Algorithm 2.1 with Procedure 2.1 in order to improve numerical performance if problems with numerical stability are detected. The code can be found at //www.pmf.ni.ac.yu/people/pecko/linprog/linprog.htm

Conclusion

We describe Mehrotra's primal–dual interior-point method. The algorithm is mainly taken from [15]. We also describe a stabilization technique which is based on a Gaussian elimination procedure and an equivalent reformulation of the original linear programming problem, proposed in [3], [9]. Mehrotra's algorithm as well as the stabilization procedure are implemented in the programming package MATHEMATICA. We show that the modification of Mehrotra's algorithm which is based on the stabilization

Acknowledgements

The authors would like to thank the anonymous referees for useful suggestions.


This research was carried out within the project “Graph Theory and Mathematical Programming with Applications in Chemistry and Transportation”, financed by the Serbian Ministry of Science, Technology and Development. The research was completed while the third author was visiting the Technische Universität München under the DAAD program and the Institute for Mathematics and its Applications, University of Minnesota.
