A new version of the Improved Primal Simplex for degenerate linear programs

https://doi.org/10.1016/j.cor.2009.03.020

Abstract

The Improved Primal Simplex (IPS) algorithm [Elhallaoui I, Metrane A, Desaulniers G, Soumis F. An Improved Primal Simplex algorithm for degenerate linear programs. SIAM Journal on Optimization, submitted for publication] is a dynamic constraint reduction method particularly effective on degenerate linear programs. On some problems, it reduces CPU time by more than a factor of three compared with CPLEX, a commercial implementation of the simplex method. We present a number of further improvements and effective parameter choices for IPS. On certain types of degenerate problems, our improvements yield CPU times lower than those of CPLEX by a factor of 12.

Introduction

We consider the solution of linear programs in standard form
\[
\text{(LP)} \qquad \min_{x \in \mathbb{R}^n} \; c^T x \quad \text{subject to} \quad Ax = b, \quad x \ge 0,
\]
where $c \in \mathbb{R}^n$ is the cost vector, $A$ is the $m \times n$ constraint matrix and $b \in \mathbb{R}^m$ is the right-hand side. We are particularly interested in the so-called degenerate problems, on which the simplex algorithm [3] is likely to encounter degenerate pivots and possibly cycle.
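
As a small illustration of degeneracy (our own example, not taken from the article), consider
\[
\min_{x \in \mathbb{R}^3} \; -x_3 \quad \text{subject to} \quad x_1 + x_2 = 1, \quad x_2 + x_3 = 0, \quad x \ge 0.
\]
With the basis $\{x_1, x_2\}$, the basic solution is $x = (1, 0, 0)$ and the basic variable $x_2$ is zero, so the basis is degenerate. The reduced cost of $x_3$ is $-1$, so the simplex method selects it as the entering variable, but the ratio test on $x_2$ gives a step length of zero: the basis changes while the point and the objective value do not, which is a degenerate pivot.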

In the presence of degeneracy, a common strategy to ensure convergence consists in imposing specific rules on the choice of a new basis, e.g., [1], [10], [15]. A different approach, proposed in [2] and developed in [16], [14], perturbs the right-hand side associated with the degenerate variables. Although both strategies alleviate cycling, they do not improve the performance of the simplex algorithm on degenerate problems.

On the other hand, Pan uses a reduced basis, that is, one with a smaller number of variables and constraints [13]. His method starts from a feasible initial guess and identifies its p nonzero components. The m-p constraints corresponding to the zero variables are removed, leaving a p×p nondegenerate basis. To preserve feasibility, variables that cannot become basic without violating one of the m-p eliminated constraints must be temporarily removed from the problem. Such variables are said to be incompatible, a definition that will be generalized in Section 2.1. The resulting reduced problem is solved and the reduced costs are computed by means of its dual variables. By convention, the dual variables of (LP) corresponding to the m-p eliminated constraints are set to zero. At the next iteration, if an incompatible variable is to become basic, one of the eliminated constraints must be added back to the reduced problem. Compared with his own implementation of the primal simplex, Pan reports gains by a factor of 4-5.
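
The compatibility test admits a simple algebraic characterization, sketched below. This is our own illustration, under the assumption that a nonbasic column is compatible exactly when its simplex tableau column has zero entries in the rows of the zero-valued (degenerate) basic variables, so that entering it yields a nondegenerate pivot; the function name and interface are ours, not the authors'.

```python
import numpy as np

def compatible_columns(A, b, basis, tol=1e-10):
    # Sketch of a compatibility test (not the authors' implementation).
    # A nonbasic column j is flagged compatible when its tableau column
    # B^{-1} A_j vanishes in every row associated with a degenerate
    # (zero-valued) basic variable, so entering x_j cannot be blocked
    # by those variables.
    m, n = A.shape
    B = A[:, basis]                       # current m x m basis matrix
    x_B = np.linalg.solve(B, b)           # values of the basic variables
    tableau = np.linalg.solve(B, A)       # B^{-1} A, one column per variable
    degenerate_rows = np.flatnonzero(np.abs(x_B) <= tol)
    compatible = []
    for j in range(n):
        if j not in basis and np.all(np.abs(tableau[degenerate_rows, j]) <= tol):
            compatible.append(j)
    return compatible
```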

Omer [12] improves on the method of [13] by allowing further reductions while the reduced problem is being solved. This dynamic reduction method appears promising in terms of the number of iterations. However, its implementation does not lend itself to comparisons with commercial implementations of the simplex algorithm. It is interesting to note that most problems tested in [12], [13] have vanishing optimal dual variables, a characteristic that gives an advantage to their algorithms.

In the same vein, [5] developed a dynamic constraint aggregation method, DCA, for set partitioning problems. The method groups constraints that become identical with respect to the nonzero basic variables, a stage referred to as aggregation. In the reduction phase, a single row per constraint group remains in the reduced problem. Once the reduced problem is solved, dual variables are recovered by disaggregation, which amounts to solving a shortest-path problem. These dual variables are then used to compute the reduced costs of the incompatible variables. As in [12], DCA makes provision for reaggregation, which further reduces the reduced problem if the latter becomes too degenerate.
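
To make the aggregation step concrete, here is a minimal sketch (ours, not the authors' implementation), assuming 0/1 set partitioning coefficients so that rows can be compared exactly; the function name is hypothetical.

```python
import numpy as np
from collections import defaultdict

def aggregation_groups(A, x, tol=1e-10):
    # Group rows of A that are identical when restricted to the columns of
    # the nonzero variables of the current solution x; one representative
    # row per group would remain in the reduced problem.
    nonzero_cols = np.flatnonzero(np.abs(x) > tol)
    groups = defaultdict(list)
    for i, row in enumerate(A[:, nonzero_cols].astype(int)):
        groups[tuple(row)].append(i)
    return list(groups.values())
```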

The multi-phase dynamic constraint aggregation, MDCA [6], further improves on the DCA. The MDCA examines incompatible variables about to be added to the reduced problem in order of their number of incompatibilities. First, variables with a single incompatibility are considered. If none of them is selected to become basic, attention turns to variables with two incompatibilities. If again none of those is selected, all incompatible variables are added to the reduced problem. On a set of large bus driver scheduling problems in a column-generation framework, MDCA reduces the solution time by a factor of more than five over the classical column-generation method.
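
A rough sketch of this multi-phase control follows; the interface and the selection test (here a generic is_attractive predicate, e.g., a negative reduced cost) are our assumptions, not the authors' code.

```python
def candidates_by_phase(vars_by_incompatibility, is_attractive):
    # Examine variables with one incompatibility first, then two; only if
    # neither phase yields a candidate are all incompatible variables
    # considered. vars_by_incompatibility maps a number of incompatibilities
    # to the list of variable indices with that count.
    for k in (1, 2):
        phase = [j for j in vars_by_incompatibility.get(k, []) if is_attractive(j)]
        if phase:
            return phase
    return [j for js in vars_by_incompatibility.values()
            for j in js if is_attractive(j)]
```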

The Improved Primal Simplex (IPS) algorithm [8] combines ideas from [13] and DCA. Instead of disaggregating the dual variables to compute the reduced costs, the incompatible variables that are to become basic are obtained from the solution of a complementary problem. The authors of [8] show that when the reduced problem is not degenerate, the objective function decreases at each iteration. In practice, reducing the problem until all degeneracy disappears is computationally costly. Partial reduction, however, yields good results.

In this paper, we provide a number of strategies and parameter values that speed up IPS. We pay attention to the number of variables in the reduced problem and hence to the number of variables selected by the complementary problem. We also discuss how the complementary problem should be solved. In addition, we tune the number of further reductions of the reduced problem and propose a procedure that adds only independent rows to it. Compared with the standard simplex method, we obtain a reduction in CPU time by a factor of 12 on some instances of fleet assignment (FA) problems.

The rest of this paper is organized as follows. Section 2 summarizes the results of [13], [8]. In Section 3, we compare different methods to find feasible solutions to the complementary problem and to solve it. In Section 4, we present refinements of IPS. Numerical experiments on realistic instances are reported in Section 5. Finally, we discuss our results and conclude in Section 6.

All tests in this paper are performed on a 2.8 GHz PC running Linux and all comparisons are made with CPLEX 9.1.3.

If $x \in \mathbb{R}^n$ and $I \subseteq \{1, \ldots, n\}$ is an index set, we denote by $x_I$ the subvector of $x$ indexed by $I$. Similarly, if $A$ is an $m \times n$ matrix, we denote by $A_I$ the $m \times |I|$ matrix whose columns are indexed by $I$. If $J = \{1, \ldots, n\} \setminus I$, we allow ourselves to write $x = (x_I, x_J)$ even though the indices in $I$ and $J$ may not appear in order. In the same way, we use superscripts to denote the subset of rows associated with the variables in the index set.

The vector of all ones, whose dimension is dictated by the context, is denoted $e$, and $A^{-T}$ denotes the inverse of the transpose of $A$.
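
As a small illustration of this notation (our example, not from the text): with $n = 4$, $I = \{1, 3\}$ and $J = \{2, 4\}$,
\[
x_I = \begin{pmatrix} x_1 \\ x_3 \end{pmatrix}, \qquad
A_I = \begin{pmatrix} a_{11} & a_{13} \\ \vdots & \vdots \\ a_{m1} & a_{m3} \end{pmatrix}, \qquad
x = (x_I, x_J).
\]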

Section snippets

Background

In this section, we summarize the key properties of dynamic constraint aggregation developed in [13], [8]. We start with [13] and the construction of the reduced problem. Next, we review the disaggregation of dual variables of [8] and recall the complete algorithm.

Solving the complementary problem

In [8], the authors find feasible solutions to (SD) with the primal simplex. The simplified dual (SD) must be solved at each iteration of the algorithm, which can be too costly. Moreover, all equality constraints of (SD) have a zero right-hand side, except for one. The primal simplex algorithm applied to (SD) is thus likely to perform a large number of degenerate pivots. In this section, we compare different solution methods for finding feasible solutions for (SD) and also for solving (SD) to optimality.

New strategies and parameter values to speed up IPS

The improvements to IPS that we now describe comprise three parts: solving (RP), reducing (RP) and solving (SD). We call a solution of (RP) followed by a solution of (SD) a major iteration. In our exposition below, we allow interruptions in the solution of (RP) to further reduce it. The resulting improved IPS is named IPS-2.
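
The following outline paraphrases this structure; the interface is our assumption and not the authors' code.

```python
def ips2_major_iterations(solve_rp, reduce_rp, solve_sd, add_to_rp):
    # One major iteration: solve the reduced problem (RP), possibly
    # interrupting it so that (RP) can be further reduced, then solve the
    # complementary problem (SD) to select incompatible variables to add.
    while True:
        while solve_rp():          # assumed to return True when it stops
            reduce_rp()            # early so that (RP) can be reduced further
        entering = solve_sd()      # assumed to return [] at optimality
        if not entering:
            return
        add_to_rp(entering)
```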

The solution of any linear program via IPS, CPLEX or IPS-2 starts with an initial basis computed by the Phase-1 algorithm of CPLEX. The CPU times reported in the

Results

In this section, we report numerical experiments with all strategies described above and parameter choices calibrated on our vehicle and crew scheduling (VCS) instances. We run tests on both the VCS and the fleet assignment (FA) instances; their characteristics are given below.

Conclusion

In this article, we present improvements to the IPS algorithm [8]. More precisely, we choose a suitable algorithm to solve the complementary problem and we propose a strategy to obtain a predetermined number of variables to enter the reduced problem. Furthermore, we propose a method to add only independent rows to (RP) and we perform fewer reductions of (RP). The overall effect of these improvements is a better balance between the times spent on the three main subprocesses of IPS (solving (RP), reducing (RP) and solving (SD)).

References (15)

  • P.-Q. Pan

    A basis deficiency-allowing variation of the simplex method for linear programming

    Computers and Mathematics with Applications

    (1998)
  • R.G. Bland

    New finite pivoting rules for the simplex method

    Mathematics of Operations Research

    (1977)
  • A. Charnes

    Optimality and degeneracy in linear programming

    Econometrica

    (1952)
  • G.B. Dantzig

    Linear programming and extensions

    (1963)
  • T.A. Davis et al.

    An unsymmetric-pattern multifrontal method for sparse LU factorization

    SIAM Journal on Matrix Analysis and Applications

    (1997)
  • I. Elhallaoui et al.

    Dynamic aggregation of set partitioning constraints in column generation

    Operations Research

    (2005)
  • Elhallaoui I, Metrane A, Soumis F, Desaulniers G. Multi-phase dynamic constraint aggregation for set partitioning type...
