An algorithm for the multiparametric 0–1-integer linear programming problem relative to the constraint matrix

https://doi.org/10.1016/S0167-6377(00)00034-1

Abstract

The multiparametric 0–1-Integer Linear Programming (0–1-ILP) problem relative to the constraint matrix is a family of 0–1-ILP problems in which the problems are related by having identical objective and right-hand-side vectors. In this paper we present an algorithm to perform a complete multiparametric analysis.

Introduction

The need for parametric analysis in Mathematical Programming arises from uncertainty in the data. The most important surveys of parametric methods in integer linear programming (ILP) have been published by Geoffrion and Nauss [4], Holm and Klein [5] and Jenkins [8]. Most work has been done on changes in the right-hand side or in the objective vector, with a scalar parameter guiding the perturbation of the data. Changes in elements of the constraint matrix have been considered by Jenkins [7], with a scalar parameter guiding the perturbation in such a manner that, for the maximization case, the feasible region increases continuously. The multiparametric ILP problem relative to the right-hand-side vector has been considered by Crema [2], and some of the ideas presented here may be considered as generalizations of those presented in that work. For multiple changes in the constraint matrix, with no scalar parameter guiding the perturbation, no deep results are known up to now [1], [8].

Let $L$ and $U$ be matrices with $L\in\mathbb{Z}^{m\times n}$, $U\in\mathbb{Z}^{m\times n}$ and $L_{ij}\leq U_{ij}$ for all $(i,j)\in I\times J=\{1,\ldots,m\}\times\{1,\ldots,n\}$. Let $H=\{A\in\mathbb{Z}^{m\times n}: L_{ij}\leq A_{ij}\leq U_{ij}\ \forall (i,j)\in I\times J\}$ and let $\Omega=\{(i,j)\in I\times J: L_{ij}<U_{ij}\}$. In this paper we assume $\Omega\neq\emptyset$. Let $S\subseteq H$ be a set defined by linear constraints in the $A_{ij}$, $(i,j)\in I\times J$.
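To make the notation concrete, the following small sketch (in Python, with 2×2 bound matrices invented for illustration and not taken from the paper) builds the box $H$ as a membership test and collects $\Omega$, the set of positions that are genuinely free to vary:

# Illustrative data (not from the paper): m = 2 rows, n = 2 columns.
L = [[1, 0],
     [0, 1]]
U = [[3, 0],
     [2, 1]]
m, n = len(L), len(L[0])

def in_H(A):
    """True if L_ij <= A_ij <= U_ij at every position (A assumed integer)."""
    return all(L[i][j] <= A[i][j] <= U[i][j]
               for i in range(m) for j in range(n))

# Omega: the positions whose entry is actually parametric (L_ij < U_ij).
Omega = [(i, j) for i in range(m) for j in range(n) if L[i][j] < U[i][j]]

print(Omega)                         # [(0, 0), (1, 0)]
print(in_H([[2, 0], [1, 1]]))        # True

Here $S$ would be carved out of $H$ by whatever additional linear constraints on the $A_{ij}$ the modeller imposes; the sketch only covers the box itself.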

The multiparametric 0–1-ILP problem relative to $S$ is a family of 0–1-ILP problems in which the problems are related by having identical objective and right-hand-side vectors. A member of the family is defined as

$$(P(A))\qquad \max\ c^{t}x\quad \text{s.t.}\quad Ax\leq b,\ x\in\{0,1\}^{n},$$

where $c\in\mathbb{R}^{n}$, $b\in\mathbb{Z}^{m}$ and $A\in S$.
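Because $x\in\{0,1\}^{n}$, a single member $P(A)$ can be solved for tiny instances by plain enumeration. The sketch below (Python, standard library only; the data $c$, $b$, $A$ are invented for illustration) is meant only to make the definition of $P(A)$ concrete; the computational work reported later relies on a branch-and-bound ILP solver, not on enumeration:

from itertools import product

# Illustrative data (not from the paper).
c = [5.0, 4.0, 3.0]        # objective vector, c in R^n
b = [6, 5]                 # right-hand side, b in Z^m
A = [[2, 3, 1],            # one fixed matrix A in S
     [4, 1, 2]]

def solve_P(A, b, c):
    """Brute-force  max c^t x  s.t.  A x <= b,  x in {0,1}^n."""
    n, m = len(c), len(b)
    best_x, best_val = None, None
    for x in product((0, 1), repeat=n):
        if all(sum(A[i][j] * x[j] for j in range(n)) <= b[i] for i in range(m)):
            val = sum(c[j] * x[j] for j in range(n))
            if best_val is None or val > best_val:
                best_x, best_val = x, val
    return best_x, best_val          # (None, None) signals F(P(A)) = empty

print(solve_P(A, b, c))              # ((1, 1, 0), 9.0)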

For the sake of standardization, and since every constraint can be put in the form $\leq$, the members of the family are written in maximization form with all constraints of the form $\leq$.

We use the following standard notation: if $T$ is an optimization problem then $F(T)$ denotes its set of feasible solutions; if $D$ is a matrix then $D_{i*}$ denotes its $i$th row vector.

Constraints in the form $\geq$ can be treated as follows: consider the constraint $a^{t}x\geq a_{0}$ with $L'_{j}\leq a_{j}\leq U'_{j}$. We then use $A_{i_{1}*}x\leq b_{i_{1}}=-a_{0}$ with $-U'_{j}=L_{i_{1}j}\leq A_{i_{1}j}\leq U_{i_{1}j}=-L'_{j}$.

Equality constraints can be treated as follows: consider the constraint $a^{t}x=a_{0}$ with $L'_{j}\leq a_{j}\leq U'_{j}$. In this case we use two constraints,
$$A_{i_{1}*}x\leq b_{i_{1}}=a_{0}\quad\text{where } L'_{j}=L_{i_{1}j}\leq A_{i_{1}j}\leq U_{i_{1}j}=U'_{j}$$
and
$$a^{t}x\geq a_{0}\quad\text{where } L'_{j}\leq a_{j}\leq U'_{j},$$
and now the second constraint can be put in the form $A_{i_{2}*}x\leq b_{i_{2}}=-a_{0}$ as explained above. Note that in this case the parameters of constraints $i_{1}$ and $i_{2}$ are not independent and their relations ($A_{i_{2}j}=-A_{i_{1}j}$ for all $j$) must be included in the definition of $S$.
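A small worked version of these two conversions (Python; the constraint data and function names are invented for the example) may help. ge_to_le builds the row $A_{i_{1}*}$, its right-hand side and its bounds exactly as described above, and eq_to_le returns both rows of the equality case, leaving the coupling $A_{i_{2}j}=-A_{i_{1}j}$ to be added to the constraints defining $S$:

def ge_to_le(a, a0, Lp, Up):
    """Turn  a^t x >= a0, with bounds Lp[j] <= a[j] <= Up[j], into one row of the
    standard <= system: returns (row, rhs, lower bounds, upper bounds)."""
    row = [-aj for aj in a]          # nominal entries A_{i1 j} = -a_j
    rhs = -a0                        # b_{i1} = -a0
    lo  = [-u for u in Up]           # L_{i1 j} = -U'_j
    hi  = [-l for l in Lp]           # U_{i1 j} = -L'_j
    return row, rhs, lo, hi

def eq_to_le(a, a0, Lp, Up):
    """Turn  a^t x = a0  into two <= rows: the original row (bounds kept) and the
    negated row from ge_to_le.  The relation A_{i2 j} = -A_{i1 j} must still be
    added to the linear constraints defining S."""
    first  = (list(a), a0, list(Lp), list(Up))
    second = ge_to_le(a, a0, Lp, Up)
    return first, second

# Example:  x1 + 2*x2 >= 3  with  0 <= a_j <= 2.
print(ge_to_le([1, 2], 3, [0, 0], [2, 2]))
# ([-1, -2], -3, [-2, -2], [0, 0])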

We say that a multiparametric analysis is complete after finding an optimal solution for $P(A)$ (if it exists) for all $A\in S$. In this paper we present an algorithm to perform a complete multiparametric analysis. Our algorithm works by choosing an appropriate finite sequence of non-parametric ILP problems in such a manner that the solutions of the problems in the sequence provide us with a complete multiparametric solution. This kind of approach was introduced by Jenkins [6], [7], [8] for the single-parameter case.

In Section 2 we present the theoretical results and the algorithm. Computational experience is presented in Section 3.

Section snippets

Theoretical results and the algorithm

Let $Q(1)$ be a non-linear integer problem in $(x,A)$ defined as
$$(Q(1))\qquad \max\ c^{t}x\quad \text{s.t.}\quad Ax\leq b,\ x\in\{0,1\}^{n},\ A\in S.$$

Observe that for $(i,j)\notin\Omega$ the entry $A_{ij}$ is fixed (since $L_{ij}=U_{ij}$), so besides $x$ the genuine decision variables of $Q(1)$ are the $A_{ij}$ with $(i,j)\in\Omega$.

Remark 1

(i) By construction of $Q(1)$, if $F(Q(1))=\emptyset$ then $F(P(A))=\emptyset$ for all $A\in S$. (ii) Since $F(Q(1))$ is a finite set, if $F(Q(1))\neq\emptyset$ then there exists an optimal solution for $Q(1)$.
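Since each $A_{ij}$ is an integer confined to $[L_{ij},U_{ij}]$ and $x$ is binary, $F(Q(1))$ is finite, which is exactly what Remark 1(ii) exploits. For very small instances $Q(1)$ can therefore itself be solved by enumeration; the sketch below (Python, data invented for illustration, and with $S$ taken to be all of $H$, i.e. no extra linear constraints on the $A_{ij}$) is only meant to make the definition of $Q(1)$ concrete, not to suggest how the paper actually solves it:

from itertools import product

# Illustrative data (not from the paper); S is taken to be all of H here.
c = [3.0, 2.0]
b = [4]
L = [[1, 0]]
U = [[3, 2]]

def solve_Q1(L, U, b, c):
    """Brute-force  max c^t x  over x in {0,1}^n and integer A with
    L <= A <= U (entrywise) and A x <= b."""
    m, n = len(b), len(c)
    cells = [(i, j) for i in range(m) for j in range(n)]
    ranges = [range(L[i][j], U[i][j] + 1) for (i, j) in cells]
    best = None                      # (value, x, A)
    for x in product((0, 1), repeat=n):
        for vals in product(*ranges):
            A = [[0] * n for _ in range(m)]
            for (i, j), v in zip(cells, vals):
                A[i][j] = v
            if all(sum(A[i][j] * x[j] for j in range(n)) <= b[i] for i in range(m)):
                val = sum(c[j] * x[j] for j in range(n))
                if best is None or val > best[0]:
                    best = (val, x, A)
    return best

print(solve_Q1(L, U, b, c))          # (5.0, (1, 1), [[1, 0]])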

Lemma 1

Let us suppose that $F(Q(1))\neq\emptyset$ and let $(x(1),A(1))$ be an optimal solution for $Q(1)$. Let $W(1)=\{A\in S: Ax(1)\leq b\}$. If $A\in W(1)$ then $x(1)$ is an optimal solution for $P(A)$.

Proof

If $A\in W(1)$ then …
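Operationally, Lemma 1 turns the single solution $x(1)$ into a certificate for a whole region of the parameter set: checking whether a given $A\in S$ lies in $W(1)$ is just a feasibility test of $x(1)$ at $A$. A minimal sketch (Python; the function name is ours, and the numerical data continue the invented $Q(1)$ example above):

def covered_by_x1(x1, A, b):
    """True if A x(1) <= b, i.e. A lies in W(1); for such A (assuming A is in S)
    Lemma 1 says x(1) is optimal for P(A)."""
    m, n = len(A), len(x1)
    return all(sum(A[i][j] * x1[j] for j in range(n)) <= b[i] for i in range(m))

# x(1) = (1, 1) and b = [4] from the Q(1) sketch above.
print(covered_by_x1((1, 1), [[2, 1]], [4]))   # True:  2 + 1 <= 4, x(1) settles P(A)
print(covered_by_x1((1, 1), [[3, 2]], [4]))   # False: 3 + 2 > 4, A is not in W(1)

The matrices in $S\setminus W(1)$ are the ones the finite sequence of non-parametric problems mentioned in the introduction still has to deal with.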

Computational experience

Our algorithm, which may be implemented using any software capable of solving ILP problems, was implemented in XL-FORTRAN using the OSL package of IBM [9], which uses a branch-and-bound algorithm based on linear relaxations to solve ILP problems. The experiments were performed on a RISC/6000 multiuser environment at the Computer Science Department Laboratory (UCV). The problem considered was the 0–1 multiconstraint knapsack (0–1-MK) problem [3]. Our experimental results are preliminary since more problems should be

Acknowledgments

The financial assistance of CDCH-UCV (project 03.13.3602.95), which made possible the research upon which this paper is based, is gratefully acknowledged. We also thank the referees for their helpful comments.

References (9)

  • A. Crema, A contraction algorithm for the multiparametric integer linear programming problem, European J. Oper. Res. (1997).
  • B. Bank et al., Parametric integer optimization, Math. Res. (1988).
  • B. Gavish et al., Efficient algorithms for solving multiconstraint zero-one knapsack problems to optimality, Math. Programming (1985).
  • A.M. Geoffrion et al., Parametric and postoptimality analysis in integer linear programming, Management Sci. (1977).
