Stochastics and Statistics
IIS branch-and-cut for joint chance-constrained stochastic programs and application to optimal vaccine allocation

https://doi.org/10.1016/j.ejor.2010.04.019

Abstract

We present a new method for solving stochastic programs with joint chance constraints with random technology matrices and discretely distributed random data. The problem can be reformulated as a large-scale mixed 0–1 integer program. We derive a new class of optimality cuts called IIS cuts and apply them to our problem. The cuts are based on irreducibly infeasible subsystems (IIS) of an LP defined by requiring that all scenarios be satisfied. We propose a method for improving the upper bound of the problem when no cut can be found. We derive and implement a branch-and-cut algorithm based on IIS cuts, and refer to this algorithm as the IIS branch-and-cut algorithm. We report on computational results with several test instances from optimal vaccine allocation. The computational results are promising as the IIS branch-and-cut algorithm gives better results than a state-of-the-art commercial solver on one class of problems.

Introduction

In stochastic programming, instead of assuming that all parameter values are deterministically known, a subset of the parameters of a mathematical program are given probability distributions. This paper concerns stochastic programs with joint chance constraints, which can be formulated as follows:

SP: Min cx
    s.t. Ax ≥ b,
         P{T(ω̃)x ≥ r(ω̃)} ≥ α,
         x ≥ 0.    (1)

In formulation (1), x ∈ R^{n1} is the decision variable vector, c ∈ R^{n1} is the cost parameter vector, A ∈ R^{m1×n1} is the deterministic constraint matrix, and b ∈ R^{m1} is the deterministic right-hand side vector. Uncertainty appears through the multi-dimensional random variable ω̃, which gives rise to the random technology matrix T(ω̃) ∈ R^{m2×n1} and the random right-hand side vector r(ω̃) ∈ R^{m2}. Individual outcomes (scenarios) of the random variable are represented as realizations ω ∈ Ω of the sample space. The aim of such a formulation is to find a minimum-cost strategy while allowing a subset of the constraints to be violated with an acceptably small probability (at most 1 − α).

Chance constraints are best suited to optimization problems in which satisfying a certain set of constraints is desirable but cannot, or need not, be done almost surely. For example, it may be too expensive to satisfy the constraints almost surely. Thus chance constraints are used to find optimal solutions that achieve an acceptable level of reliability. Applications include telecommunications, where companies need to guarantee a given quality-of-service to their customers [13]; air-quality management, with the requirement that pollution not exceed prescribed limits too often [2]; and inventory control, where the upper and lower bounds on stock may be violated only with low probability [15].

In general, stochastic programs with joint chance constraints are hard to solve. The main reason for this is that given a general distribution of the parameter values, the feasible space of the problem is nonconvex [7]. Also, in the case where the random parameters have continuous distributions, evaluating the feasibility of a single candidate solution can require an extremely hard integration [27]. Most research into solution methods for SP (1) has focused either on identifying probability distributions for the parameters with the property that the feasible space is convex, or on methods for solving the problem when the parameters have discrete distributions.

In the case when all the randomness appears in the right-hand side of the formulation, results on distributions that allow the chance constraints to be formulated as convex programs are given in [7], [25]. In the case of discrete probability distributions, most solution methods use integer programming (IP) or other discrete programming techniques. When only the right-hand side is random, solution methods based on a nonconvex reformulation using p-efficient points are given in [13], [12], [29]. IP formulations and solution approaches for the discretely distributed case are addressed in [9], [19]. Another branch of research has been methods based on sampling approximations [18], [21], [22]. These methods use the sample average approximation method [30] to obtain an approximation by replacing the actual probability distribution by an empirical distribution corresponding to a random sample. A different sampling approach, proposed by [8], is to obtain the approximation of a chance-constrained optimization problem by sampling a finite number of its constraints.
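The sampling idea behind these approximations can be made concrete: given a finite sample of scenarios, the empirical reliability of a candidate solution x is simply the fraction of sampled scenarios in which all joint rows hold. The sketch below is illustrative only; the scenario data are hypothetical, not drawn from any instance in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_reliability(x, T_samples, r_samples):
    # Fraction of sampled scenarios (T_i, r_i) for which T_i x >= r_i
    # holds in every row, i.e. the empirical analogue of P{T(w)x >= r(w)}.
    hits = [np.all(T @ x >= r) for T, r in zip(T_samples, r_samples)]
    return float(np.mean(hits))

# Hypothetical data: 1000 scenarios, 2 joint constraint rows, 3 variables.
N = 1000
T_samples = rng.uniform(0.5, 1.5, size=(N, 2, 3))
r_samples = rng.uniform(1.0, 2.0, size=(N, 2))
x = np.array([1.0, 1.0, 1.0])
print(empirical_reliability(x, T_samples, r_samples))
```

In a sample average approximation, the chance constraint P{T(ω̃)x ≥ r(ω̃)} ≥ α is replaced by the requirement that this empirical reliability be at least α.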

When the technology matrix T(ω) is stochastic, problem SP is significantly harder to solve than for the case in which all the uncertainty appears in the right-hand side. With the matrix T(ω) allowed to be continuously distributed, Prekopa [26] presents a few parameter distributions which make the problem convex. In the case where the parameters are joint-normally distributed, An and Eheart [2] present a method for finding upper and lower bounds on the optimal objective function of the problem using the extreme cases of correlation between the random variables. However, there are applications for which these types of distribution assumptions are too stringent. Consequently, there has been a lot of interest in formulations with discretely distributed random parameters, which are often created by sampling.

Given problem SP with decision variables that are pure integer, Tayur et al. [32] solve the problem using an algebraic geometry approach, and Aringhieri [3] applies a tabu search for finding feasible solutions. For problems with random T(ω) and possibly continuous decision variables, a deterministic equivalent problem can be formulated as a ‘big-M’ mixed 0–1 integer program [20]. An advantage of such a formulation is that it can be solved directly by a commercial mixed-integer programming (MIP) solver. However, it often has an extremely weak linear programming (LP) relaxation that hinders finding the optimal solution. To strengthen the LP relaxation, Ruszczyński [28] derives cutting planes based on precedence constraint knapsack polyhedra and gives a specialized branch-and-cut algorithm for the problem.
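For a finite scenario set with probabilities p_ω, the big-M deterministic equivalent referred to above takes the following standard form, sketched here with a binary variable z_ω that equals 1 when scenario ω is allowed to be violated and a sufficiently large constant M:

```latex
\begin{align*}
\min\ & cx \\
\text{s.t. } & Ax \ge b, \\
& T(\omega)x + M z_\omega \mathbf{1} \ge r(\omega) && \forall\, \omega \in \Omega, \\
& \sum_{\omega \in \Omega} p_\omega z_\omega \le 1 - \alpha, \\
& x \ge 0,\ z_\omega \in \{0,1\} && \forall\, \omega \in \Omega.
\end{align*}
```

The knapsack constraint on the z_ω caps the total probability of violated scenarios at 1 − α; the weakness of the LP relaxation stems from the big-M terms.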

This paper focuses on theoretical results that can be used for general solution techniques for SP. The main significance of these results is that they are valid for joint chance-constrained problems with discretely distributed random technology matrices and right-hand side vectors. We introduce a new class of optimality cuts, called irreducibly infeasible subsystem (IIS) cuts, for strengthening the LP relaxations of the deterministic equivalent MIP reformulation of SP. We also present a method for improving the upper bound found by the algorithm for the case when no IIS cut can be identified. We then derive a branch-and-cut method based on the IIS cuts, termed the ‘IIS branch-and-cut algorithm’, and discuss its implementation. Finally, we apply the IIS branch-and-cut algorithm to randomly generated large-scale instances arising in optimal vaccine allocation for epidemic prevention.

The rest of the paper is organized as follows: In the next section we give some background on the problem and present the MIP reformulation of the problem that can be solved directly. In Section 3 we derive IIS cuts and an upper bound improvement strategy for the IIS branch-and-cut algorithm. We present and discuss an implementation of the IIS branch-and-cut algorithm in Section 4 and give computational results in Section 5. Finally, we conclude with a summary and point out further research topics in Section 6.

Section snippets

Preliminaries

We make the following assumptions on SP throughout the rest of the paper:

  • (A1)

    The random variable ω̃ is discretely distributed with |Ω| < ∞.

  • (A2)

    Bounds 0 ≤ x ≤ U on x are included in the constraint set Ax ≥ b.

  • (A3)

    The polyhedron P1 = {x ∈ R^{n1} | Ax ≥ b} is nonempty and compact.

Assumption (A1) makes the problem tractable while assumption (A2) is needed solely to make the implementation of the cut generation LP more clear and does not restrict the application of the results of this paper. Assumption (A3) is mainly needed to keep the

IIS cuts

Let us begin by defining some further notation we will use throughout the rest of the paper. At an arbitrary node of a branch-and-bound (BAB) search tree, let L ⊆ Ω and U ⊆ Ω denote the sets of all scenarios ω such that zω is set to 0 and to 1, respectively. Also, let u − ϵ denote the current incumbent objective value minus a sufficiently small value. We refer to a scenario ω as being forced into the problem at a node of the BAB tree if the binary decision variable zω is set to 0. This is the
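The notion of an irreducibly infeasible subsystem can be made concrete with deletion filtering, a classical technique due to Chinneck (cited in the references): remove constraints one at a time, permanently dropping any whose removal leaves the system infeasible; what remains is one IIS. The sketch below applies the idea to a toy system Ax ≤ b via SciPy; it illustrates IIS detection in general, not the specific cut generation LP of Section 3.

```python
import numpy as np
from scipy.optimize import linprog

def is_feasible(A, b):
    # Feasibility test: minimize 0 subject to A x <= b with x free;
    # status 0 means a feasible point was found.
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return res.status == 0

def deletion_filter(A, b):
    # Deletion filter: returns the row indices of one IIS of A x <= b.
    keep = list(range(A.shape[0]))
    for i in range(A.shape[0]):
        trial = [j for j in keep if j != i]
        if trial and not is_feasible(A[trial], b[trial]):
            keep = trial  # row i is not needed for infeasibility; drop it
    return keep

A = np.array([[-1.0, 0.0],   # -x1 <= -2  (i.e., x1 >= 2)
              [ 1.0, 0.0],   #  x1 <= 1
              [ 0.0, 1.0]])  #  x2 <= 5
b = np.array([-2.0, 1.0, 5.0])
print(deletion_filter(A, b))  # -> [0, 1]: x1 >= 2 conflicts with x1 <= 1
```

Every remaining row is necessary for infeasibility, so the final set is irreducible: dropping any single one of its rows restores feasibility.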

A branch-and-cut algorithm

We are now in a position to illustrate the use of the IIS ideas developed in the previous section within a branch-and-cut framework. We show how the IIS cuts and the upper bound improvement fit into an exact method for solving problem (2). Let k denote the node index and K denote the total number of nodes in the BAB search tree. The sets of all zω fixed to 0 or to 1 at node k are denoted Lk and Uk, respectively. The set of open nodes in the search tree is given by N, while an individual
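The node bookkeeping just described can be sketched generically. The code below is a schematic of best-first branch-and-bound over the binary z variables, with sets L and U of variables fixed to 0 and 1 and a heap playing the role of the open-node set N; the toy bounding function stands in for the LP-relaxation bound and is not the authors' IIS cut machinery.

```python
from heapq import heappush, heappop
from itertools import count
from math import inf

def branch_and_bound(n, lower_bound):
    # Best-first BAB over n binary variables. L: indices fixed to 0,
    # U: indices fixed to 1. lower_bound(L, U) must be a valid lower
    # bound on any completion and exact when no variable is free.
    best = inf
    tick = count()  # tie-breaker so the heap never compares sets
    open_nodes = [(lower_bound(set(), set()), next(tick), set(), set())]
    while open_nodes:
        bound, _, L, U = heappop(open_nodes)
        if bound >= best:
            continue  # node pruned by bound
        free = [i for i in range(n) if i not in L and i not in U]
        if not free:
            best = bound  # leaf: the bound is the exact objective
            continue
        i = free[0]  # branch on the first free variable
        for cL, cU in ((L | {i}, U), (L, U | {i})):
            b = lower_bound(cL, cU)
            if b < best:
                heappush(open_nodes, (b, next(tick), cL, cU))
    return best

# Toy instance: fixing z_i = 0 costs c0[i], fixing z_i = 1 costs c1[i];
# free variables contribute their cheaper option (a valid lower bound).
c0, c1 = [1, 5, 2], [4, 2, 3]
def toy_bound(L, U):
    return (sum(c0[i] for i in L) + sum(c1[i] for i in U)
            + sum(min(c0[i], c1[i]) for i in range(3)
                  if i not in L and i not in U))

print(branch_and_bound(3, toy_bound))  # -> 5
```

In the IIS branch-and-cut algorithm, each node would additionally solve a cut generation LP and append any violated IIS cuts before bounding; that step is omitted here.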

Computational results

We now present some computational results showing the effectiveness of the IIS branch-and-cut algorithm in solving formulation (2). We ran our tests on instances from an application developed in Tanner et al. [31] involving the optimal allocation of vaccines under parameter uncertainty. We created five random replications of each problem size to guard against pathological cases and better evaluate the robustness of the computational results. The instances were created by sampling uniformly from

Conclusion

In this paper we have derived a class of optimality cuts for jointly chance-constrained stochastic programs with random technology matrices. We have defined an upper bound generating formulation of the problem that allows cuts to be generated at every non-integer point of the linear relaxation of the problem. The cuts are derived in order to identify sets of scenarios that cannot all be satisfied in the optimal solution to the problem. We also have given a method for improving the upper bound

Acknowledgments

The authors are grateful to the anonymous referees for their useful comments, which greatly helped improve the presentation of this paper.

References (32)

  • R. Aringhieri

    Solving chance-constrained programs combining tabu search and simulation

  • J.F. Benders

    Partitioning procedures for solving mixed-variable programming problems

    Numerische Mathematik

    (1962)
  • J. Birge et al.

    Introduction to Stochastic Programming

    (1997)
  • M.C. Campi, S. Garatti, Chance-constrained optimization via randomization: Feasibility and optimality, Optimization...
  • M.S. Cheon et al.

    A branch-reduce-cut algorithm for the global optimization of probabilistically constrained linear programs

    Mathematical Programming

    (2006)
  • J.W. Chinneck

    Finding a useful subset of constraints for analysis in an infeasible linear program

    INFORMS Journal on Computing

    (1997)