Continuous Optimization
A fresh view on the tolerance approach to sensitivity analysis in linear programming

https://doi.org/10.1016/j.ejor.2004.01.050

Abstract

The tolerance approach to sensitivity analysis in linear programming aims at finding a unique numerical value (tolerance) representing the maximum absolute perturbation which can be applied simultaneously and independently on each right-hand-side or objective coefficient without affecting the optimality of the given basis. Some extensions have been proposed in the literature, which allow for individual tolerances for each coefficient, thus enlarging the tolerance region. In this paper we review the main results concerning the approach, giving new and simpler proofs, and we propose an efficient geometric algorithm returning a tolerance region that is maximal with respect to inclusion. We compare our method with the existing ones on two examples, showing how a priori information can be naturally exploited by our algorithm to further enlarge individual tolerances.

Introduction

In practical applications of optimization models, data may be either imprecise estimates of real values or functions of parameters under the control of the decision maker. This fact confers great relevance to the analysis of the sensitivity of the optimal solutions and the optimal objective value to data perturbations.

A traditional approach to sensitivity analysis in linear programming is to consider the stability of a given optimal basis with respect to perturbations of the right-hand-side (RHS) and the objective (OBJ) vectors, the so-called RIM vectors. This is a well-established field, but a practical interpretation of such stability conditions by decision makers is not straightforward. Indeed, from a theoretical point of view, it is easy to describe exactly in terms of linear inequalities a critical region defined as the set of all RIM vector perturbations which do not affect the optimality of the current basis [8]. From a practical point of view, the description of such a multidimensional set may be difficult to interpret. This drawback may be overcome by restricting attention to variations of a single entry of the RIM vectors at a time. In other words, for every coefficient, we look for the critical interval where such a coefficient may vary without affecting the optimality of the current basis, while all other coefficients are fixed to their original values. Such an approach is known as ordinary sensitivity analysis and it is usually implemented in commercial packages for linear optimization. A major deficiency of ordinary sensitivity analysis is that the obtained critical intervals are not valid when different coefficients vary simultaneously. There are two approaches to sensitivity analysis which try to surmount this deficiency: the 100% rule by S.P. Bradley, A.C. Hax and T.L. Magnanti, and the tolerance approach by R.E. Wendell.

The 100% rule [1] exploits two facts: (i) the intervals obtained from ordinary sensitivity analysis are subsets of the critical region, and (ii) the critical region is convex. Hence, any convex combination of points contained in critical intervals is contained in the critical region and thus the corresponding perturbation preserves the optimality of the current basis while allowing simultaneous variations on the RIM coefficients. Unfortunately, the 100% rule obtains a set which is again difficult to interpret.
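In its usual form for simultaneous objective-coefficient changes Δc_j, with U_j and L_j denoting the allowable increase and decrease from ordinary sensitivity analysis, the rule can be stated as follows (a standard paraphrase of [1], not a quotation):

```latex
\sum_{j:\,\Delta c_j>0}\frac{\Delta c_j}{U_j}
\;+\;
\sum_{j:\,\Delta c_j<0}\frac{|\Delta c_j|}{L_j}
\;\leqslant\; 1
\quad\Longrightarrow\quad
\text{the current basis remains optimal.}
```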

The tolerance approach [10] focuses on simultaneous and independent variations of the RIM coefficients. More precisely, the tolerance approach returns a unique numerical value (tolerance) representing the maximum absolute perturbation which can be applied simultaneously and independently on each RIM coefficient without affecting the optimality of the given basis. In this sense, the tolerance approach tries to resolve the trade-off between allowing multidimensional perturbations and giving clear results. The original set of allowed perturbations has been extended in two ways: (i) by allowing individual tolerances for each coefficient [14], [11], [7], and (ii) by using a priori information on the allowed coefficient variation intervals, through the application of a suitable algorithm [9]. For a detailed survey on the tolerance approach and its application to different optimization models see [12]. See also [13] for a characterization of the potential loss of optimality for variations of the cost coefficients beyond the maximum tolerance.
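For additive RHS perturbations, this maximum tolerance admits a simple closed form, stated here for a given optimal basis B with rows of zero denominator skipped (a standard minimum-ratio result in the tolerance literature, cf. [10]):

```latex
\tau^{*} \;=\; \min_{i:\;\sum_j |(A_B^{-1})_{ij}|>0}\;
\frac{(A_B^{-1}b)_i}{\sum_j |(A_B^{-1})_{ij}|}.
```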

In this paper, we review the tolerance approach for general linear programming models giving new and simpler proofs, using only the fundamentals of linear algebra and linear optimization. Moreover, we suggest a geometric algorithm which fits in a unique framework the original approach and the two extensions (i) and (ii) above, and returns regions of allowed perturbation containing those obtained in [9], [11], [14]. Our work originates from the observation that the original tolerance approach and its subsequent extensions may give unsatisfactory results on very simple cases, as witnessed by the following example.

Consider the problem:

    max   x1 + x2
    s.t.  x1 − x2 + x3 = −1,
          x1 + x2 + x4 = 3,
          x1, x2, x3, x4 ⩾ 0,                    (1)

where x1 and x2 correspond to an optimal basis. This basis remains optimal for all RHS vectors (u1,u2)T such that u1+u2⩾0 and −u1+u2⩾0 (see Section 2). Assume that we are interested in simultaneous and independent additive perturbations around the given RHS vector (−1,3)T. Then Wendell's original approach [10] returns a unique allowable perturbation interval for all RHS coefficients, namely [−1,+1]; the corresponding set of allowable RHS vectors is depicted in Fig. 1(a) (see Section 3, Theorem 1). Moreover, the extension proposed in [14], [11] returns a distinct allowable perturbation interval for each RHS coefficient, namely [−1,2] for the first coefficient and [−1,+∞) for the second one, resulting in the larger region depicted in Fig. 1(b) (see Section 3, Theorem 2). However, a graphical inspection reveals that an even larger region, depicted in Fig. 1(c), may be allowed.
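The numbers above are easy to check mechanically. The sketch below (verification code of this editor, not part of the paper) recomputes Wendell's maximum additive RHS tolerance for this example via the standard minimum-ratio expression, and confirms that the corner points of the enlarged region [−1,2]×[−1,+∞) keep the basis primal feasible:

```python
import numpy as np

A_B = np.array([[1.0, -1.0],
                [1.0,  1.0]])      # basic columns of x1, x2
b = np.array([-1.0, 3.0])          # given RHS vector
A_B_inv = np.linalg.inv(A_B)

# Wendell's maximum additive tolerance: the basis stays optimal for all
# perturbations with |delta_i| <= tau, where tau is the minimum ratio below.
ratios = (A_B_inv @ b) / np.abs(A_B_inv).sum(axis=1)
tau = ratios.min()
print(tau)                          # -> 1.0

def basis_optimal(u):
    """RHS vector u keeps the basis {x1, x2} primal feasible."""
    return bool(np.all(A_B_inv @ u >= -1e-9))

# corners of the individually enlarged region [-1, 2] x [-1, +inf)
for d1 in (-1.0, 2.0):
    for d2 in (-1.0, 1e6):          # 1e6 as a finite stand-in for +inf
        assert basis_optimal(b + np.array([d1, d2]))
```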

As a matter of fact, we will show that our algorithm applied to the above example returns exactly the region depicted in Fig. 1(c). In general, we prove that the tolerance regions we get are maximal with respect to inclusion, and dominate those obtained in [11], [14]. We discuss our results on two examples taken from the literature, and we show how a priori information can be exploited in a natural way to further enlarge the obtained tolerance regions.

Section snippets

Preliminaries and notation

In this Section we give some notation and state some known facts in a convenient way.

A polyhedron P⊆Rd is the solution set of a system of linear inequalities: P={x∈Rd:Ax⩽b}, where A is a real r×d matrix and b is a real r-vector. If b=0 then P is called a polyhedral cone. A box is a polyhedron of the form {x∈Rd:l⩽x⩽u}, where l and u are d-vectors defined over the extended reals: R∪{±∞}. We denote by B(l,u) the box identified by vectors l and u. Note that B(l,u) is nonempty if and only if l⩽u and
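As a small illustration (this editor's sketch, not from the paper), the nonemptiness and membership tests for a box B(l,u) translate directly into code, with numpy's ±inf playing the role of the extended reals:

```python
import numpy as np

def box_nonempty(l, u):
    """B(l, u) is nonempty iff l <= u componentwise."""
    return bool(np.all(np.asarray(l) <= np.asarray(u)))

def in_box(x, l, u):
    """Membership test: x in B(l, u)."""
    x = np.asarray(x)
    return bool(np.all(l <= x) and np.all(x <= u))

l = np.array([-1.0, -np.inf])
u = np.array([ 2.0,  np.inf])
print(box_nonempty(l, u))           # -> True
print(in_box([0.0, 100.0], l, u))   # -> True
```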

Perturbations and tolerances

In this section we review the tolerance approach for RIM vectors [10] and its extensions [7], [11], [14]. The review is based on a polyhedral interpretation, which allows for proofs using only fundamentals of linear algebra and linear optimization, without resorting to concepts from calculus. We also introduce a notation which allows us to substantially unify the RHS and OBJ cases, considerably shortening the analysis with respect to the existing literature, where the two cases need separate

Computing individual tolerances

We first put the problem of computing individual tolerances in a purely geometric setting, and then we suggest a general algorithm.
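The geometric setting — growing a box of perturbations inside the polyhedral cone of allowed RHS vectors — can be conveyed by a naive coordinate-wise greedy sketch. This is this editor's construction for illustration only, not the paper's Algorithm 1; the resulting box is maximal with respect to inclusion but depends on the order in which the sides are processed:

```python
import numpy as np

def grow_box(M, b):
    """Greedily extend each side of a perturbation box {d : lo <= d <= hi}
    so that b + d stays in the cone {u : M u >= 0}; assumes M b >= 0.
    Each row of M attains its minimum over the box at the corner using
    lo[j] where M[i, j] > 0 and hi[j] where M[i, j] < 0."""
    n = M.shape[1]
    lo, hi = np.zeros(n), np.zeros(n)
    for j in range(n):
        for up in (True, False):
            worst = np.where(M > 0, lo, hi)
            contrib = M * np.where(M == 0, 0.0, worst)  # avoid 0 * inf
            # row slacks with coordinate j's contribution removed
            base = M @ b + contrib.sum(axis=1) - contrib[:, j]
            if up:
                rows = M[:, j] < 0      # raising hi[j] hurts these rows
                hi[j] = (base[rows] / -M[rows, j]).min() if rows.any() else np.inf
            else:
                rows = M[:, j] > 0      # lowering lo[j] hurts these rows
                lo[j] = -(base[rows] / M[rows, j]).min() if rows.any() else -np.inf
    return lo, hi

# cone of RHS vectors keeping the basis of problem (1) optimal:
# u1 + u2 >= 0 and -u1 + u2 >= 0, around b = (-1, 3)
M = np.array([[1.0, 1.0], [-1.0, 1.0]])
b = np.array([-1.0, 3.0])
lo, hi = grow_box(M, b)
print(lo, hi)
```

With this processing order the sketch returns the box [−2,4]×[0,+∞) of perturbations: every side touches a facet of the cone, so no side can be extended further, yet a different order yields a different maximal box — which is precisely why a principled algorithm, and the use of a priori information, matter.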

An example

In this section we illustrate the results and methods described above by means of problem (1). Using the notation introduced in Section 2, consider the basis B={1,2}. As

    AB−1 = [  1/2  1/2
             −1/2  1/2 ]

and AN=I2, one easily obtains P(B)={u∈R2:(1/2)u1+(1/2)u2⩾0,−(1/2)u1+(1/2)u2⩾0} and D(B)={v∈R4:(1/2)v1−(1/2)v2−v3⩾0,(1/2)v1+(1/2)v2−v4⩾0}. Since the RHS vector belongs to P(B) and the OBJ vector belongs to D(B), the given basis is optimal. We analyze the RHS tolerances with respect to additive variations, i.e.,
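The membership checks above are easy to reproduce numerically; the sketch below (verification code of this editor, not from the paper) recomputes AB−1, the basic solution, and the reduced costs for problem (1):

```python
import numpy as np

A = np.array([[1.0, -1.0, 1.0, 0.0],
              [1.0,  1.0, 0.0, 1.0]])
b = np.array([-1.0, 3.0])            # RHS vector
c = np.array([1.0, 1.0, 0.0, 0.0])   # OBJ vector (maximization)
B, N = [0, 1], [2, 3]                # basis {x1, x2}, nonbasic {x3, x4}

A_B_inv = np.linalg.inv(A[:, B])     # -> [[0.5, 0.5], [-0.5, 0.5]]

x_B = A_B_inv @ b                    # basic solution: (1, 2)
reduced = c[B] @ A_B_inv @ A[:, N] - c[N]   # reduced costs: (0, 1)

# b in P(B): primal feasibility;  c in D(B): dual feasibility
assert np.all(x_B >= 0) and np.all(reduced >= 0)
```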

Discussion

In this section we compare the tolerances returned by Algorithm 1 with those of Wendell's original tolerance approach and extensions which subsequently appeared in the literature. Furthermore, we discuss the use of a priori information within our method to further enlarge tolerances. We use two illustrative examples. The first is a very small example used by all papers on the tolerance approach considered here; it is the product-mix problem by Dantzig [2, pp. 50–55], which may be written as

Conclusions

The tolerance approach to sensitivity analysis in linear programming is an interesting alternative to ordinary sensitivity analysis, since the intervals it generates allow for simultaneous and independent variations on the estimated coefficients. We have reviewed the main results concerning the tolerance approach, giving new proofs, and we have proposed a geometric algorithm to obtain improved individual tolerances whose corresponding tolerance regions are maximal with respect to inclusion; the

Acknowledgements

The author thanks the anonymous referees whose comments and suggestions improved the quality and the readability of the paper.

References (14)

  • H.-F. Wang et al., Multi-parametric analysis of the maximum tolerance in a linear programming problem, European Journal of Operational Research (1993)
  • S.P. Bradley et al., Applied Mathematical Programming (1977)
  • G.B. Dantzig, Linear Programming and Extensions (1963)
  • K. Fukuda, cdd and cddplus home page. Available from...
  • K. Fukuda et al., Double Description Method Revisited (1996)
  • The Math Works, Inc., Optimization Toolbox User's Guide, version 2,...
  • The Math Works, Inc., Using MATLAB, version 6,...
