
Allocating multiple defensive resources in a zero-sum game setting

Annals of Operations Research

Abstract

This paper investigates the problem of allocating multiple defensive resources to protect multiple sites against possible attacks by an adversary. The effectiveness of the resources in reducing potential damage to the sites is assumed to vary across the resources and across the sites, and the availability of the resources is constrained. The problem is formulated as a two-person zero-sum game with piecewise-linear utility functions and polyhedral action sets. The utility functions are linearized in order to reduce the computation of the game’s Nash equilibria to the solution of a pair of linear programs (LPs). This reduction reveals the structure of Nash equilibrium allocations, in particular, monotonicity properties of these allocations with respect to the amounts of available resources. Finally, allocation problems in non-competitive settings are examined (i.e., situations where the attacker chooses its targets independently of actions taken by the defender) and the structure of solutions in such settings is compared to that of Nash equilibria.


References

  • Berkovitz, L. D. (2002). Convexity and optimization in ℝⁿ. New York: Wiley.

  • Bier, V. M., Haphuriwat, N., Menoyo, J., Zimmerman, R., & Culpen, A. M. (2008). Optimal resource allocation for defense of targets based on differing measures of attractiveness. Risk Analysis, 28(3), 763–770.

  • Blackett, D. W. (1954). Some Blotto games. Naval Research Logistics, 1(1), 55–60.

  • Canbolat, P., Golany, B., Mund, I., & Rothblum, U. G. (2012). A stochastic competitive R&D race where “winner takes all”. Operations Research, 60(3), 700–715.

  • Charnes, A. (1953). Constrained games and linear programming. Proceedings of the National Academy of Sciences, 38, 639–641.

  • Cottle, R. W., Johnson, E., & Wets, R. (2007). George B. Dantzig (1914–2005). Notices of the American Mathematical Society, 54, 344–362.

  • Dantzig, G. B. (1957). Discrete-variable extremum problems. Operations Research, 5(2), 266–277.

  • Franke, J., & Öztürk, T. (2009). Conflict networks. Tech. report, RUB, Department of Economics.

  • Golany, B., Kaplan, E. H., Marmur, A., & Rothblum, U. G. (2009). Nature plays with dice—terrorists do not: allocating resources to counter strategic versus probabilistic risks. European Journal of Operational Research, 192(1), 198–208.

  • Katoh, N., & Ibaraki, T. (1998). Resource allocation problems. In D. Z. Du & P. M. Pardalos (Eds.), Handbook of combinatorial optimization (pp. 159–260). Dordrecht: Kluwer Academic.

  • Levitin, G., & Hausken, K. (2009). Intelligence and impact contests in systems with redundancy, false targets, and partial protection. Reliability Engineering and System Safety, 94, 1927–1941.

  • Luss, H. (1992). Minimax resource allocation problems: optimization and parametric analysis. European Journal of Operational Research, 60, 76–86.

  • Luss, H. (1999). On equitable resource allocation problems: a lexicographic minimax approach. Operations Research, 47, 361–378.

  • Luss, H. (2012). Equitable resource allocation: models, algorithms, and applications. New York: Wiley.

  • Martin, D. H. (1975). On the continuity of the maximum in parametric linear programming. Journal of Optimization Theory and Applications, 17(3–4), 205–210.

  • Papadimitriou, C. H. (2001). Algorithms, games and the internet. In STOC’01, proceedings of the thirty-third annual ACM symposium on theory of computing.

  • Powell, R. (2007). Defending against terrorist attacks with limited resources. American Political Science Review, 101, 527–541.

  • Roberson, B. (2006). The Colonel Blotto game. Economic Theory, 29(1), 1–24.

  • Wolfe, P. (1956). Determinateness of polyhedral games. In H. W. Kuhn & A. W. Tucker (Eds.), Linear inequalities and related systems. Princeton: Princeton University Press.

Acknowledgements

The authors would like to thank Pelin Canbolat, Edward H. Kaplan, and Hanan Luss for their comments. B. Golany and U.G. Rothblum were supported in part by the Daniel Rose Technion-Yale Initiative for Research on Homeland Security and Counter-Terrorism. N. Goldberg was supported in part by the Daniel Rose Technion-Yale Initiative for Research on Homeland Security and Counter-Terrorism, the Center for Absorption in Science of the Ministry of Immigrant Absorption, and the Council of Higher Education, State of Israel.

Author information

Correspondence to N. Goldberg.

Additional information

Published posthumously. Uriel G. Rothblum passed away unexpectedly on March 26, 2012, while this paper was under review.

Appendices

Appendix A: A minmax proposition for zero-sum bilinear games

The following records a classic characterization of Nash equilibria for two-person, zero-sum games with bilinear payoffs and with action sets that are polytopes (see Charnes 1953 and Wolfe 1956). It is included for the sake of completeness.

Proposition A

Suppose \(X\subseteq\mathbb{R}^{n}\) and \(Y\subseteq\mathbb{R}^{m}\) are polytopes and \(Q\in\mathbb{R}^{m\times n}\). Consider the game where \(X\) is the set of options of player I, \(Y\) is the set of options of player II, and upon selection of \(x\in X\) and \(y\in Y\) the payoff of player II to player I is \(y^{\top}Qx\). Then

  1. (a)

There exists a Nash equilibrium and \(\max_{x\in X}\min_{y\in Y} y^{\top}Qx=\min_{y\in Y}\max_{x\in X} y^{\top}Qx\).

  2. (b)

\((x^{*},y^{*})\) is a Nash equilibrium if and only if

    $$ x^* \in \operatorname {argmax}\limits _{x \in X} \Bigl[\min_{y \in Y} \ y^\top Qx \Bigr] $$
    (20)

    and

    $$ y^* \in \operatorname {argmin}\limits _{y \in Y} \Bigl[\max_{x \in X} \ y^\top Qx \Bigr]. $$
    (21)
  3. (c)

Suppose \(X=\{x\in\mathbb{R}^{n} \mid Ax\leq a\}\) and \(Y=\{y\in\mathbb{R}^{m} \mid By\geq b\}\), where \((A,a)\in\mathbb{R}^{p\times n}\times\mathbb{R}^{p}\) and \((B,b)\in\mathbb{R}^{q\times m}\times\mathbb{R}^{q}\) (with \(p\) and \(q\) positive integers). Then:

    1. (i)

\(x^{*}\) satisfies (20) if and only if, for some \(\lambda^{*}\in\mathbb{R}^{q}\), \((x^{*},\lambda^{*})\) solves the LP

$$ \max\ b^{\top}\lambda \quad\text{subject to}\quad B^{\top}\lambda = Qx,\ Ax\leq a,\ \lambda\geq0 $$
(22)
    2. (ii)

\(y^{*}\) satisfies (21) if and only if, for some \(\mu^{*}\in\mathbb{R}^{p}\), \((y^{*},\mu^{*})\) solves the LP

$$ \min\ \mu^{\top}a \quad\text{subject to}\quad \mu^{\top}A = y^{\top}Q,\ By\geq b,\ \mu\geq0 $$
(23)
    3. (iii)

The LPs in (22) and (23) are duals of each other, and their common optimal objective value equals \(\max_{x\in X}\min_{y\in Y} y^{\top}Qx=\min_{y\in Y}\max_{x\in X} y^{\top}Qx\).

Proof

Consider the representation of \(X\) and \(Y\) given in (c). As \(X\) and \(Y\) are compact, continuity arguments show that the maxima and minima in (a) are well-defined (and there is no need to use sup’s and inf’s). Further, standard LP duality shows that for each \(x\in\mathbb{R}^{n}\),

$$\min\bigl\{y^{\top}Qx \bigm| By \geq b \bigr\} = \max\bigl\{b^{\top}\lambda \bigm| B^{\top}\lambda = Qx,\ \lambda\geq0 \bigr\} $$

and for each \(y\in\mathbb{R}^{m}\),

$$\max\bigl\{y^{\top}Qx \bigm| Ax \leq a \bigr\} = \min\bigl\{\mu^{\top}a \bigm| \mu^{\top}A = y^{\top}Q,\ \mu\geq0 \bigr\}, $$

proving (c). It further follows that the maxmin and minmax of (a) equal the optimal objective values of the LPs in (22) and (23), respectively. As the latter are dual LPs with finite optimal objective values, (a) follows. Finally, part (b) now follows from standard arguments. □
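For a concrete illustration of Proposition A, consider the special case of a finite (matrix) game in which \(X\) and \(Y\) are probability simplices. A maxmin strategy and the value can then be computed from a single LP of the form (22). The sketch below is illustrative only; the function name and the scipy-based formulation are assumptions, not the paper's notation.

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(Q):
    """Value and a maxmin strategy x* of the matrix game with payoff y^T Q x,
    where X and Y are probability simplices (a special case of Proposition A).
    We maximize v subject to (Qx)_i >= v for every pure strategy i of the
    minimizer, sum(x) = 1, x >= 0."""
    m, n = Q.shape
    # Decision vector is (x_1, ..., x_n, v); linprog minimizes, so use -v.
    c = np.concatenate([np.zeros(n), [-1.0]])
    # v - (Qx)_i <= 0 for each row i:  [-Q | 1] @ (x, v) <= 0.
    A_ub = np.hstack([-Q, np.ones((m, 1))])
    b_ub = np.zeros(m)
    # Simplex constraint sum_i x_i = 1 (v has coefficient 0).
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]  # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]
```

Consistent with part (iii), the optimal dual variables of the row constraints recover a minmax strategy \(y^{*}\), and the common optimal objective value is the value of the game.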

Remark

When X and Y are unbounded polyhedra, the maxmin and minmax in part (a) of Proposition A are supinf and infsup, respectively. Still, if one of these expressions is finite, then the sup’s and inf’s can be replaced by max’s and min’s, respectively, and the conclusions and proof of Proposition A hold. The next example has unbounded X and Y for which Proposition A does not apply.

Example 3

Let

$$X=\biggl\{\binom{x}{1} \biggm| x\geq0 \biggr\},\qquad Y=\biggl\{\binom{1}{y} \biggm| y\geq0 \biggr\},\qquad Q=\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}. $$

As \(\binom{1}{y}^{\top}Q\binom{x}{1}=x-y\), we have that the supinf equals −∞ while the infsup equals +∞.

Appendix B: Proof of Proposition 1

Proof

(a): Consider \(x\in\mathcal{X}(C)\). If \(\sum_{j\in M}a_{ij}x_{ij}\leq b_{i}\) for each \(i\in N\), then \(\hat{\theta}(\pi)\leq \sum_{i\in N}\pi_{i}(b_{i}-\sum_{j\in M}a_{ij}x_{ij})_{+}=\sum_{i\in N}\pi_{i}(b_{i}-\sum_{j\in M}a_{ij}x_{ij})\); taking the minimum over \(x\) satisfying (7b)–(7d) yields \(\hat{\theta}(\pi)\leq\theta^{*}(\pi)\). Next assume that \(\hat{x}\) is optimal for (6). Let \(x'\) coincide with \(\hat{x}\) except that for each \(i\) with \(b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij} < 0\), the \(\hat{x}_{ij}\)’s are reduced so that \(b_{i}-\sum_{j\in M}a_{ij} x'_{ij} = 0\). It then follows that \(x'\in\mathcal{X}(C)\) and \(0\leq b_{i}-\sum_{j\in M}a_{ij}x'_{ij}=(b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij})_{+}\) for each \(i\in N\). In particular, \(x'\) is feasible for (7a)–(7d) and \(\hat{\theta}(\pi)=\sum_{i\in N}\pi_{i}(b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij})_{+}=\sum_{i\in N}\pi_{i}(b_{i}-\sum_{j\in M}a_{ij}x'_{ij})\geq\theta^{*}(\pi)\).

(b): If x is feasible for (7a)–(7d), then it is (trivially) feasible for (6) and (by feasibility for (7c))

$$\sum_{i\in N}\pi_i\biggl(b_i- \sum_{j\in M}a_{ij}x_{ij}\biggr)_+= \sum_{i\in N}\pi_i\biggl(b_i- \sum_{j\in M}a_{ij}x_{ij}\biggr). $$

As \(\hat {\theta}(\pi)=\theta^{*}(\pi)\), x is optimal for (6) if and only if it is optimal for (7a)–(7d). □
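The trimming step used in the proof above (reducing the \(\hat{x}_{ij}\)'s at oversaturated sites until \(b_{i}-\sum_{j\in M}a_{ij}x'_{ij}=0\)) can be sketched in code. Uniformly rescaling each oversaturated row is one concrete way to perform the reduction; the proof only requires some reduction achieving the equality, and all names and shapes below are illustrative.

```python
import numpy as np

def trim_overallocation(x, a, b):
    """Reduce allocations at oversaturated sites (b_i - sum_j a_ij x_ij < 0)
    until the residual there is exactly zero, as in the proof of
    Proposition 1.  Assumes b > 0; rows are scaled down uniformly."""
    x = np.array(x, dtype=float)
    cover = (a * x).sum(axis=1)            # damage reduction at each site
    for i in np.where(cover > b)[0]:
        x[i] *= b[i] / cover[i]            # scale row i so coverage == b_i
    return x
```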

Appendix C: Proof of Proposition 2

Proof

Write LP (7a)–(7d) with equality constraints replacing inequalities by adding nonnegative slack variables \(z_{j}\) for \(j\in M\) and \(s_{i}\) for \(i\in N\) to the corresponding \(m\) constraints of (7b) and the constraints of (7c), respectively. The coefficient matrix has full rank, and standard results from LP assure that this LP has a basic optimal solution \((x',s,z)\in\mathbb{R}^{n\times m}\times\mathbb{R}^{n}\times\mathbb{R}^{m}\) with at most \(m+n\) variables that are strictly positive; in particular, \(x'\) is optimal for (7a)–(7d). For each \(i\in L(x')\cup\nu(x')\) we have \(s_{i}>0\). As \(n=|U(x')|+|L(x')|+|\nu(x')|\), it follows that

Further, as \(|\{j\in M \mid x'_{ij}>0 \}|\geq1\) for each \(i\in U(x')\cup\nu(x')\), it follows that

So, \(|\nu(x')|\leq m\). □

Appendix D: Proof of Proposition 4

Proof

For \(x\in\mathcal{X}\) let \(N^{-}(x)\equiv\{i\in N \mid b_{i}-\sum_{j\in M}a_{ij}x_{ij}<0\}\) and \(N^{+}(x)\equiv N\setminus N^{-}(x)\).

(a) and (b): Assume that \((x^{*},w^{*})\) is a Nash equilibrium of the linearized game, i.e.,

$$ \max_{w\in\mathcal{W}}\hat{u}_W\bigl(x^*,w \bigr)=\hat{u}_W\bigl(x^*,w^*\bigr)= \min_{x\in\mathcal{X}}\hat {u}_W\bigl(x,w^*\bigr). $$
(24)

It follows from the left-hand side of (24) and the explicit expression for \(\hat{u}_{W}(\cdot,\cdot)\) in (8) that \(w^{*}_{i}=0\) for each \(i\in N^{-}(x^{*})\); consequently, \(\hat{u}_{W}(x^{*},w^{*})=u_{W}(x^{*},w^{*})\). As \(u_{W}(x,w)\geq \hat{u}_{W}(x,w)\) for each \((x,w)\in\mathcal{X}\times\mathcal{W}\), we conclude that for each \(x\in\mathcal{X}\)

$$ u_W\bigl(x^*,w^*\bigr)=\hat{u}_W\bigl (x^*,w^*\bigr)\leq \hat{u}_W\bigl(x,w^*\bigr) \leq u_W\bigl(x,w^*\bigr). $$
(25)

Consider any \(w\in\mathcal{W}\). Define \(\hat{w}=\hat{w}(x^{*})\in\mathbb{R}^{n}\) by

$$\hat{w}_{i}= \begin{cases} w_{i} & \text{if } i\notin N^{-}(x^{*}),\\ 0 & \text{if } i\in N^{-}(x^{*}). \end{cases} $$
As \(u_{W}(x^{*},w)=\sum_{i\in N}w_{i}(b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij})_{+} =\sum_{i\in N}\hat {w}_{i}(b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij})=\hat{u}_{W}(x^{*},\hat {w})\), the first equality of (24) and \(u_{W}(x^{*},w^{*})=\hat{u}_{W}(x^{*},w^{*})\) imply that

$$ u_W\bigl(x^*,w^*\bigr)=\hat{u}_W\bigl (x^*,w^*\bigr)\geq \hat{u}_W\bigl(x^*, \hat {w}\bigr)=u_W\bigl(x^*,w\bigr). $$
(26)

By (25)–(26), \((x^{*},w^{*})\) is a Nash equilibrium of the original game, and \(V=u_{W}(x^{*},w^{*})=\hat{u}_{W}(x^{*},w^{*})=\hat{V}\). Of course, \(V\geq0\) as \(u_{W}\geq0\).

(c): Assume that \(V>0\). In view of (a), it suffices to show that a Nash equilibrium \((x^{*},w^{*})\) of the original game is a Nash equilibrium of the linearized game. As \(\sum_{i\in N} w^{*}_{i}(b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij})=u_{W}(x^{*},w^{*})=\max_{w\in\mathcal{W}} u_{W}(x^{*},w)=V>0\),

$$ b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}=V \quad\text{for each } i\in N \text{ with } w^{*}_{i}>0, $$
(27)

and therefore \(\hat{u}_{W}(x^{*},w^{*})=\sum_{i\in N}w^{*}_{i}V\). Consider \(w\in\mathcal{W}\). It follows from \(u_{W}(x^{*},w^{*})=\sum_{i\in N}w^{*}_{i}(V)_{+}=\sum_{i\in N}w^{*}_{i}V=\hat{u}_{W}(x^{*},w^{*})\), (4), and \(u_{W}(x^{*},w)\geq \hat{u}_{W}(x^{*},w)\) that

$$ \hat{u}_W\bigl(x^*,w^* \bigr)=u_W\bigl(x^*,w^*\bigr)\geq u_W\bigl(x^*,w\bigr)\geq \hat{u}_W\bigl(x^*,w\bigr). $$
(28)

To complete the proof we show that \(\hat{u}_{W}(x^{*},w^{*})\leq \hat{u}_{W}(x,w^{*})\) for each \(x\in\mathcal{X}\). To do this, we argue that if \(x^{*}_{ij}>0\) and \(w^{*}_{i}>0\), then \(w^{*}_{i}a_{ij}\geq w^{*}_{s}a_{sj}\) for each \(s\in N\). This inequality is trivial if either \(w^{*}_{s}=0\) or \(a_{sj}=0\). Alternatively, if \(w^{*}_{s}>0\), \(a_{sj}>0\) and \(w^{*}_{i}a_{ij}<w^{*}_{s}a_{sj}\), then (27) and \(V>0\) assure that \(b_{s}-\sum_{j\in M}a_{sj}x^{*}_{sj}=V>0\), and a shift of a small amount of resource \(j\) from \(x^{*}_{ij}\) to \(x^{*}_{sj}\) will result in an allocation \(x'\) with \(s\notin N^{-}(x')\) and \(u_{W}(x',w^{*})<u_{W}(x^{*},w^{*})\), contradicting the first equality in (4). So, for each \(j\), \(\{i\in N \mid w^{*}_{i}>0 \mbox{ and } x^{*}_{ij}>0\}\subseteq \operatorname{argmax}_{i\in N} \{w^{*}_{i}a_{ij}\}\); hence, a standard result about the (continuous) knapsack problem (Dantzig 1957) assures that \(x^{*}\in \operatorname{argmax}_{x\in\mathcal{X}}[\sum_{j\in M}\sum_{i\in N}(w^{*}_{i}a_{ij})x_{ij}]\). Consequently,

$$\hat{u}_{W}\bigl(x,w^{*}\bigr)=\sum_{i\in N}w^{*}_{i}b_{i}-\sum_{j\in M}\sum_{i\in N}\bigl(w^{*}_{i}a_{ij}\bigr)x_{ij}\geq\sum_{i\in N}w^{*}_{i}b_{i}-\sum_{j\in M}\sum_{i\in N}\bigl(w^{*}_{i}a_{ij}\bigr)x^{*}_{ij}=\hat{u}_{W}\bigl(x^{*},w^{*}\bigr) \quad\text{for each } x\in\mathcal{X}. $$
 □
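The knapsack step invoked above can be made concrete: once the attack weights \(w^{*}\) are fixed, maximizing \(\sum_{j\in M}\sum_{i\in N}(w^{*}_{i}a_{ij})x_{ij}\) subject only to the supply constraints separates by resource, and Dantzig's greedy rule pours each resource entirely into a site with the largest per-unit value \(w^{*}_{i}a_{ij}\). A minimal sketch follows; the names are illustrative, not the paper's notation.

```python
import numpy as np

def greedy_best_response(w, a, C):
    """Maximize sum_j sum_i (w_i * a_ij) x_ij subject to sum_i x_ij <= C_j
    and x >= 0.  The problem separates over resources j; each resource is
    poured entirely into a site maximizing its per-unit value w_i * a_ij
    (Dantzig 1957, continuous knapsack).

    w : (n,) attack weights, a : (n, m) effectiveness, C : (m,) supplies."""
    n, m = a.shape
    x = np.zeros((n, m))
    for j in range(m):
        value = w * a[:, j]          # per-unit value of resource j at each site
        i = int(np.argmax(value))    # best site for resource j
        if value[i] > 0:             # leaving x[:, j] = 0 is optimal otherwise
            x[i, j] = C[j]
    return x
```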

Appendix E: Proof of Proposition 5

Proof

(a) and (b): Except for the equality in (13), (a) follows from Proposition 3(b′), after substituting the explicit expression of \(\hat{u}\). Next, (b) follows from the application of Proposition A of Appendix A with

(29)

(the 0’s in the last column of (29) are, respectively, in \(\mathbb{R}^{m-2}\), \(\mathbb{R}\), \(\mathbb{R}^{m\times(n-2)}\), \(\mathbb{R}^{m-2}\) and \(\mathbb{R}\)),

Next, the equality in (13) follows from the fact that for each \(v\in\mathbb{R}^{n}\), \(\max_{w\in\mathcal{W}} \sum_{i\in N} v_{i}w_{i} = \max_{i\in N} (v_{i})_{+}\).

(c): By (a)–(b), there exists \(\hat{x}\in\mathcal{X}\) satisfying (13). Let \(x^{*}\) coincide with \(\hat{x}\) except that for each \(i\) with \(b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij} < 0\), the \(\hat{x}_{ij}\)’s are reduced so that \(b_{i}-\sum_{j\in M}a_{ij} x^{*}_{ij} = 0\). Then \(\max_{i\in N} (b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij})_{+}=\max_{i\in N} (b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij})_{+}\), implying that \(x^{*}\) satisfies (13) and \(b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\geq0\) for each \(i\in N\). □

Appendix F: Proof of Lemma 1

Proof

(a): Assume that \((x',\theta^{*})\) is optimal for (11a)–(11d) and \(u\equiv\max\{i\in N \mid b_{i}\geq\theta^{*}\}\) (as \((0,\theta=b_{1})\) is feasible for (11a)–(11d), \(b_{1}\geq\theta^{*}\) and therefore \(u\) is well-defined). By the feasibility of \(x'\) for (11a)–(11d), \(b_{i}-\sum_{j\in M}a_{ij}x'_{ij}\leq\theta^{*}\) for all \(i\in N\). Let \(x^{*}\) be defined in the following way: (i) for \(i\leq u\) reduce the \(x'_{ij}\)’s so that \(\theta^{*} = b_{i}-\sum_{j \in M} a_{ij} x_{ij}^{*}\) while maintaining nonnegativity (this is possible as \(b_{i}\geq\theta^{*}\) for \(i=1,\ldots,u\)), and (ii) for \(i>u\), set \(x_{ij}^{*} = 0\) for all \(j\in M\). For each \(j\in M\), \(\sum_{i\in N}x^{*}_{ij}\leq\sum_{i\in N}x'_{ij}\leq C_{j}\); so, \(x^{*}\in\mathcal{X}\). Further, since \(b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}= b_{i}<\theta^{*}\) for all \(i>u\) (following from the definition of \(u\) and (5)), \((x^{*},\theta^{*})\) is feasible for (11a)–(11d) and \(I(x^{*})=[u]\). Also, since \((x^{*},\theta^{*})\) has objective value that equals the optimal one, \((x^{*},\theta^{*})\) is optimal for (11a)–(11d). Following from (the strict version of) (5) we have that \(b_{n}<\cdots<b_{u+1}<\theta^{*}\leq b_{u}<\cdots<b_{1}\) and therefore \(\{i\in N \mid b_{i}=\theta^{*}\}\in\{\{u\},\emptyset\}\). For \(i=1,\ldots,u-1\), \(b_{i}>\theta^{*}\) and \(\theta^{*}=b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\) imply that \(x^{*}_{ij}>0\) for some \(j\), and therefore \(i\in P(x^{*})\). Similarly, if \(\theta^{*}<b_{u}\), then \(u\in P(x^{*})\) and \(\{i\in N \mid b_{i}=\theta^{*}\}=\emptyset\). In the remaining case, \(\{u\}=\{i\in N \mid b_{i}=\theta^{*}\}\). Thus, as \(x^{*}_{ij}=0\) for all \(i>u\), it follows that \(I(x^{*})=[u]\subseteq P(x^{*})\cup\{i\in N \mid b_{i}=\theta^{*}\}\subseteq[u]\). Hence, \(P(x^{*})\cup\{i\in N \mid b_{i}=\theta^{*}\}=[u]=I(x^{*})\); in particular, \(I(x^{*})\) is consecutive. We also conclude that if \(\{i\in N \mid b_{i}=\theta^{*}\}=\emptyset\), then \(P(x^{*})=I(x^{*})=[u]\), and alternatively, if \(\{i\in N \mid b_{i}=\theta^{*}\}=\{u\}\), then \(P(x^{*})\in\{[u],[u-1]\}\). So, in either case \(P(x^{*})\) is consecutive.

(b): Feasibility of \((x^{*},\theta^{*})\) for (11a)–(11d) assures that \(\theta^{*}\geq \hat{\theta}\equiv\max_{i\in N}\{b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\}\). Further, the inequality must hold as an equality; otherwise, as \(\theta^{*}>0\), \(\hat{\theta}_{+}=\max\{\hat{\theta},0\}<\theta^{*}\) and \((x^{*},\hat{\theta}_{+})\) would be feasible for (11a)–(11d) with objective value \(\hat{\theta}_{+}<\theta^{*}\), yielding a contradiction to the optimality of \((x^{*},\theta^{*})\). As \(\theta^{*}=\max_{i\in N}\{b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\}\), (15) implies that \(I(x^{*}) = \operatorname{argmax}_{i\in N}\{b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\}=\{i\in N \mid b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}=\theta^{*}\}\). Finally, to see that \(I^{*}\equiv\{i\in N \mid b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}=\theta^{*}\}\subseteq P(x^{*})\cup\{i\in N \mid b_{i}=\theta^{*}\}\), observe that if \(v\in I^{*}\setminus P(x^{*})\), then \(\theta^{*}=b_{v}-\sum_{j\in M}a_{vj}x^{*}_{vj}=b_{v}\).

(c): Assume that \(a>0\) and \((x^{*},\theta^{*})\) is optimal for (11a)–(11d). We first prove, by contradiction, that \(P(x^{*})\subseteq I(x^{*})\). So, assume that \(P(x^{*})\setminus I(x^{*})\neq\emptyset\) and \(k\in P(x^{*})\setminus I(x^{*})\); in particular, \(k\notin I(x^{*})\), and part (b) and feasibility for (11b) assure that \(h\equiv\theta^{*}-[b_{k}-\sum_{j\in M}a_{kj}x^{*}_{kj}]>0\). Also, as \(k\in P(x^{*})\), there must exist some \(q\in M\) with \(x^{*}_{kq}>0\). Let \(\hat{x}\in\mathcal{X}\) coincide with \(x^{*}\) except that \(x^{*}_{kq}\) is decreased by \(\epsilon\in(0,\frac{h}{a_{kq}})\) and this quantity is equally distributed to \(x^{*}_{iq}\) for \(i\in I(x^{*})\). It follows from \(a>0\) and part (b) that

So, \(b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij}<\theta^{*}\) for each \(i\in N\) and \(\hat{\theta}\equiv(\max_{i\in N}\{b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij}\})_{+}\) satisfies \(0\leq \hat{\theta}<\theta^{*}\). As \((\hat{x},\hat{\theta})\) is feasible for (11a)–(11d) with objective value \(\hat{\theta}< \theta^{*}\), we get a contradiction to the optimality of \((x^{*},\theta^{*})\).

By the above paragraph, if \(i\notin I(x^{*})\), then \(i\notin P(x^{*})\) and \(\theta^{*} > b_{i} - \sum_{j \in M} a_{ij} x_{ij}^{*} = b_{i}\), assuring that \(b_{i}\neq\theta^{*}\). Consequently, \(\{i\in N \mid b_{i}=\theta^{*}\}\subseteq I(x^{*})\). Combining this conclusion with the above paragraph and part (b) implies that \(I(x^{*})=P(x^{*})\cup\{i\in N \mid b_{i}=\theta^{*}\}\).

We next prove, again by contradiction, that \(P(x^{*})\) is consecutive. Assume that \(u\in N\setminus P(x^{*})\) while \(u+1\in P(x^{*})\). As we established that \(P(x^{*})\subseteq I(x^{*})\), \(u+1\in I(x^{*})\). By \(u+1\in I(x^{*})\) combined with part (b), by \(u+1\in P(x^{*})\) combined with \(a>0\), by the strict version of (5), and by \(u\notin P(x^{*})\),

$$\theta^*=b_{u+1}-\sum_{j\in M}a_{u+1,j}x^*_{u+1,j}<b_{u+1}<b_u=b_u- \sum_{j\in M}a_{uj}x^*_{uj}, $$

implying that \((x^{*},\theta^{*})\) is infeasible for (11a)–(11d) and thereby establishing a contradiction. To prove that \(I(x^{*})\) is consecutive, assume for the sake of deriving a contradiction that \(u\in N\setminus I(x^{*})\) and \(u+1\in I(x^{*})\). As we established \(P(x^{*})\subseteq I(x^{*})\), necessarily \(u\notin P(x^{*})\). It then follows that

$$\theta^* > b_u - \sum_{j \in M} a_{uj} x_{uj}^* = b_u > b_{u+1} \geq b_{u+1} - \sum_{j \in M} a_{u+1,j} x_{u+1,j}^* = \theta^*, $$

establishing a contradiction.

We next prove, again by contradiction, that \(\sum_{i \in N} x_{ij}^{*} = C_{j}\) for each \(j\in M\). Assume that \(\sum_{i \in N} x_{iq}^{*} < C_{q}\) for some \(q\in M\). Let \(\hat{x}\in\mathcal{X}\) coincide with \(x^{*}\) except that \(\epsilon = C_{q} - \sum_{i\in N}x^{*}_{iq}\) is equally distributed among all the \(x^{*}_{iq}\)’s. It then follows from \(a>0\) that for each \(i\in N\), \(b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij} < b_{i} - \sum_{j\in M}a_{ij}x^{*}_{ij}\leq\theta^{*}\) and therefore \(\hat{\theta}\equiv[\max_{i\in N}\{b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij}\}]_{+}\) satisfies \(0\leq \hat{\theta}<\theta^{*}\). So, \((\hat{x},\hat{\theta})\) is feasible for (11a)–(11d) with objective value \(\hat{\theta}<\theta^{*}\), yielding a contradiction to the optimality of \((x^{*},\theta^{*})\).

(d): Assume that \((x',\theta^{*})\) is optimal for (11a)–(11d). Let \(D\equiv\{i\in N \mid b_{i}\leq\theta^{*}\}\) and \(q\equiv|D|\). Evidently, the \(x'_{ij}\)’s for \((i,j)\in D\times M\) can be reduced to 0 without affecting feasibility or optimality for (11a)–(11d). Thus, it can be assumed that \(x'_{ij}=0\) for all \((i,j)\in D\times M\). It follows that constraints (11b) for \(i\in D\) and variables \(x_{ij}\) for \((i,j)\in D\times M\) can be dropped from LP (11a)–(11d), and each optimal solution of the reduced problem corresponds to an optimal solution of (11a)–(11d) itself (by appropriately adding zero variables). The reduced LP has \(n+m-q\) constraints and \(m(n-q)+1\) variables. Adding slack and surplus variables results in a standard-form LP with \(m(n-q)+1+n+m-q\) nonnegative variables and \(n+m-q\) equality constraints whose constraint matrix has full row rank. This LP has a basic optimal solution, say \((\hat{x},\hat{\theta}=\theta^{*})\), with at most \(m+n-q\) nonzero variables, one of which is \(\hat{\theta}\). As \(b_{i}>\theta^{*}=\hat{\theta}\) for \(i\in N\setminus D\), feasibility for (11b) implies that \(|\{j\in M \mid \hat{x}_{ij}>0 \}|\geq1\) for each \(i\in N\setminus D\). As \(|N\setminus D|+1\geq n-q+1\) positive variables out of at most \(m+n-q\) were accounted for, it follows that

$$\sum_{i\in N\setminus D}\bigl[\bigl|\{j\in M \mid \hat {x}_{ij}>0 \}\bigr|- 1\bigr]\leq(m+n-q)-(n-q+1)=m-1. $$

Augmenting \(\hat{x}\) with the zero variables corresponding to \((i,j)\in D\times M\) yields an optimal solution \((x^{*},\theta^{*})\) of (11a)–(11d) with the properties asserted in (d). □
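For completeness, LP (11a)–(11d) as used throughout this appendix (minimize \(\theta\) subject to \(b_{i}-\sum_{j\in M}a_{ij}x_{ij}\leq\theta\), \(\sum_{i\in N}x_{ij}\leq C_{j}\), \(x\geq0\), \(\theta\geq0\)) can be solved directly with an off-the-shelf LP solver. The sketch below assumes that formulation and scipy's `linprog`; all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def solve_minmax_damage(b, a, C):
    """Minimize theta s.t. b_i - sum_j a_ij x_ij <= theta,
    sum_i x_ij <= C_j, x >= 0, theta >= 0 (the shape of LP (11a)-(11d)).
    Decision vector: x flattened row-wise, followed by theta."""
    n, m = a.shape
    nv = n * m + 1
    c = np.zeros(nv)
    c[-1] = 1.0                                  # objective: theta
    # Site constraints: -sum_j a_ij x_ij - theta <= -b_i.
    A_site = np.zeros((n, nv))
    for i in range(n):
        A_site[i, i * m:(i + 1) * m] = -a[i]
        A_site[i, -1] = -1.0
    # Supply constraints: sum_i x_ij <= C_j.
    A_sup = np.zeros((m, nv))
    for j in range(m):
        A_sup[j, j:n * m:m] = 1.0
    res = linprog(c, A_ub=np.vstack([A_site, A_sup]),
                  b_ub=np.concatenate([-np.asarray(b, float), C]),
                  bounds=[(0, None)] * nv)       # x >= 0 and theta >= 0
    return res.x[:n * m].reshape(n, m), res.x[-1]
```

For instance, with two sites (\(b=(4,2)\)), a single resource of unit effectiveness and supply 3, the optimum splits the supply so that the residual damages are equalized at \(\theta^{*}=1.5\), illustrating the equalization structure established in part (a).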

Golany, B., Goldberg, N. & Rothblum, U.G. Allocating multiple defensive resources in a zero-sum game setting. Ann Oper Res 225, 91–109 (2015). https://doi.org/10.1007/s10479-012-1196-0
