Abstract
This paper investigates the problem of allocating multiple defensive resources to protect multiple sites against possible attacks by an adversary. The effectiveness of the resources in reducing potential damage is assumed to vary across resources and sites, and the availability of the resources is constrained. The problem is formulated as a two-person zero-sum game with piecewise linear utility functions and polyhedral action sets. The utility functions are linearized in order to reduce the computation of the game's Nash equilibria to the solution of a pair of linear programs (LPs). This reduction reveals structural properties of Nash equilibrium allocations, in particular, monotonicity of these allocations with respect to the amounts of available resources. Finally, allocation problems in non-competitive settings are examined (i.e., situations where the attacker chooses its targets independently of the actions taken by the defender) and the structure of solutions in such settings is compared to that of Nash equilibria.
References
Berkovitz, L. D. (2002). Convexity and optimization in ℝn. New York: Wiley.
Bier, V. M., Haphuriwat, N., Menoyo, J., Zimmerman, R., & Culpen, A. M. (2008). Optimal resource allocation for defense of targets based on differing measures of attractiveness. Risk Analysis, 28(3), 763–770.
Blackett, D. W. (1954). Some Blotto games. Naval Research Logistics, 1(1), 55–60.
Canbolat, P., Golany, B., Mund, I., & Rothblum, U. G. (2012). A stochastic competitive R&D race where “winner takes all”. Operations Research, 60(3), 700–715.
Charnes, A. (1953). Constrained games and linear programming. Proceedings of the National Academy of Sciences, 39, 639–641.
Cottle, R. W., Johnson, E., & Wets, R. (2007). George B. Dantzig (1914–2005). Notices of the American Mathematical Society, 54, 344–362.
Dantzig, G. B. (1957). Discrete-variable extremum problems. Operations Research, 5(2), 266–277.
Franke, J., & Öztürk, T. (2009). Conflict networks. Technical report, Ruhr-University Bochum, Department of Economics.
Golany, B., Kaplan, E. H., Marmur, A., & Rothblum, U. G. (2009). Nature plays with dice—terrorists do not: allocating resources to counter strategic versus probabilistic risks. European Journal of Operational Research, 192(1), 198–208.
Katoh, N., & Ibaraki, T. (1998). Resource allocation problems. In D. Z. Du & P. M. Pardalos (Eds.), Handbook of combinatorial optimization (pp. 159–260). Dordrecht: Kluwer Academic.
Levitin, G., & Hausken, K. (2009). Intelligence and impact contests in systems with redundancy, false targets, and partial protection. Reliability Engineering and System Safety, 94, 1927–1941.
Luss, H. (1992). Minimax resource allocation problems: optimization and parametric analysis. European Journal of Operational Research, 60, 76–86.
Luss, H. (1999). On equitable resource allocation problems: a lexicographic minimax approach. Operations Research, 47, 361–378.
Luss, H. (2012). Equitable resource allocation: models, algorithms, and applications. New York: Wiley.
Martin, D. H. (1975). On the continuity of the maximum in parametric linear programming. Journal of Optimization Theory and Applications, 17(3–4), 205–210.
Papadimitriou, C. H. (2001). Algorithms, games and the internet. In STOC’01, proceedings of the thirty-third annual ACM symposium on theory of computing.
Powell, R. (2007). Defending against terrorist attacks with limited resources. American Political Science Review, 101, 527–541.
Roberson, B. (2006). The Colonel Blotto game. Economic Theory, 29(1), 1–24.
Wolfe, P. (1956). Determinateness of polyhedral games. In H. W. Kuhn & A. W. Tucker (Eds.), Linear inequalities and related systems. Princeton: Princeton University Press.
Acknowledgements
The authors would like to thank Pelin Canbolat, Edward H. Kaplan, and Hanan Luss for comments. B. Golany and U.G. Rothblum were supported in part by the Daniel Rose Technion-Yale Initiative for Research on Homeland Security and Counter-Terrorism. N. Goldberg was supported in part by the Daniel Rose Technion-Yale Initiative for Research on Homeland Security, and Counter-Terrorism, the Center for Absorption in Science of the Ministry of Immigrant Absorption and the Council of Higher Education, State of Israel.
Published posthumously. Uriel G. Rothblum passed away unexpectedly on March 26, 2012, while this paper was under review.
Appendices
Appendix A: A minmax proposition for zero-sum bilinear games
The following records a classic characterization of Nash equilibria for two-person, zero-sum games with bilinear payoffs and with action sets that are polytopes (see Charnes 1953 and Wolfe 1956). It is included for the sake of completeness.
Proposition A
Suppose \(X\subseteq\mathbb{R}^{n}\) and \(Y\subseteq\mathbb{R}^{m}\) are polytopes and \(Q\in\mathbb{R}^{m\times n}\). Consider the game where \(X\) is the set of options of player I, \(Y\) is the set of options of player II, and upon selection of \(x\in X\) and \(y\in Y\) the payoff of player II to player I is \(y^{\top}Qx\). Then
(a) There exists a Nash equilibrium, and \(\max_{x\in X}\min_{y\in Y} y^{\top}Qx=\min_{y\in Y}\max_{x\in X} y^{\top}Qx\).
(b) \((x^{*},y^{*})\) is a Nash equilibrium if and only if
$$ x^{*}\in\operatorname{argmax}_{x\in X}\Bigl[\min_{y\in Y}\ y^{\top}Qx\Bigr] $$(20)
and
$$ y^{*}\in\operatorname{argmin}_{y\in Y}\Bigl[\max_{x\in X}\ y^{\top}Qx\Bigr]. $$(21)
(c) Suppose \(X=\{x\in\mathbb{R}^{n}\mid Ax\leq a\}\) and \(Y=\{y\in\mathbb{R}^{m}\mid By\geq b\}\), where \((A,a)\in\mathbb{R}^{p\times n}\times\mathbb{R}^{p}\) and \((B,b)\in\mathbb{R}^{q\times m}\times\mathbb{R}^{q}\) (with \(p\) and \(q\) positive integers). Then:
(i) \(x^{*}\) satisfies (20) if and only if for some \(\lambda^{*}\in\mathbb{R}^{q}\), \((x^{*},\lambda^{*})\) solves the LP
$$ \max_{x,\lambda}\ b^{\top}\lambda \quad\text{subject to}\quad Ax\leq a,\ Qx=B^{\top}\lambda,\ \lambda\geq0. $$(22)
(ii) \(y^{*}\) satisfies (21) if and only if for some \(\mu^{*}\in\mathbb{R}^{p}\), \((y^{*},\mu^{*})\) solves the LP
$$ \min_{y,\mu}\ a^{\top}\mu \quad\text{subject to}\quad By\geq b,\ Q^{\top}y=A^{\top}\mu,\ \mu\geq0. $$(23)
(iii) The LPs in (22) and (23) are duals of each other and their common optimal objective value equals \(\max_{x\in X}\min_{y\in Y} y^{\top}Qx=\min_{y\in Y}\max_{x\in X} y^{\top}Qx\).
Proof
Consider the representation of X and Y given in (c). As X and Y are compact, continuity arguments show that the maxima and minima in (a) are well defined (and there is no need to use sup's and inf's). Further, standard LP duality shows that for each \(x\in\mathbb{R}^{n}\),
$$ \min_{y\in Y} y^{\top}Qx=\max\bigl\{b^{\top}\lambda \;\big|\; B^{\top}\lambda=Qx,\ \lambda\geq0\bigr\}, $$
and for each \(y\in\mathbb{R}^{m}\),
$$ \max_{x\in X} y^{\top}Qx=\min\bigl\{a^{\top}\mu \;\big|\; A^{\top}\mu=Q^{\top}y,\ \mu\geq0\bigr\}, $$
proving (c). It further follows that the maxmin and minmax of (a) equal the optimal objective values of the LPs in (22) and (23), respectively. As the latter are dual LPs with finite optimal objective values, (a) follows. Finally, part (b) now follows from standard arguments. □
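To make part (c) concrete, the following sketch computes the value and a maximin strategy in the special case where X and Y are probability simplices (the classical matrix-game setting); it solves the inner minimization in (20) via the duality argument above, and the function name `game_value` is ours, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def game_value(Q):
    """Value and a maximin strategy x* of the zero-sum game with payoff
    y^T Q x, where x and y range over probability simplices.

    For fixed x, min_y y^T Q x over the simplex equals min_i (Qx)_i, so
    player I solves: max v subject to (Qx)_i >= v for all i, sum(x) = 1."""
    m, n = Q.shape
    # variables: x (n entries) followed by the scalar v
    c = np.zeros(n + 1)
    c[-1] = -1.0                                # linprog minimizes, so max v
    A_ub = np.hstack([-Q, np.ones((m, 1))])     # v - (Qx)_i <= 0 for all i
    b_ub = np.zeros(m)
    A_eq = np.zeros((1, n + 1))
    A_eq[0, :n] = 1.0                           # sum_i x_i = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]   # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:n]

# Matching pennies: value 0 with x* = (1/2, 1/2)
v, x = game_value(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```

The same LP applies to general polytopal X and Y after replacing the simplex constraints with \(Ax\leq a\) and \(By\geq b\), as in (22).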
Remark
When X and Y are unbounded polyhedra, the maxmin and minmax in part (a) of Proposition A are supinf and infsup, respectively. Still, if one of these expressions is finite, then the sup’s and inf’s can be replaced by max’s and min’s, respectively, and the conclusions and proof of Proposition A hold. The next example has unbounded X and Y for which Proposition A does not apply.
Example 3
Let
$$ X=\biggl\{\binom{x}{1}\in\mathbb{R}^{2} \;\Big|\; x\geq0\biggr\},\qquad Y=\biggl\{\binom{1}{y}\in\mathbb{R}^{2} \;\Big|\; y\geq0\biggr\},\qquad Q=\begin{pmatrix}1&0\\0&-1\end{pmatrix}. $$
As \(\binom{1}{ y}^{\top}Q\binom{x}{1}=x-y\), we have that the supinf equals −∞ while the infsup equals +∞.
Appendix B: Proof of Proposition 1
Proof
(a): Consider \(x\in\mathcal{X}(C)\). If \(\sum_{j\in M}a_{ij}x_{ij}\leq b_{i}\) for each \(i\in N\), then \(\hat{\theta}(\pi)\leq\sum_{i\in N}\pi_{i}(b_{i}-\sum_{j\in M}a_{ij}x_{ij})_{+}=\sum_{i\in N}\pi_{i}(b_{i}-\sum_{j\in M}a_{ij}x_{ij})\); taking the minimum over \(x\) satisfying (7b)–(7d) yields \(\hat{\theta}(\pi)\leq\theta^{*}(\pi)\). Next assume that \(\hat{x}\) is optimal for (6). Let \(x'\) coincide with \(\hat{x}\) except that for each \(i\) with \(b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij}<0\), the \(\hat{x}_{ij}\)'s are reduced so that \(b_{i}-\sum_{j\in M}a_{ij}x'_{ij}=0\). It then follows that \(x'\in\mathcal{X}(C)\) and \(0\leq b_{i}-\sum_{j\in M}a_{ij}x'_{ij}=(b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij})_{+}\) for each \(i\in N\). In particular, \(x'\) is feasible for (7a)–(7d) and \(\hat{\theta}(\pi)=\sum_{i\in N}\pi_{i}(b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij})_{+}=\sum_{i\in N}\pi_{i}(b_{i}-\sum_{j\in M}a_{ij}x'_{ij})\geq\theta^{*}(\pi)\).
(b): If \(x\) is feasible for (7a)–(7d), then it is (trivially) feasible for (6) and (by feasibility for (7c))
$$ \sum_{i\in N}\pi_{i}\Bigl(b_{i}-\sum_{j\in M}a_{ij}x_{ij}\Bigr)_{+}=\sum_{i\in N}\pi_{i}\Bigl(b_{i}-\sum_{j\in M}a_{ij}x_{ij}\Bigr). $$
As \(\hat{\theta}(\pi)=\theta^{*}(\pi)\), \(x\) is optimal for (6) if and only if it is optimal for (7a)–(7d). □
Appendix C: Proof of Proposition 2
Proof
Write LP (7a)–(7d) with equality constraints replacing inequalities by adding nonnegative slack variables \(z_{j}\), for \(j\in M\), and \(s_{i}\), for \(i\in N\), to the corresponding \(m\) constraints of (7b) and the constraints of (7c), respectively. The coefficient matrix has full rank, and standard results from LP assure that this LP has a basic optimal solution \((x',s,z)\in\mathbb{R}^{n\times m}\times\mathbb{R}^{n}\times\mathbb{R}^{m}\) with at most \(m+n\) variables that are strictly positive; in particular, \(x'\) is optimal for (7a)–(7d). For each \(i\in L(x')\cup\nu(x')\) we have \(s_{i}>0\). As \(n=|U(x')|+|L(x')|+|\nu(x')|\), it follows that
$$ \bigl|\bigl\{(i,j)\in N\times M \mid x'_{ij}>0\bigr\}\bigr|+\bigl|\{j\in M \mid z_{j}>0\}\bigr|\leq m+n-\bigl|L(x')\bigr|-\bigl|\nu(x')\bigr|=m+\bigl|U(x')\bigr|. $$
Further, as \(|\{j\in M \mid x'_{ij}>0\}|\geq1\) for each \(i\in U(x')\cup\nu(x')\), it follows that
$$ \bigl|\bigl\{(i,j)\in N\times M \mid x'_{ij}>0\bigr\}\bigr|\geq\bigl|U(x')\bigr|+\bigl|\nu(x')\bigr|. $$
So, \(|\nu(x')|\leq m\). □
Appendix D: Proof of Proposition 4
Proof
For \(x\in\mathcal{X}\) let \(N^{-}(x)\equiv\{i\in N\mid b_{i}-\sum_{j\in M}a_{ij}x_{ij}<0\}\), and \(N^{\oplus}(x)\equiv N\setminus N^{-}(x)\).
(a) and (b): Assume that (x ∗,w ∗) is a Nash equilibrium of the linearized game, i.e.,
It follows from the left-hand side of (24) and the explicit expression for \(\hat{u}_{W}(\cdot,\cdot)\) in (8) that \(w^{*}_{i}=0\) for each i∈N −(x ∗); consequently, \(\hat{u}_{W}(x^{*},w^{*})=u_{W}(x^{*},w^{*})\). As \(u_{W}(x,w)\geq \hat{u}_{W}(x,w)\) for each \((x,w)\in\mathcal {X}\times\mathcal{W}\), we conclude that for each \(x\in\mathcal{X}\)
Consider any \(w\in\mathcal{W}\). Define \(\hat{w}=\hat{w}(x^{*})\in\mathbb{R}^{n}\) by
$$ \hat{w}_{i}=\begin{cases} w_{i} & \text{if } i\in N^{\oplus}(x^{*}),\\ 0 & \text{if } i\in N^{-}(x^{*}). \end{cases} $$
As \(u_{W}(x^{*},w)=\sum_{i\in N}w_{i}(b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij})_{+} =\sum_{i\in N}\hat {w}_{i}(b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij})=\hat{u}_{W}(x^{*},\hat {w})\), the first equality of (24) and \(u_{W}(x^{*},w^{*})=\hat{u}_{W}(x^{*},w^{*})\) imply that
By (25)–(26), (x ∗,w ∗) is a Nash equilibrium of the original game, and \(V=u_{W}(x^{*},w^{*})=\hat {u}_{W}(x^{*},w^{*})=\hat {V}\). Of course, V≥0 as u W ≥0.
(c): Assume that V>0. In view of (a), it suffices to show that a Nash equilibrium (x ∗,w ∗) of the original game is a Nash equilibrium of the linearized game. As \(\sum_{i\in N} w^{*}_{i}(b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij})=u_{W}(x^{*},w^{*})=\max_{w\in\mathcal{W}} u_{W}(x^{*},w)=V>0\),
$$ b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}=V \quad\text{for each } i\in N \text{ with } w^{*}_{i}>0, $$(27)
and therefore \(\hat{u}_{W}(x^{*},w^{*})=\sum_{i\in N}w^{*}_{i}V\). Consider \(w\in\mathcal{W}\). It follows from \(u_{W}(x^{*},w^{*})=\sum_{i\in N}w^{*}_{i}(V)_{+}=\sum_{i\in N}w^{*}_{i}V=\hat{u}_{W}(x^{*},w^{*})\), (4), and \(u_{W}(x^{*},w)\geq\hat{u}_{W}(x^{*},w)\) that
$$ \hat{u}_{W}\bigl(x^{*},w\bigr)\leq u_{W}\bigl(x^{*},w\bigr)\leq u_{W}\bigl(x^{*},w^{*}\bigr)=\hat{u}_{W}\bigl(x^{*},w^{*}\bigr). $$
To complete the proof we show that \(\hat{u}_{W}(x^{*},w^{*})\leq\hat{u}_{W}(x,w^{*})\) for each \(x\in\mathcal{X}\). To do this, we argue that if \(x^{*}_{ij}>0\) and \(w^{*}_{i}>0\), then \(w^{*}_{i}a_{ij}\geq w^{*}_{s}a_{sj}\) for each \(s\in N\). This inequality is trivial if either \(w^{*}_{s}=0\) or \(a_{sj}=0\). Alternatively, if \(w^{*}_{s}>0\), \(a_{sj}>0\) and \(w^{*}_{i}a_{ij}<w^{*}_{s}a_{sj}\), then (27) and \(V>0\) assure that \(b_{s}-\sum_{j\in M}a_{sj}x^{*}_{sj}=V>0\), and a shift of a small amount of resource \(j\) from \(x^{*}_{ij}\) to \(x^{*}_{sj}\) will result in an allocation \(x'\) with \(s\in N^{\oplus}(x')\) and \(u_{W}(x',w^{*})<u_{W}(x^{*},w^{*})\), contradicting the first equality in (4). So, for each \(j\), \(\{i\in N\mid w^{*}_{i}>0 \text{ and } x^{*}_{ij}>0\}\subseteq\operatorname{argmax}_{i\in N}\{w^{*}_{i}a_{ij}\}\); hence, a standard result about the knapsack problem (Dantzig 1957) assures that \(x^{*}\in\operatorname{argmax}_{x\in\mathcal{X}}[\sum_{i\in N}\sum_{j\in M}w^{*}_{i}a_{ij}x_{ij}]\). Consequently,
$$ \hat{u}_{W}\bigl(x^{*},w^{*}\bigr)=\sum_{i\in N}w^{*}_{i}b_{i}-\max_{x\in\mathcal{X}}\sum_{i\in N}\sum_{j\in M}w^{*}_{i}a_{ij}x_{ij}\leq\sum_{i\in N}w^{*}_{i}b_{i}-\sum_{i\in N}\sum_{j\in M}w^{*}_{i}a_{ij}x_{ij}=\hat{u}_{W}\bigl(x,w^{*}\bigr) \quad\text{for each } x\in\mathcal{X}. $$
□
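The continuous-knapsack step invoked above via Dantzig (1957) is simple enough to state as code. The sketch below computes a defender best response to fixed attacker weights \(w\), assuming the inner problem is \(\max\sum_{i}\sum_{j}(w_{i}a_{ij})x_{ij}\) subject to \(\sum_{i}x_{ij}\leq C_{j}\) and \(x\geq0\), which separates across resources; the function name is ours.

```python
import numpy as np

def knapsack_best_response(w, a, C):
    """Maximize sum_ij (w_i * a_ij) x_ij s.t. sum_i x_ij <= C_j, x >= 0.

    The problem separates by resource j: all of C_j goes to a site i
    maximizing w_i * a_ij, the greedy rule for the continuous knapsack
    (Dantzig 1957), here with a single constraint per resource."""
    n, m = a.shape
    x = np.zeros((n, m))
    for j in range(m):
        i_star = int(np.argmax(w * a[:, j]))  # most valuable site for j
        x[i_star, j] = C[j]
    return x
```

This mirrors the structure of the argument above: at equilibrium, any site receiving a positive amount of resource \(j\) must lie in \(\operatorname{argmax}_{i}\{w^{*}_{i}a_{ij}\}\).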
Appendix E: Proof of Proposition 5
Proof
(a) and (b): Except for the equality in (13), (a) follows from Proposition 3(b′), after substituting the explicit expression of \(\hat{u}\). Next, (b) follows from the application of Proposition A of Appendix A with

(the 0’s in the last column of (29) are, respectively, in ℝm−2, ℝ, ℝm×(n−2), ℝm−2 and ℝ),

Next, the equality in (13) follows from the fact that for each v∈ℝn, \(\max_{w\in\mathcal{W}} \sum_{i\in N} v_{i}w_{i} = \max_{i\in N} (v_{i})_{+}\).
(c): By (a)–(b), there exists \(\hat {x}\in\mathcal{X}\) satisfying (13). Let x ∗ coincide with \(\hat {x}\) except that for each i with \(b_{i}-\sum_{j\in M}a_{ij}\hat {x}_{ij} < 0\), the \(\hat {x}_{ij}\)’s are reduced so that \(b_{i}-\sum_{j\in M}a_{ij} x^{*}_{ij} = 0\). Then \(\max_{i\in N} (b_{i}-\sum_{j\in M}a_{ij}\hat {x}_{ij})_{+}=\max _{i\in N} (b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij})_{+}\), implying that x ∗ satisfies (13) and \(b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\geq0\) for each i∈N. □
Appendix F: Proof of Lemma 1
Proof
(a): Assume that \((x',\theta^{*})\) is optimal for (11a)–(11d) and let \(u\equiv\max\{i\in N\mid b_{i}\geq\theta^{*}\}\) (as \((0,\theta=b_{1})\) is feasible for (11a)–(11d), \(b_{1}\geq\theta^{*}\) and therefore \(u\) is well-defined). By the feasibility of \(x'\) for (11a)–(11d), \(b_{i}-\sum_{j\in M}a_{ij}x'_{ij}\leq\theta^{*}\) for all \(i\in N\). Let \(x^{*}\) be defined in the following way: (i) for \(i\leq u\), reduce the \(x'_{ij}\)'s so that \(\theta^{*}=b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\) while maintaining nonnegativity (this is possible as \(b_{i}\geq\theta^{*}\) for \(i=1,\dots,u\)), and (ii) for \(i>u\), set \(x^{*}_{ij}=0\) for all \(j\in M\). For each \(j\in M\), \(\sum_{i\in N}x^{*}_{ij}\leq\sum_{i\in N}x'_{ij}\leq C_{j}\), so \(x^{*}\in\mathcal{X}\). Further, since \(b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}=b_{i}<\theta^{*}\) for all \(i>u\) (following from the definition of \(u\) and (5)), \((x^{*},\theta^{*})\) is feasible for (11a)–(11d) and \(I(x^{*})=[u]\). Also, since \((x^{*},\theta^{*})\) has objective value that equals the optimal one, \((x^{*},\theta^{*})\) is optimal for (11a)–(11d). Following from (the strict version of) (5) we have that \(b_{n}<\cdots<b_{u+1}<\theta^{*}\leq b_{u}<\cdots<b_{1}\) and therefore \(\{i\in N\mid b_{i}=\theta^{*}\}\in\{\{u\},\emptyset\}\). For \(i=1,\dots,u-1\), \(b_{i}>\theta^{*}\) and \(\theta^{*}=b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\) imply that \(x^{*}_{ij}>0\) for some \(j\), and therefore \(i\in P(x^{*})\). Similarly, if \(\theta^{*}<b_{u}\), then \(u\in P(x^{*})\) and \(\{i\in N\mid b_{i}=\theta^{*}\}=\emptyset\). In the remaining case, \(\{u\}=\{i\in N\mid b_{i}=\theta^{*}\}\). Thus, as \(x^{*}_{ij}=0\) for all \(i>u\), it follows that \(I(x^{*})=[u]\subseteq P(x^{*})\cup\{i\in N\mid b_{i}=\theta^{*}\}\subseteq[u]\). Hence, \(P(x^{*})\cup\{i\in N\mid b_{i}=\theta^{*}\}=[u]=I(x^{*})\); in particular, \(I(x^{*})\) is consecutive. We also conclude that if \(\{i\in N\mid b_{i}=\theta^{*}\}=\emptyset\), then \(P(x^{*})=I(x^{*})=[u]\), and alternatively, if \(\{i\in N\mid b_{i}=\theta^{*}\}=\{u\}\), then \(P(x^{*})\in\{[u],[u-1]\}\). So, in either case \(P(x^{*})\) is consecutive.
(b): Feasibility of \((x^{*},\theta^{*})\) for (11a)–(11d) assures that \(\theta^{*}\geq\hat{\theta}\equiv\max_{i\in N}\{b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\}\). Further, the inequality must hold as an equality; otherwise, as \(\theta^{*}>0\), \(\hat{\theta}_{+}=\max\{\hat{\theta},0\}<\theta^{*}\) and \((x^{*},\hat{\theta}_{+})\) would be feasible for (11a)–(11d) with objective value \(\hat{\theta}_{+}<\theta^{*}\), yielding a contradiction to the optimality of \((x^{*},\theta^{*})\). As \(\theta^{*}=\max_{i\in N}\{b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\}\), (15) implies that \(I(x^{*})=\operatorname{argmax}_{i\in N}\{b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\}=\{i\in N\mid b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}=\theta^{*}\}\). Finally, to see that \(I^{*}\equiv\{i\in N\mid b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}=\theta^{*}\}\subseteq P(x^{*})\cup\{i\in N\mid b_{i}=\theta^{*}\}\), observe that if \(v\in I^{*}\setminus P(x^{*})\), then \(\theta^{*}=b_{v}-\sum_{j\in M}a_{vj}x^{*}_{vj}=b_{v}\).
(c): Assume that a>0 and (x ∗,θ ∗) is optimal for (11a)–(11d). We first prove, by contradiction, that P(x ∗)⊆I(x ∗). So, assume that P(x ∗)∖I(x ∗)≠∅ and k∈P(x ∗)∖I(x ∗); in particular, k∉I(x ∗) and part (b) and feasibility for (11b) assure that \(h\equiv\theta^{*}-[b_{k}-\sum_{j\in M}a_{kj}x^{*}_{kj}]>0\). Also, as k∈P(x ∗), there must exist some q∈M with \(x^{*}_{kq}>0\). Let \(\hat {x}\in\mathcal{X}\) coincide with x ∗ except that \(x^{*}_{kq}\) is decreased by \(\epsilon\in(0,\frac{h}{a_{kq}})\) and this quantity is equally distributed to \(x^{*}_{iq}\) for i∈I(x ∗). It follows from a>0 and part (b) that
$$ b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij}=\theta^{*}-\frac{\epsilon a_{iq}}{|I(x^{*})|}<\theta^{*}\ \ \text{for } i\in I(x^{*}),\qquad b_{k}-\sum_{j\in M}a_{kj}\hat{x}_{kj}=\theta^{*}-h+\epsilon a_{kq}<\theta^{*}. $$
So, \(b_{i}-\sum_{j\in M}a_{ij}\hat {x}_{ij}<\theta^{*}\) for each i∈N and \(\hat {\theta}\equiv(\max_{i\in N}\{b_{i}-\sum_{j\in M}a_{ij}\hat {x}_{ij}\})_{+}\) satisfies \(0\leq \hat {\theta}<\theta^{*}\). As \((\hat {x},\hat {\theta})\) is feasible for (11a)–(11d) with objective value \(\hat {\theta}< \theta^{*}\), we get a contradiction to the optimality of (x ∗,θ ∗).
By the above paragraph, if \(i\notin I(x^{*})\), then \(i\notin P(x^{*})\) and \(\theta^{*}>b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}=b_{i}\), assuring that \(b_{i}\neq\theta^{*}\). Consequently, \(\{i\in N\mid b_{i}=\theta^{*}\}\subseteq I(x^{*})\). Combining this conclusion with the above paragraph and part (b) implies that \(I(x^{*})=P(x^{*})\cup\{i\in N\mid b_{i}=\theta^{*}\}\).
We next prove, again by contradiction, that \(P(x^{*})\) is consecutive. Assume that \(u\in N\setminus P(x^{*})\) while \((u+1)\in P(x^{*})\). As we established that \(P(x^{*})\subseteq I(x^{*})\), \((u+1)\in I(x^{*})\). By \((u+1)\in I(x^{*})\) combined with part (b), by \((u+1)\in P(x^{*})\) combined with \(a>0\), by the strict version of (5), and by \(u\notin P(x^{*})\),
$$ \theta^{*}=b_{u+1}-\sum_{j\in M}a_{u+1,j}x^{*}_{u+1,j}<b_{u+1}<b_{u}=b_{u}-\sum_{j\in M}a_{uj}x^{*}_{uj}, $$
implying that \((x^{*},\theta^{*})\) is infeasible for (11a)–(11d) and thereby establishing a contradiction. To prove that \(I(x^{*})\) is consecutive, assume for the sake of deriving a contradiction that \(u\in N\setminus I(x^{*})\) and \(u+1\in I(x^{*})\). As we established \(P(x^{*})\subseteq I(x^{*})\), necessarily \(u\notin P(x^{*})\). It then follows that
$$ b_{u}-\sum_{j\in M}a_{uj}x^{*}_{uj}=b_{u}>b_{u+1}\geq b_{u+1}-\sum_{j\in M}a_{u+1,j}x^{*}_{u+1,j}=\theta^{*}, $$
establishing a contradiction.
We next prove, again by contradiction, that \(\sum_{i\in N}x^{*}_{ij}=C_{j}\) for each \(j\in M\). Assume that \(\sum_{i\in N}x^{*}_{iq}<C_{q}\) for some \(q\in M\). Let \(\hat{x}\in\mathcal{X}\) coincide with \(x^{*}\) except that \(\epsilon=C_{q}-\sum_{i\in N}x^{*}_{iq}\) is equally distributed among all the \(x^{*}_{iq}\)'s. It then follows from \(a>0\) that for each \(i\in N\), \(b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij}<b_{i}-\sum_{j\in M}a_{ij}x^{*}_{ij}\leq\theta^{*}\) and therefore \(\hat{\theta}\equiv[\max_{i\in N}\{b_{i}-\sum_{j\in M}a_{ij}\hat{x}_{ij}\}]_{+}\) satisfies \(0\leq\hat{\theta}<\theta^{*}\). So, \((\hat{x},\hat{\theta})\) is feasible for (11a)–(11d) with objective value \(\hat{\theta}<\theta^{*}\), yielding a contradiction to the optimality of \((x^{*},\theta^{*})\).
(d): Assume that \((x',\theta^{*})\) is optimal for (11a)–(11d). Let \(D\equiv\{i\in N\mid b_{i}\leq\theta^{*}\}\) and \(q\equiv|D|\). Evidently, the \(x'_{ij}\)'s for \((i,j)\in D\times M\) can be reduced to 0 without affecting feasibility or optimality for (11a)–(11d). Thus, it can be assumed that \(x'_{ij}=0\) for all \((i,j)\in D\times M\). It follows that constraints (11b) for \(i\in D\) and variables \(x_{ij}\) for \((i,j)\in D\times M\) can be dropped from LP (11a)–(11d) and each optimal solution of the reduced problem corresponds to an optimal solution of (11a)–(11d) itself (by appropriately adding zero variables). The reduced LP has \(n+m-q\) constraints and \(m(n-q)+1\) variables. Adding slack and surplus variables results in a standard-form LP with \(m(n-q)+1+n+m\) nonnegative variables and \(n+m-q\) equality constraints whose constraint matrix has full row-rank. This LP has a basic optimal solution, say \((\hat{x},\hat{\theta}=\theta^{*})\), with at most \(m+n-q\) nonzero variables, one of which is \(\hat{\theta}\). As \(b_{i}>\theta^{*}=\hat{\theta}\) for \(i\in N\setminus D\), the feasibility for (11b) implies that \(|\{j\in M\mid\hat{x}_{ij}>0\}|\geq1\) for each \(i\in N\setminus D\). As \(|N\setminus D|+1\geq n-q+1\) positive variables out of at most \(m+n-q\) were accounted for, it follows that
$$ \bigl|\bigl\{(i,j)\in(N\setminus D)\times M \mid \hat{x}_{ij}>0\bigr\}\bigr|\leq(m+n-q)-1=n+m-q-1. $$
Augmenting \(\hat {x}\) with the zero variables corresponding to (i,j)∈D×M yields an optimal solution (x ∗,θ ∗) of (11a)–(11d) with the properties asserted in (d). □
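The LP (11a)–(11d) analyzed throughout this appendix can also be solved directly with an off-the-shelf solver. The sketch below assumes the form used in the proofs, namely minimize \(\theta\) subject to \(b_{i}-\sum_{j\in M}a_{ij}x_{ij}\leq\theta\) for \(i\in N\), \(\sum_{i\in N}x_{ij}\leq C_{j}\) for \(j\in M\), \(x\geq0\) and \(\theta\geq0\); the function name is ours.

```python
import numpy as np
from scipy.optimize import linprog

def minmax_allocation(a, b, C):
    """Solve: min theta s.t. b_i - sum_j a_ij x_ij <= theta,
    sum_i x_ij <= C_j, x >= 0, theta >= 0 (LP (11a)-(11d) as assumed here)."""
    n, m = a.shape
    nv = n * m + 1                       # x flattened row-major, then theta
    c = np.zeros(nv)
    c[-1] = 1.0                          # minimize theta
    A_ub, b_ub = [], []
    for i in range(n):                   # b_i - sum_j a_ij x_ij <= theta
        row = np.zeros(nv)
        row[i * m:(i + 1) * m] = -a[i]
        row[-1] = -1.0
        A_ub.append(row)
        b_ub.append(-b[i])
    for j in range(m):                   # sum_i x_ij <= C_j
        row = np.zeros(nv)
        row[j:n * m:m] = 1.0
        A_ub.append(row)
        b_ub.append(C[j])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * nv)
    return res.x[-1], res.x[:n * m].reshape(n, m)
```

On small instances one can observe Lemma 1 directly: at an optimum, \(b_{i}-\sum_{j}a_{ij}x^{*}_{ij}\) is equalized over the sites receiving resources, and the optimal \(\theta\) is nonincreasing in each \(C_{j}\).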
Golany, B., Goldberg, N. & Rothblum, U.G. Allocating multiple defensive resources in a zero-sum game setting. Ann Oper Res 225, 91–109 (2015). https://doi.org/10.1007/s10479-012-1196-0