Abstract
The paper analyzes an environment in which several firms compete over the development of a project. Each firm decides how much to invest in the project while adhering to firm-specific lower and upper investment bounds. The completion time of the project by a firm has an exponential distribution whose rate depends linearly on the firm's investment. The firm that completes the project first collects all of its revenues, whereas the remaining firms earn nothing. The paper establishes the existence and uniqueness of both the Nash equilibrium and the globally optimal solution, provides explicit representations that are parametric in the interest rate, and constructs computationally efficient methods to solve these two problems. It also examines the sensitivity of the Nash equilibrium to marginal changes in the lower and upper bounds.
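To make the model concrete, the following minimal Monte Carlo sketch (not part of the paper; it assumes that firm \(i\)'s completion rate is \(\alpha_ix_i\), that the interest rate is \(\rho\), and that the investment \(x_i\) is paid up front, all consistent with the utility expression \(U_i(x)=x_i(\frac{R_i\alpha_i}{\alpha^Tx+\rho}-1)\) used in the Appendix) estimates each firm's expected discounted profit in the winner-takes-all race and compares it with that closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data (not from the paper): three firms.
R = np.array([10.0, 8.0, 6.0])       # rewards collected by the winner
alpha = np.array([1.0, 1.2, 0.9])    # investment-to-rate coefficients
x = np.array([2.0, 1.5, 1.0])        # chosen investments
rho = 0.1                            # interest (discount) rate

# Monte Carlo: firm i finishes at an Exp(alpha_i * x_i) time; the first
# finisher collects its reward, discounted to time zero; everyone pays x_i.
n_samples = 200_000
times = rng.exponential(1.0 / (alpha * x), size=(n_samples, len(R)))
winners = times.argmin(axis=1)
discount = np.exp(-rho * times.min(axis=1))
payoff_mc = np.array([
    R[i] * discount[winners == i].sum() / n_samples - x[i] for i in range(len(R))
])

# Closed form used throughout the Appendix: U_i(x) = x_i (R_i a_i / (a^T x + rho) - 1).
payoff_cf = x * (R * alpha / (alpha @ x + rho) - 1.0)
print(np.round(payoff_mc, 3), np.round(payoff_cf, 3))
```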
Notes
In fact, [1] showed that when \(B_i=\infty\) for \(i\in N\), in the unique Nash equilibrium, the investment of each firm \(i\) is always bounded by \(R_i/4\). Consequently, if the budget \(B_i\) of firm \(i\) exceeds \(R_i/4\) for every \(i\in N\), then the unique Nash equilibrium of the relaxed model is feasible for the model with the budget constraints and is therefore also a Nash equilibrium for the latter.
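One way to see this \(R_i/4\) bound is through the interior expression in (10): for every \(F>0\),
\[
\frac{F(R_i\alpha_i-F)}{R_i\alpha_i^{2}}\;\le\;\frac{(R_i\alpha_i/2)\,(R_i\alpha_i-R_i\alpha_i/2)}{R_i\alpha_i^{2}}\;=\;\frac{R_i}{4},
\]
with the maximum attained at \(F=R_i\alpha_i/2\).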
Uniqueness also follows from [3]; however, this paper develops a constructive argument that provides an efficient method to compute the Nash equilibrium in addition to proving its existence and uniqueness.
References
Canbolat, P.G., Golany, B., Mund, I., Rothblum, U.G.: A stochastic competitive R&D race where “winner takes all”. Oper. Res. 60, 700–715 (2012). doi:10.1287/opre.1120.1055
Gerchak, Y., Parlar, M.: Allocating resources to research and development projects in a competitive environment. IIE Trans. 31, 827–834 (1999)
Rosen, J.B.: Existence and uniqueness of equilibrium points for concave N-person games. Econometrica 33, 520–534 (1965)
Veinott, A.F.: Unpublished Lecture Notes in Supply-Chain Optimization-MS&E 361, Stanford University, HW 3, Problem 3 and its solution (2000)
Milgrom, P., Roberts, J.: Rationalizability, learning, and equilibrium in games with strategic complementarities. Econometrica 58, 1255–1277 (1990)
Acknowledgements
This research was partially supported by the Daniel Rose Yale University-Technion Initiative for Research on Homeland Security and Counter-Terrorism.
Additional information
Communicated by Boris Mordukhovich.
U.G. Rothblum deceased on March 12, 2012.
Appendix: Proofs
Proof of Theorem 3.1
If \(M\neq\emptyset\), then (9) is a quadratic equation in which the coefficient of \(z^0\) is negative and the coefficient of \(z^2\) is positive; hence, it has a unique positive root. Alternatively, if \(M=\emptyset\), then (9) is the linear equation \(z-(\sum_{i\in J}B_i\alpha_i+\sum_{i\in K}L_i\alpha_i+\rho)=0\), which has a unique (positive) root.
Consider a quadruple \((F,J,K,M)\in\mathcal{Q}\) and \(x^*\) defined by (10). Then (10) and (9) imply that \(\alpha^Tx^*+\rho=F\), which is relation (51).
To prove that \(x^*\) is a Nash equilibrium, it is enough to verify (4)–(8) for corresponding scalars \(\tau_1,\dots,\tau_n,\sigma_1,\dots,\sigma_n\). First observe that (10) and (b) assure \(L_i\le x^*_i\le B_i\) for \(i\in N\), verifying (5). For \(i\in J\), let \(\tau_i=0\) and \(\sigma_i=\frac{R_i\alpha_i(F-\alpha_iB_i)}{F^2}-1\). Then (7)–(8) are trivial and, by (b), \(R_i\alpha_i(F-\alpha_iB_i)\ge F^2\), implying that \(\sigma_i\ge0\); so, (6) holds. Next, by (51),
which implies (4). For \(i\in K\), let \(\sigma_i=0\) and \(\tau_i=1-\frac{R_i\alpha_i(F-\alpha_iL_i)}{F^2}\). Then (7)–(8) are trivial and, by (b), \(R_i\alpha_i(F-\alpha_iL_i)\le F^2\), implying that \(\tau_i\ge0\); so, (6) holds. By (51),
implying (4). For \(i\in M\), let \(\tau_i=\sigma_i=0\). Then (6)–(8) are trivial. Also, by (10), \(F-\alpha_ix_i^*=\frac{F^2}{R_i\alpha_i}\) and, by (51), \(\frac{R_i\alpha_i(\alpha^Tx^*-\alpha_ix_i^*+\rho)}{(\alpha^Tx^*+\rho)^2}=\frac{R_i\alpha_i(F-\alpha_ix^*_i)}{F^2}=1\), verifying (4). Finally, showing that \((F,J,K,M)\) satisfies (11)–(14) proves that the correspondence defined by (10) is one-to-one. Indeed, (51) verifies (11). Also, (10) and (b) imply that \(x^*_i=B_i\) for all \(i\in J\), \(x^*_i=L_i\) for all \(i\in K\) and \(L_i<x^*_i<B_i\) for all \(i\in M\). As \(J,K,M\) partition \(N\) (by (b)), \(L_i\le x^*_i\le B_i\) for each \(i\in N\), so (12)–(14) follow.
Next, consider a Nash equilibrium \(x^*\). To show that \((F,J,K,M)\) given by (11)–(14) belongs to \(\mathcal{Q}\), use the fact that \(x^*\) must satisfy (4)–(8) with some \(\tau_1,\dots,\tau_n,\sigma_1,\dots,\sigma_n\). For \(i\in M\), (14) and (7)–(8) assure that \(\tau_i=\sigma_i=0\); hence, by (4) and (11), \(\frac{R_i\alpha_i(F-\alpha_ix_i^*)}{F^2}=1\), which is relation (52).
Since \(x^*_i=B_i\) for \(i\in J\) and \(x^*_i=L_i\) for \(i\in K\), \(\sum_{i\in M}\alpha_ix^*_i=F-(\sum_{i\in J}B_i\alpha_i+\sum_{i\in K}L_i\alpha_i+\rho)\). Dividing (52) by \(R_i\alpha_i\) and summing over \(i\in M\) gives
So \(F\) satisfies (9), verifying Condition (a). To prove Condition (b), first observe that for \(i\in J\), (7) implies that \(\tau_i=0\); this and (4) imply that \(\frac{R_i\alpha_i(F-\alpha_ix_i^*)}{F^2}=1+\sigma_i\ge1\), so \(B_i=x_i^*\le\frac{F(R_i\alpha_i-F)}{R_i\alpha_i^2}\). For \(i\in K\), (8) implies that \(\sigma_i=0\); this together with (4) implies that \(\frac{R_i\alpha_i(F-\alpha_ix_i^*)}{F^2}=1-\tau_i\le1\), so \(L_i=x_i^*\ge\frac{F(R_i\alpha_i-F)}{R_i\alpha_i^2}\). For \(i\in M\), by (7) and (8), \(\tau_i=\sigma_i=0\); so \(L_i<x_i^*=\frac{F(R_i\alpha_i-F)}{R_i\alpha_i^2}<B_i\). As (12)–(14) and \(L\le x\le B\) assure that \(J,K,M\) partition \(N\), (b) follows and so \((F,J,K,M)\in\mathcal{Q}\). Finally, to verify that the correspondence defined by (10) is onto and that (11)–(14) define its inverse, it is enough to show that \(x^*\) is the image of \((F,J,K,M)\) under (10). Indeed, for \(i\in M\), (52) implies that \(x^*_i=\frac{F(R_i\alpha_i-F)}{R_i\alpha_i^2}\); as \(x_i^*=B_i\) for \(i\in J\) and \(x^*_i=L_i\) for \(i\in K\), \(x^*\) satisfies (10). □
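For concreteness, the correspondence established in the theorem can be evaluated numerically. The sketch below is an illustration only: it takes a candidate partition \((J,K,M)\) as given (it does not search for the partition satisfying Condition (b)), and it uses the form of (9) recovered by the summation step above, namely \((\sum_{i\in M}\frac{1}{R_i\alpha_i})z^2-(|M|-1)z-(\sum_{i\in J}B_i\alpha_i+\sum_{i\in K}L_i\alpha_i+\rho)=0\); since the exact statement of (9) appears in Section 3 and is not reproduced here, this reconstruction is an assumption. The numerical data are hypothetical.

```python
import numpy as np

def nash_from_partition(J, K, M, R, alpha, L, B, rho):
    """Sketch of the correspondence (F, J, K, M) -> x* of Theorem 3.1.

    The partition (J, K, M) of N = {0, ..., n-1} is taken as given; the
    function solves the reconstructed form of (9) for its unique positive
    root F and assembles x* via (10).  Illustrative only: it does not search
    for the partition satisfying Condition (b).
    """
    R, alpha, L, B = map(np.asarray, (R, alpha, L, B))
    const = sum(B[i] * alpha[i] for i in J) + sum(L[i] * alpha[i] for i in K) + rho
    gamma = sum(1.0 / (R[i] * alpha[i]) for i in M)   # coefficient of z^2 in (9)
    if gamma == 0.0:                                  # M empty: (9) is linear
        F = const
    else:                                             # unique positive root of (9)
        m = len(M)
        F = ((m - 1) + np.sqrt((m - 1) ** 2 + 4.0 * gamma * const)) / (2.0 * gamma)
    x = np.empty(len(R))
    for i in J:
        x[i] = B[i]                                   # firms at their budgets
    for i in K:
        x[i] = L[i]                                   # firms at their lower bounds
    for i in M:
        x[i] = F * (R[i] * alpha[i] - F) / (R[i] * alpha[i] ** 2)   # interior firms, (10)
    U = (R * alpha / F - 1.0) * x                     # equilibrium utilities, cf. Theorem 3.2(ii)
    return F, x, U

# Hypothetical three-firm example with all firms interior:
F, x, U = nash_from_partition(J=[], K=[], M=[0, 1, 2], R=[10.0, 8.0, 6.0],
                              alpha=[1.0, 1.2, 0.9], L=[0.0, 0.0, 0.0],
                              B=[5.0, 5.0, 5.0], rho=0.1)
```

In this hypothetical example every budget exceeds \(R_i/4\) and the computed \(x^*\) satisfies \(L_i<x_i^*<B_i\) for all firms, so the all-interior partition is consistent with Condition (b).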
Proof of Corollary 3.1
The corollary follows immediately from (10) and Condition (b). □
Proof of Corollary 3.2
(i) Suppose \(x_i^*>L_i\), \(R_j\alpha_j>R_i\alpha_i\) and \(L_j=0\). By (15),
implying that \(R_j\alpha_j>R_i\alpha_i>F\) and \(\frac{F(R_j\alpha_j-F)}{R_j\alpha_j^2}>0=L_j\). Since also \(B_j>0\), it follows that \(x^*_j>0=L_j\). (ii) Multiply (15) by \(\alpha_i\):
Suppose \(x_i^*>L_i\), \(R_j\alpha_j>R_i\alpha_i\) and \(B_j\alpha_j>B_i\alpha_i\). Then
(iii) If \(x_i^*>L_i\), then by (15), \(\frac{F(R_i\alpha_i-F)}{R_i\alpha_i^2}>L_i\). If \(R_j\alpha_j>R_i\alpha_i\), \(\alpha_j<\alpha_i\) and \(L_j\le L_i\) for \(j\in N\), then \(\frac{F(R_j\alpha_j-F)}{R_j\alpha_j^2}>\frac{F(R_i\alpha_i-F)}{R_i\alpha_i^2}>L_i\ge L_j\). Since also \(B_j>L_j\), by (15), \(x^*_j>L_j\). □
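Parts (i)–(iii) all invoke (15) through the same test: \(x_i^*>L_i\) exactly when \(B_i>L_i\) and the interior value \(\frac{F(R_i\alpha_i-F)}{R_i\alpha_i^2}\) exceeds \(L_i\). This usage is consistent with reading (15) as the projection of the interior value onto \([L_i,B_i]\), that is (a reconstruction inferred from how (15) is applied here, not a quotation of the equation itself),
\[
x_i^{*}\;=\;\min\Bigl\{B_i,\;\max\Bigl\{L_i,\;\frac{F(R_i\alpha_i-F)}{R_i\alpha_i^{2}}\Bigr\}\Bigr\},\qquad i\in N.
\]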
Proof of Lemma 3.1
(i) Let \(1\le t\le|\varPhi|\). If \(v_t=-1\), then \(F_t(\rho)=\theta_t+\zeta_t+\rho\), so \(\rho_t\) is trivially in the extended domain of \(F_t(\cdot)\). Alternatively, if \(v_t\ge0\), then
assuring that \(\rho_t\) is in the extended domain of \(F_t(\cdot)\).
Next, verify that \(\rho_t\) is in the extended domain of \(F_{t-1}(\cdot)\). This is trivial if \(v_{t-1}=-1\). Alternatively, if \(v_{t-1}\ge0\), then necessarily \(t\ge2\) and the following equality holds:
Recall that \(\varphi_t\in\varPhi=[\cup\{\{\overline{S}_i,\underline{S}_i\}:i\in N\ \text{with}\ B_i<R_i/4\}]\cup[\cup\{\{\overline{T}_i,\underline{T}_i\}:i\in N\}]\), so the proof of (53) is divided into four cases (closed-form expressions for these thresholds are recorded after the proof).
Case 1: If \(\varphi_t=\overline{T}_m\), then \(J_t=J_{t-1}\), \(K_t=K_{t-1}\setminus\{m\}\), \(M_t=M_{t-1}\cup\{m\}\), \(v_t=v_{t-1}+1\), \(\gamma_t=\gamma_{t-1}+\frac{1}{R_m\alpha_m}\), \(\theta_t=\theta_{t-1}\), \(\zeta_t=\zeta_{t-1}-L_m\alpha_m\), \(\varphi_t^2-R_m\alpha_m\varphi_t+R_m\alpha_m^2L_m=0\) (by (17)) and
verifying (53).
Case 2: If \(\varphi_t=\underline{T}_m\), then \(J_t=J_{t-1}\), \(K_t=K_{t-1}\cup\{m\}\), \(M_t=M_{t-1}\setminus\{m\}\), \(v_t=v_{t-1}-1\), \(\gamma_t=\gamma_{t-1}-\frac{1}{R_m\alpha_m}\), \(\theta_t=\theta_{t-1}\), \(\zeta_t=\zeta_{t-1}+L_m\alpha_m\), \(\varphi_t^2-R_m\alpha_m\varphi_t+R_m\alpha_m^2L_m=0\) (by (17)) and
verifying (53).
Case 3: If \(\varphi_t=\overline{S}_m\) and \(B_m<R_m/4\), then \(J_t=J_{t-1}\cup\{m\}\), \(K_t=K_{t-1}\), \(M_t=M_{t-1}\setminus\{m\}\), \(v_t=v_{t-1}-1\), \(\gamma_t=\gamma_{t-1}-\frac{1}{R_m\alpha_m}\), \(\theta_t=\theta_{t-1}+B_m\alpha_m\), \(\zeta_t=\zeta_{t-1}\), \(\varphi_t^2-R_m\alpha_m\varphi_t+R_m\alpha_m^2B_m=0\) (by (16)) and
verifying (53).
Case 4: If \(\varphi_t=\underline{S}_m\) and \(B_m<R_m/4\), then \(J_t=J_{t-1}\setminus\{m\}\), \(K_t=K_{t-1}\), \(M_t=M_{t-1}\cup\{m\}\), \(v_t=v_{t-1}+1\), \(\gamma_t=\gamma_{t-1}+\frac{1}{R_m\alpha_m}\), \(\theta_t=\theta_{t-1}-B_m\alpha_m\), \(\zeta_t=\zeta_{t-1}\), \(\varphi_t^2-R_m\alpha_m\varphi_t+R_m\alpha_m^2B_m=0\) (by (16)) and
completing the verification of (53) in all four cases. It now follows from (53) that
proving that \(\rho_t\) is in the extended domain of \(F_{t-1}(\cdot)\).
Next is the proof of the two equalities in (i). When \(v_t\ge0\), \(F_t(\rho)=\varphi_t\) if and only if
or equivalently (by squaring both sides, rearranging and dividing by \(4\gamma_t\)),
the last equality holding by (24); in particular, \(F_t(\rho_t)=\varphi_t\). Similarly, when \(v_{t-1}\ge0\), \(F_{t-1}(\rho)=\varphi_t\) if and only if
the last equality following from (53); in particular, \(F_{t-1}(\rho_t)=\varphi_t\). Next, if \(v_t=-1\), then \(\gamma_t=0\) and \(\varphi_t=F_t(\rho)=\theta_t+\rho\) is trivially equivalent to (54), so \(F_t(\rho_t)=\varphi_t\). Finally, assume that \(v_{t-1}=-1\) (i.e., \(M_{t-1}=\emptyset\)), in which case \(v_t=0\) and either \(\varphi_t=\underline{S}_m\) for some \(m\in N\) with \(B_m<R_m/4\) or \(\varphi_t=\overline{T}_m\) for some \(m\in N\). If \(\varphi_t=\underline{S}_m\) for some \(m\in N\) with \(B_m<R_m/4\), then \(\theta_t=\theta_{t-1}-B_m\alpha_m\), \(\zeta_t=\zeta_{t-1}\), \(v_t=0\), \(\gamma_t=\frac{1}{R_m\alpha_m}\), \(\varphi_t^2-R_m\alpha_m\varphi_t+R_m\alpha_m^2B_m=0\) (by (16)) and
Alternatively, if \(\varphi_t=\overline{T}_m\), then \(\theta_t=\theta_{t-1}\), \(\zeta_t=\zeta_{t-1}-L_m\alpha_m\), \(v_t=0\), \(\gamma_t=\frac{1}{R_m\alpha_m}\), (by (17))
and
(ii) For \(t=1,\dots,|\varPhi|-1\), part (i) implies that \(F_t(\rho_t)=\varphi_t>\varphi_{t+1}=F_t(\rho_{t+1})\). Since \(F_t(\cdot)\) is strictly increasing, \(\rho_t>\rho_{t+1}\). □
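As Cases 1–4 show, (16) and (17) characterize \(\overline{S}_m,\underline{S}_m\) and \(\overline{T}_m,\underline{T}_m\) as the roots of \(z^2-R_m\alpha_mz+R_m\alpha_m^2B_m=0\) and \(z^2-R_m\alpha_mz+R_m\alpha_m^2L_m=0\), respectively. Assuming, as the notation suggests, that the overlined symbol denotes the larger root, these thresholds admit the closed forms (whenever the discriminants are nonnegative)
\[
\overline{S}_m,\ \underline{S}_m=\frac{\alpha_m}{2}\Bigl(R_m\pm\sqrt{R_m(R_m-4B_m)}\Bigr),
\qquad
\overline{T}_m,\ \underline{T}_m=\frac{\alpha_m}{2}\Bigl(R_m\pm\sqrt{R_m(R_m-4L_m)}\Bigr),
\]
which also indicates why \(\overline{S}_m,\underline{S}_m\) enter \(\varPhi\) only for firms with \(B_m<R_m/4\): this is precisely the condition under which the first discriminant is positive.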
Proof of Theorem 3.2
(i) The definition of \(F_t(\rho)\) assures that \((F,J,K,M)=(F_t(\rho),J_t,K_t,M_t)\) satisfies Condition (a). To verify Condition (b), consider two cases: \(t=0\) and \(t\ge1\). If \(t=0\), then \(\varphi_1=\rho_1\le\rho\), \(\{i\in N:\frac{F_t(\rho)(R_i\alpha_i-F_t(\rho))}{R_i\alpha_i^2}\le L_i\}=\{i\in N:F_t(\rho)>\overline{T}_i\}=\{i\in N:\varphi_t>\overline{T}_i\}=N=K_t\) and so \(J_t=M_t=\emptyset\). If \(t\ge1\), then the strict monotonicity of \(F_t\), part (i) of Lemma 3.1 and \(\rho_{t+1}\le\rho<\rho_t\) imply that \(\varphi_{t+1}=F_t(\rho_{t+1})\le F_t(\rho)<F_t(\rho_t)=\varphi_t\), assuring that
Hence, \(N\setminus(J_t\cup K_t)=M_t\) and, in either case, \((F,J,K,M)=(F_t(\rho),J_t,K_t,M_t)\) satisfies (b).
To show that there is no other such quadruple, consider \((F,J,K,M)\) satisfying (a)–(b). If \(J\cup M=\emptyset\), then \(J=J_0\), \(M=M_0\) and (a) implies that \(F=F_0(\rho)=\rho\). By (b) and \(J=M=\emptyset\), \(\rho_1=\varphi_1\le F_0(\rho)=\rho\); as \(\rho_{t+1}\le\rho<\rho_t\), \(t=0\) and \((F,J,K,M)=(F_t(\rho),J_t,K_t,M_t)\). If \(J\cup M\neq\emptyset\), then \(\varphi_1>F\). So, \(\varphi_{q+1}\le F<\varphi_q\) for a unique \(q=1,\dots,|\varPhi|\),
and by (a), \(F=F_q(\rho)\). Since \(\varphi_{q+1}=F_q(\rho_{q+1})\le F=F_q(\rho)<\varphi_q=F_q(\rho_q)\) and \(F_q(\cdot)\) is strictly increasing, \(\rho_{q+1}\le\rho<\rho_q\). As \(\rho_{t+1}\le\rho<\rho_t\), it follows that necessarily \(q=t\) and \((F,J,K,M)=(F_t(\rho),J_t,K_t,M_t)\).
(ii) follows immediately from (i) and Theorem 3.1, where substituting (10) into \(U_i(x^*)=(\frac{R_i\alpha_i}{F}-1)x^*_i\) establishes (25). □
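Carrying out the substitution mentioned in the proof (a worked step; the right-hand sides are reconstructed from (10) rather than quoted from (25)): for \(i\in J\) and \(i\in K\), (10) gives \(x_i^*=B_i\) and \(x_i^*=L_i\), so \(U_i(x^*)=(\frac{R_i\alpha_i}{F}-1)B_i\) and \(U_i(x^*)=(\frac{R_i\alpha_i}{F}-1)L_i\), respectively, while for \(i\in M\),
\[
U_i(x^*)=\Bigl(\frac{R_i\alpha_i}{F}-1\Bigr)\frac{F(R_i\alpha_i-F)}{R_i\alpha_i^{2}}
=\frac{(R_i\alpha_i-F)^{2}}{R_i\alpha_i^{2}}.
\]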
Proof of Lemma 5.1
(i) Since the global utility function \(U(\cdot)\) is continuous on the compact nonempty set \(\{x\in\mathbb{R}^n:L\le x\le B\}\), \(U(x)\) attains a maximum, say at \(x^\star\). Given \(v^\star:=\alpha^Tx^\star\), standard results in linear programming show that the problem of maximizing \(\sum_{i=1}^nx_i(\frac{R_i\alpha_i}{v^\star+\rho}-1)\) over \(L\le x\le B\) with \(\alpha^Tx=v^\star\) admits an optimal solution \(x^\#\) with \(L_i<x^\#_i<B_i\) for at most one \(i\); any such \(x^\#\) has \(U(x^\#)=U(x^\star)\) and is therefore globally optimal.
(ii) If \(x^\star_i>L_i\) and \(R_i\alpha_i\le\alpha^Tx^\star+\rho\), then \((L_i,x_{-i}^\star)\) is feasible and
further, if either \(x_{-i}^\star\neq0\) or \(R_i\alpha_i<\alpha^Tx^\star+\rho\), then the inequality is strict, contradicting the optimality of \(x^\star\). Next assume \(x_{-i}^\star=0\) and \(R_i\alpha_i=\alpha^Tx^\star+\rho=\alpha_ix^\star_i+\rho\). For \(L_i<\delta<x_i^\star\ (\le B_i)\), it then follows that \((\delta,x_{-i}^\star)\) is feasible, \(\alpha_i\delta+\rho<\alpha_ix_i^\star+\rho=R_i\alpha_i\) and
contradicting the optimality of \(x^\star\).
(iii) Let \(x^\star\) be a globally optimal solution (existence follows from (i)). If \(R_i\alpha_i\le\rho\) for all \(i\in N\), then (ii) implies that \(x^\star_i=L_i\) for all \(i\in N\), i.e., \(x^\star=L\). Next, if \(L=0\) and \(R_i\alpha_i>\rho\) for some \(i\in N\), then for \(0<\delta<\frac{R_i\alpha_i-\rho}{\alpha_i}\), the point \(x\) with \(x_i=\delta\) and \(x_{-i}=0\) is feasible and has \(U(x)=\delta(\frac{R_i\alpha_i}{\alpha_i\delta+\rho}-1)>0=U(0)\), assuring that \(0\) is not globally optimal. □
Proof of Lemma 5.2
(i) For \(t\in N\) and \(v>\omega_{t-1}\),
and
If \(R_t\rho+\sum_{k=1}^{t-1}(R_t-R_k)\alpha_kB_k+\sum_{k=t}^n(R_t-R_k)\alpha_kL_k\le0\), then (55) is always negative and \(\widehat{U}^t(v)\) is strictly decreasing; in the alternative case, (56) is always negative and \(\widehat{U}^t(v)\) is strictly concave.
(ii) Consider \(t\in N\setminus\{n\}\) and \(v\ge\omega_t\). From (35) and (38),
with equality holding if and only if \(v=\omega_t\). Further, differentiating this difference with respect to \(v\) and evaluating it at \(v=\omega_t\) yields
where the inequality follows from (38).
(iii) Equation (34) implies that the maximization problem defining \(U^\#(v)\) (by (37)) is an \(n\)-item continuous knapsack problem, in which item \(i\in N\) has unit value \(a_i:=\frac{R_i\alpha_i}{v+\rho}-1\), unit weight \(c_i:=\alpha_i\), lower bound \(L_i\) and availability \(B_i\). The optimal solution of this problem is obtained by indexing the firms in decreasing order of \(\frac{a_i}{c_i}=\frac{R_i}{v+\rho}-\frac{1}{\alpha_i}\), first allocating \(L_i\) to each \(i\in N\) and then distributing \(v-\alpha^TL\) in increasing order of the firms' indices, up to their upper bounds (a code sketch of this allocation is given right after this proof). Due to the strict inequalities in (38), for \(v\in\mathcal{I}_t\), the unique optimal solution is \((B_1,\dots,B_{t-1},L_t+\frac{v-\omega_{t-1}}{\alpha_t},L_{t+1},\dots,L_n)\) and (using (35) and (36)) \(U^\#(v)=\widehat{U}^t(v)=\widehat{U}(v)\). Since this is true for all \(t\in N\), \(U^\#=\widehat{U}\) on \([\omega_0,\omega_n]\).
(iv) To show that for each \(1\le t\le n\), there exists a value \(\omega_0\le v^t\le\omega_t\) such that \(\widehat{U}\) is strictly increasing on \([\omega_0,v^t]\) and strictly decreasing on \([v^t,\omega_t]\), use induction. This holds for \(t=1\), since \(\widehat{U}(v)=\widehat{U}^1(v)\) on \([\omega_0,\omega_1]\) by (iii) and \(\widehat{U}^1\) is unimodal by (i). Suppose that for \(1\le t-1\le n-1\), \(\widehat{U}(v)\) is strictly increasing on \([\omega_0,v^{t-1}]\) and strictly decreasing on \([v^{t-1},\omega_{t-1}]\) for some \(\omega_0\le v^{t-1}\le\omega_{t-1}\). If \(v^{t-1}=\omega_{t-1}\), then \(\widehat{U}(v)\) is strictly increasing on \([\omega_0,v^t]\) and strictly decreasing on \([v^t,\omega_t]\) for some \(\omega_0\le v^t\le\omega_t\) (by (i)). Alternatively, if \(v^{t-1}<\omega_{t-1}\), then \(\frac{d\widehat{U}^{t-1}(\omega_{t-1})}{dv}<0\) and, by (39), \(\frac{d\widehat{U}^t(\omega_{t-1})}{dv}<0\); therefore, by the unimodality of \(\widehat{U}^t\), it must be strictly decreasing on \([\omega_{t-1},\infty)\). This implies that \(\widehat{U}\) is strictly increasing on \([\omega_0,v^{t-1}]\) and strictly decreasing on \([v^{t-1},\omega_t]\).
(v) Evidently,
the restriction \(\omega_0\le v\le\omega_n\) can be imposed since \(\{x\in\mathbb{R}^n:L\le x\le B,\ \alpha^Tx=v\}\neq\emptyset\) if and only if \(\omega_0\le v\le\omega_n\). Let \(v^\star\in\mathcal{I}_{t^\star}\) be the unique maximizer of \(U^\#\) over \([\omega_0,\omega_n]\) (existence and uniqueness follow from (iv)). Then any maximizer of \(U\) over \(X^\star:=\{x\in\mathbb{R}^n:L\le x\le B,\ \alpha^Tx=v^\star\}\) is a globally optimal solution, and (iii) shows that \(x^\star\) given by (40) is such a maximizer. Now suppose \(x'\) is a globally optimal solution. Then \(U^\#(\alpha^Tx')\ge U(x')\ge U(x^\star)=U^\#(v^\star)\) and the uniqueness of \(v^\star\) implies that \(\alpha^Tx'=v^\star\) and that \(x'\) attains the maximum of \(U(x)\) over \(X^\star\); by (iii), \(x^\star\) is the unique maximizer, hence \(x'=x^\star\). □
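The continuous-knapsack description in Lemma 5.2(iii) translates directly into code. The sketch below is an illustration only: the data are hypothetical, the objective \(\sum_ix_i(\frac{R_i\alpha_i}{v+\rho}-1)\) is the one appearing in the proof of Lemma 5.1(i), and the bounds \(\omega_0=\alpha^TL\) and \(\omega_n=\alpha^TB\) are taken from the feasibility remark in part (v). Given a total investment level \(v\in[\omega_0,\omega_n]\), it performs the greedy allocation of part (iii) and returns the resulting allocation and value \(U^{\#}(v)\).

```python
import numpy as np

def greedy_allocation(v, R, alpha, L, B, rho):
    """Greedy solution of the continuous knapsack in Lemma 5.2(iii).

    Firms are ranked by a_i / c_i = R_i / (v + rho) - 1 / alpha_i; each firm
    first receives its lower bound L_i, and the remaining budget v - alpha^T L
    is handed to the highest-ranked firms, each up to its upper bound B_i.
    Assumes alpha^T L <= v <= alpha^T B (i.e., omega_0 <= v <= omega_n).
    """
    R, alpha, L, B = map(np.asarray, (R, alpha, L, B))
    x = L.astype(float).copy()
    slack = v - alpha @ L                                 # weight left to place
    order = np.argsort(-(R / (v + rho) - 1.0 / alpha))    # decreasing a_i / c_i
    for i in order:
        if slack <= 0.0:
            break
        add = min(B[i] - L[i], slack / alpha[i])          # respect the upper bound
        x[i] += add
        slack -= alpha[i] * add
    value = float(x @ (R * alpha / (v + rho) - 1.0))      # U#(v) = U-hat(v)
    return x, value

# Hypothetical data: allocation and value at one particular total v.
x, val = greedy_allocation(v=2.5, R=[10.0, 8.0, 6.0], alpha=[1.0, 1.2, 0.9],
                           L=[0.2, 0.2, 0.2], B=[2.0, 2.0, 2.0], rho=0.1)
```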
Proof of Theorem 5.1
Let \(t^\star\) and \(v^\star\) be as in the statement of the theorem; both are well defined and unique, since \(\widehat{U}^{t^\star}\) is unimodal (by Lemma 5.2(i)). From Lemma 5.2(iv), \(U^\#\) is unimodal on \([\omega_0,\omega_n]\), so it has a unique maximizer on this interval, say \(v'\). If \(\omega_{t^\star-1}\le v'\le\omega_{t^\star}\), then \(v'=v^\star\), since \(\widehat{U}^{t^\star}\) has a unique maximizer. If \(v'>\omega_{t^\star}\), then for some \(k\ge1\),
contradicting the optimality of \(v'\). Finally, if \(v'<\omega_{t^\star-1}\), then \(t^\star>1\). By definition of \(t^\star\), \(\widehat{U}^t\) is strictly increasing on \([\omega_{t-1},\omega_t]\) for each \(t<t^\star\), so \(U^\#=\widehat{U}\) is strictly increasing on \([\omega_0,\omega_{t^\star-1}]\), again contradicting the optimality of \(v'\). The conclusion that \(x^\star\) is the unique globally optimal solution now follows from Lemma 5.2(v). Finally, observe that as \(x^\star_i>L_i\) for \(i\le t^\star-1\), Lemma 5.1(ii) assures that \(R_i\alpha_i>\rho\), implying that \(t^\star-1\le|\{i\in N:R_i\alpha_i>\rho\}|\). □
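By Lemma 5.2(iv)–(v), the global problem reduces to maximizing the unimodal function \(U^{\#}\) over \([\omega_0,\omega_n]\). The sketch below is again only an illustration: it applies a generic golden-section search, which exploits unimodality but is not the closed-form method based on the \(\widehat{U}^{t}\) that the theorem describes, and it repeats the greedy allocation of Lemma 5.2(iii) so as to remain self-contained; the data are hypothetical.

```python
import numpy as np

def u_sharp(v, R, alpha, L, B, rho):
    """Value and allocation of the greedy knapsack of Lemma 5.2(iii) at total v."""
    x, slack = L.astype(float).copy(), v - alpha @ L
    for i in np.argsort(-(R / (v + rho) - 1.0 / alpha)):  # decreasing a_i / c_i
        add = min(B[i] - L[i], max(slack, 0.0) / alpha[i])
        x[i] += add
        slack -= alpha[i] * add
    return float(x @ (R * alpha / (v + rho) - 1.0)), x

def maximize_global(R, alpha, L, B, rho, tol=1e-8):
    """Golden-section search for the maximizer of the unimodal U# on [omega_0, omega_n]."""
    R, alpha, L, B = map(np.asarray, (R, alpha, L, B))
    lo, hi = float(alpha @ L), float(alpha @ B)           # omega_0 and omega_n
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
    while hi - lo > tol:
        if u_sharp(a, R, alpha, L, B, rho)[0] < u_sharp(b, R, alpha, L, B, rho)[0]:
            lo, a = a, b                                  # maximizer lies in [a, hi]
            b = lo + phi * (hi - lo)
        else:
            hi, b = b, a                                  # maximizer lies in [lo, b]
            a = hi - phi * (hi - lo)
    v_star = 0.5 * (lo + hi)
    value, x_star = u_sharp(v_star, R, alpha, L, B, rho)
    return v_star, x_star, value

# Hypothetical data:
v_star, x_star, value = maximize_global(R=[10.0, 8.0, 6.0], alpha=[1.0, 1.2, 0.9],
                                        L=[0.2, 0.2, 0.2], B=[2.0, 2.0, 2.0], rho=0.1)
```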
Proof of Lemma 5.3
For \(v\ge0\), let \(f_{ij}(v):=\frac{R_i-R_j}{v+\rho}+\frac{1}{\alpha_j}-\frac{1}{\alpha_i}\). Then \(f'_{ij}(v)=\frac{R_j-R_i}{(v+\rho)^2}\) and, by (41), \(f_{ij}(0)>0\).
(i) If \(R_i\le R_j\), then \(f'_{ij}(v)=\frac{R_j-R_i}{(v+\rho)^2}\ge0\) for each \(v\ge0\), implying that \(f_{ij}(v)\) is increasing in \(v>0\). As \(f_{ij}(0)>0\), conclude that \(f_{ij}(v)>0\) for all \(v\ge0\), verifying (42). Next, if \(R_i>R_j\) and \(\alpha_i\ge\alpha_j\), then \(f_{ij}(v)>0\) for all \(v\ge0\), again verifying (42).
(ii) If \(R_i>R_j\) and \(\alpha_i<\alpha_j\), then \(f'_{ij}(v)<0\) for each \(v\ge0\), so \(f_{ij}(v)\) is strictly decreasing in \(v\ge0\). Further, \(v_{ij}=\frac{R_i-R_j}{\frac{1}{\alpha_i}-\frac{1}{\alpha_j}}-\rho>0\) is the unique root of \(f_{ij}\), where the positivity follows from (41). The conclusions (43)–(45) now follow easily. □
Proof of Corollary 5.1
Equation (46) implies that
i.e., (41) holds for \(1\le i<j\le n\). Since (46) also implies the assumptions of Lemma 5.3(i) for \(1\le i<j\le n\), the corresponding conclusion of that lemma assures that (38) holds. Next, (47) implies that for \(v\ge0\), \(\frac{R_i}{v+\rho}-\frac{1}{\alpha_i}=\frac{1}{\alpha_i}[\frac{R_i\alpha_i}{v+\rho}-1]\) is strictly decreasing in \(i\in N\), again verifying (38). If (48) holds, then (38) is trivially satisfied, as \(\widehat{U}(v)\) is not defined for \(v>\sum_{k=1}^n\alpha_kB_k\). Under any one of these three conditions, (38) holds and the conclusions of Theorem 5.1 follow. □