Abstract
We study a chance-constrained optimization problem where the random variable appearing in the chance constraint follows a normal distribution whose mean and variance both depend linearly on the decision variables. Such structure may arise in many applications, including the normal approximation to the Poisson distribution. We present a polynomial-time algorithm to solve the resulting nonconvex optimization problem, and illustrate the efficacy of our method using a numerical experiment.
Acknowledgements
The author would like to thank Andrew Schaefer, Temitayo Ajayi and Saumya Sinha for helpful comments and conversations.
Funding
This work was supported by the United States Department of Defense through the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program.
Appendices
Appendix A Solution to the single-edge subproblem
We now derive a solution for the single-edge subproblem (6), which is solved repeatedly in Algorithm 1. For the remainder of the section, we fix \(i\in {N}\) and \({\hat{a}},{\hat{b}},{\hat{c}}\in {\mathbb {R}}\). We are interested in solving \(\displaystyle \max _{\gamma \in [0,1]}\{c_i\gamma \mid {g(\gamma )\le 0}\}\) where
$$g(\gamma )={\hat{a}}+a_i\gamma +\phi \sqrt{{\hat{b}}+b_i\gamma }-{\hat{c}}.$$
The function \(g\) is defined only for \(\gamma \ge \gamma _0=-{\hat{b}}/b_i\), and is differentiable only for \(\gamma >\gamma _0\), with derivative \(g'(\gamma )=a_i+\tfrac{1}{2}\phi b_i/\sqrt{b_i\gamma +{\hat{b}}}\). From the form of the derivative, we see that if \(a_i\ge 0\), then \(g\) is monotonically increasing on its domain, while if \(a_i<0\), then \(g\) increases monotonically up to a point \(\gamma _{\mathrm {max}}\) and decreases monotonically thereafter. Hence, there are several cases, depending on the signs of \(a_i\) and \(g(\gamma _0)\):
- Case 1: \(a_i\ge 0\) and \(g(\gamma _0)>0\). Then \(g(\gamma )>0\) for all \(\gamma \in [\gamma _0,\infty )\), and thus (6) is infeasible.
- Case 2: \(a_i\ge 0\) and \(g(\gamma _0)\le 0\). Then \(g\) has a single root \(r_0\), which can be computed with the quadratic formula, and \(g(\gamma )\le 0\) for all \(\gamma \in [\gamma _0,r_0]\). If \(r_0<0\), then (6) is infeasible. Otherwise \(r_0\ge 0\), and the optimal solution to (6) is
  $$\gamma ^*=\begin{cases} \min \{1,r_0\}, & c_i>0,\\ 0, & \text {otherwise}. \end{cases}$$
- Case 3: \(a_i<0\) and \(g(\gamma _0)>0\). Then \(g\) has a single root \(r_0\), and \(g(\gamma )\le 0\) for all \(\gamma \in [r_0,\infty )\). If \(r_0>1\), then (6) is infeasible. Otherwise \(r_0\le 1\), and the optimal solution is
  $$\gamma ^*=\begin{cases} 1, & c_i>0,\\ \max \{0,r_0\}, & \text {otherwise}. \end{cases}$$
- Case 4: \(a_i<0\) and \(g(\gamma _0)\le 0\). In this case, \(g\) has a single local maximum at \(\gamma _{\mathrm {max}}=-\big ({\hat{b}}-\tfrac{1}{4}(\phi b_i/a_i)^2\big )/b_i\).
  - Case 4a: \(g(\gamma _{\mathrm {max}})>0\). Then \(g\) has two roots \(r_0\) and \(r_1\), and \(g(\gamma )\le 0\) for all \(\gamma \in [\gamma _0,r_0]\cup [r_1,\infty )\). If \(r_0<0\) and \(r_1>1\), then (6) is infeasible. If \(c_i>0\), then
    $$\gamma ^*=\begin{cases} r_0, & r_0<1<r_1,\\ 1, & \text {otherwise}. \end{cases}$$
    Otherwise, if \(c_i\le 0\), then
    $$\gamma ^*=\begin{cases} r_1, & r_0<0<r_1,\\ 0, & \text {otherwise}. \end{cases}$$
  - Case 4b: \(g(\gamma _{\mathrm {max}})\le 0\). Then \(g(\gamma )\le 0\) for all \(\gamma \ge \gamma _0\), and
    $$\gamma ^*=\begin{cases} 1, & c_i>0,\\ 0, & \text {otherwise}. \end{cases}$$
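Because the objective \(c_i\gamma \) is linear and the feasible region \(\{\gamma \in [0,1]\mid g(\gamma )\le 0\}\) is a union of closed intervals, the optimum always lies at \(\gamma =0\), \(\gamma =1\), the domain edge \(\gamma _0\), or a root of \(g\). The following Python sketch collapses the case analysis above into this candidate enumeration. It is illustrative only: it assumes \(b_i>0\) and the reconstructed form of \(g\) displayed earlier, and all function and variable names are ours rather than the paper's.

```python
import math

def edge_roots(a, b, bhat, c0, phi):
    """Real roots of g(t) = a*t + phi*sqrt(b*t + bhat) + c0 on t >= -bhat/b.

    Substituting s = sqrt(b*t + bhat) >= 0 turns g(t) = 0 into the quadratic
    (a/b)*s**2 + phi*s + (c0 - a*bhat/b) = 0; each root s >= 0 maps back to
    t = (s**2 - bhat)/b, and the substitution is bijective on the domain.
    """
    roots = []
    if abs(a) < 1e-12:  # g is affine in s
        if abs(phi) > 1e-12 and -c0 / phi >= 0.0:
            s = -c0 / phi
            roots.append((s * s - bhat) / b)
    else:
        A, B, C = a / b, phi, c0 - a * bhat / b
        disc = B * B - 4.0 * A * C
        if disc >= 0.0:
            for sgn in (1.0, -1.0):
                s = (-B + sgn * math.sqrt(disc)) / (2.0 * A)
                if s >= 0.0:
                    roots.append((s * s - bhat) / b)
    return roots

def solve_single_edge(a, b, bhat, c0, phi, ci, tol=1e-9):
    """max ci*t  s.t.  g(t) <= 0,  t in [0, 1].  Returns t*, or None if infeasible."""
    t0 = -bhat / b  # left endpoint of the domain of g
    g = lambda t: a * t + phi * math.sqrt(max(b * t + bhat, 0.0)) + c0
    # A linear objective over a union of closed intervals attains its optimum
    # at an interval endpoint: 0, 1, a root of g, or the domain edge t0.
    candidates = [0.0, 1.0, max(t0, 0.0)] + edge_roots(a, b, bhat, c0, phi)
    feasible = [t for t in candidates
                if 0.0 <= t <= 1.0 and t >= t0 - tol and g(t) <= tol]
    return max(feasible, key=lambda t: ci * t) if feasible else None
```

Enumerating candidates avoids branching on the four cases explicitly while remaining faithful to them: in each case, the closed-form optimum above is one of the enumerated endpoints.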
Appendix B Proof of Proposition 5
Lemma 13
Fix \(y\in [0,{\bar{b}}]\) such that (5) is feasible, and let \(\ell \in \{0,\dots ,L\}\) be such that \(y\in [y_\ell ,y_{\ell +1}]\). Then there exists an optimal solution \(x^*\) of (5) such that \(x_i^*=0\) for all \(i\in N^0_\ell\) and \(x_i^*=1\) for all \(i\in {N^1_\ell }\).
Proof
Fix \(y\in [0,{\bar{b}}]\) such that (5) is feasible, and let \(\ell \in \{0,\dots ,L\}\) be such that \(y\in [y_\ell ,y_{\ell +1}]\). Let \(x'\) be an optimal solution for (5). Suppose that \(x'_i>0\) for some \(i\in {N^0_\ell }\). The vector obtained by decreasing \(x'_i\) to 0 (and leaving all other coordinates unchanged) is feasible, because \(p_i(y)\ge 0\), and produces an objective value no worse than that of \(x'\), because \(c_i\le 0\). We conclude that this modified point is also optimal for (5). Repeating this argument for every \(i\in {N^0_\ell }\) such that \(x'_i>0\), we obtain an optimal solution \({\hat{x}}\) with \({\hat{x}}_i=0\) for all \(i\in {N^0_\ell }\). Now suppose that \({\hat{x}}_i<1\) for some \(i\in {N^1_\ell }\). If we increase \({\hat{x}}_i\) to 1, the resulting vector remains feasible (because \(p_i(y)\le 0\)) and produces an objective value no worse than that of \({\hat{x}}\) (because \(c_i\ge 0\)). Repeating this procedure for all \(i\in {N^1_\ell }\) such that \({\hat{x}}_i<1\), we obtain a solution \({\tilde{x}}\) such that \({\tilde{x}}_i=0\) for all \(i\in {N^0_\ell }\) and \({\tilde{x}}_i=1\) for all \(i\in {N^1_\ell }\), as desired. \(\square\)
Now consider the following fractional knapsack problem, parameterized by \(y\ge 0\),
where \(\ell \in \{0,\dots ,L\}\) is such that \(y\in [y_\ell ,y_{\ell +1}]\).
Lemma 14
Suppose that \(v^*\in {\mathbb {R}}^{n_\ell }\) is optimal for (B1). Then the vector \(x^*\in {\mathbb {R}}^n\) given by
$$x^*_i=\begin{cases} 0, & i\in N^0_\ell ,\\ 1, & i\in N^1_\ell ,\\ v^*_i, & i\in N^+_\ell ,\\ 1-v^*_i, & i\in N^-_\ell \end{cases}$$
is optimal for (5).
Proof
For brevity, we drop the subscript \(\ell\) on the index sets \(N^+\), \(N^-\), etc. throughout this proof. Suppose that \(v^*\in {\mathbb {R}}^{n_\ell }\) is optimal for (B1), and consider the optimization problem obtained from (B1) by replacing the variables \(v_i\) for \(i\in {N^-}\) with \(1-v_i\):
By applying the variable transformation \(v_i\mapsto 1-v_i\) for \(i\in {N^-}\), we have that the vector \(v'\in {\mathbb {R}}^{n_\ell }\) given by
is optimal for (B2). Next, using the fact that \(c_i>0\), \(p_i(y)\ge 0\) for all \(i\in {N^+}\) and \(c_i<0\), \(p_i(y)\le 0\) for all \(i\in {N^-}\), the problem (B2) can be equivalently expressed as
and thus \(v'\) is optimal for (B3). The optimization problem (B3) may be equivalently expressed as
and thus the vector \(v''\in {\mathbb {R}}^n\) given by
is optimal for (B4). Note that (B4) is exactly (5), with the additional constraints (B4d) and (B4e). Consequently, the vector \(v''\) is feasible for (5). We now show that \(v''\) is optimal for (5). Suppose not. Then by Lemma 13, there exists a point \({\hat{v}}\in {\mathbb {R}}^n\) that is feasible for (5) with \(c^\top {\hat{v}}>c^\top v''\), \({\hat{v}}_i=0\) for all \(i\in {N^0}\), and \({\hat{v}}_i=1\) for all \(i\in {N^1}\). Hence, the vector \({\hat{v}}\) is feasible for (B4) and produces a strictly better objective value than \(v''\). This contradicts the optimality of \(v''\) for (B4), and we conclude that \(v''\) is optimal for (5), as desired. \(\square\)
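As a concrete rendering of the change of variables in this proof, the sketch below lifts an optimal solution of (B1) back to a point of (5). It is a sketch only: the index sets are passed explicitly, and v_star is assumed to be a dict keyed by the original indices in \(N^+\cup N^-\); none of these names come from the paper.

```python
def lift_knapsack_solution(v_star, N0, N1, Nplus, Nminus, n):
    """Lift an optimal v* of the reduced knapsack (B1) to a point of (5),
    per Lemma 14: fix 0 on N^0 and 1 on N^1, copy v* on N^+, and flip it
    on N^- (undoing the substitution v_i -> 1 - v_i used to form (B2))."""
    x = [0.0] * n          # x_i = 0 for i in N^0 by default
    for i in N1:
        x[i] = 1.0
    for i in Nplus:
        x[i] = v_star[i]
    for i in Nminus:
        x[i] = 1.0 - v_star[i]
    return x
```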
Note that, for all \(i\in {N^+}\cup {N^-}\) and \(y\not \in {Y^1}\), we have \(p_i(y)/c_i>0\), and thus \(|p_i(y)/c_i|=p_i(y)/c_i\). Lemma 15 follows immediately from the well-known greedy algorithm for fractional knapsack problems with strictly positive weight and cost coefficients, and its proof is omitted; a sketch of the greedy scheme is given after the lemma.
Lemma 15
For all \(\ell \in \{0,\dots ,L\}\) and \(y\in [y_\ell ,y_{\ell +1}]\setminus Y^1\), there exist \(v^*\in {\mathbb {R}}^{n_\ell }\) and \(k\in \{1,\dots ,n_\ell \}\) such that \(v^*\) is optimal for (B1), and
$$v^*_{\pi _\ell (j)}=1 \ \text {for } j<k, \qquad v^*_{\pi _\ell (k)}\in [0,1], \qquad v^*_{\pi _\ell (j)}=0 \ \text {for } j>k. \qquad (*)$$
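For reference, here is a minimal sketch of the ratio-greedy scheme underlying Lemma 15, written for generic data \(\max \{c^\top v\mid w^\top v\le r,\ 0\le v\le 1\}\) with strictly positive \(c\) and \(w\); the names are illustrative, not the paper's.

```python
def greedy_fractional_knapsack(c, w, r):
    """Greedy optimum of max c^T v s.t. w^T v <= r, 0 <= v <= 1, assuming
    c[i] > 0 and w[i] > 0 for all i. Items are filled in decreasing order
    of the ratio c[i]/w[i], so at most one coordinate of the returned v is
    fractional -- the critical index k in the structure (*)."""
    order = sorted(range(len(c)), key=lambda i: c[i] / w[i], reverse=True)
    v, remaining = [0.0] * len(c), r
    for i in order:
        if remaining <= 0.0:
            break
        v[i] = min(1.0, remaining / w[i])
        remaining -= v[i] * w[i]
    return v
```

The sort dominates the running time, so each call costs \(O(n_\ell \log n_\ell )\).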
Proof of Proposition 5
Fix \(\ell \in \{0,\dots ,L\}\) and \(y'\in [y_\ell ,y_{\ell +1}]\setminus {Y^1}\). By Lemma 15, there exist a vector \(v^*\in {\mathbb {R}}^{n_\ell }\) and an index \(k\in \{1,\dots ,n_\ell \}\) satisfying \((*)\) such that \(v^*\) is optimal for (B1) with \(y=y'\). Hence, by Lemma 14, the vector \(x^*\in {\mathbb {R}}^n\) given by
$$x^*_i=\begin{cases} 0, & i\in N^0_\ell ,\\ 1, & i\in N^1_\ell ,\\ v^*_i, & i\in N^+_\ell ,\\ 1-v^*_i, & i\in N^-_\ell \end{cases}$$
is optimal for (5). To complete the proof, it suffices to show that \(x^*\in {E^k_\ell }\). By construction, we have that \(x^*_{\pi _\ell (k)}\in [0,1]\). Also, \(x^*_i=0\) for all \(i\in {N_\ell ^0}\), and \(x^*_i=1\) for all \(i\in {N_\ell ^1}\). If \(i\in N_\ell ^+\) and \(\pi _\ell ^{-1}(i)>k\), then \(x^*_i=v_i^*=0\). If \(i\in {N_\ell ^-}\) and \(\pi _\ell ^{-1}(i)<k\), then \(v_i^*=1\), so \(x_i^*=1-v_i^*=0\). Similarly, if \(i\in {N_\ell ^+}\) and \(\pi _\ell ^{-1}(i)<k\), then \(x_i^*=v_i^*=1\). Lastly, if \(i\in {N_\ell ^-}\) and \(\pi _\ell ^{-1}(i)>k\), then \(v_i^*=0\), so \(x_i^*=1-v_i^*=1\). We conclude that \(x^*\in E^k_\ell\), as desired. \(\square\)