Abstract
This paper focuses on finding a solution maximizing the joint probability of satisfaction of a given set of (independent) Gaussian bilateral inequalities. A specially structured reformulation of this nonconvex optimization problem is proposed, in which all nonconvexities are embedded in a set of 2-variable functions composing the objective. From this, it is shown how a polynomial-time solvable convex relaxation can be derived. Extensive computational experiments are also reported and compared with previously existing results, showing that the approach typically yields feasible solutions and upper bounds within much sharper confidence intervals.
Acknowledgements
The three anonymous reviewers are gratefully acknowledged for their remarks and constructive comments, which resulted in an improved revised version of the paper.
Appendix
1.1 Proposition A1
Let \(\gamma \ge 0\) and \(\rho : {\mathbb {R}}_{+}\rightarrow {\mathbb {R}}\) be defined as:
Then:
- (i) \(\rho \) can have at most one zero, and when such a zero exists, it necessarily belongs to the interval \(\left[ \sqrt{1+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}; \sqrt{2+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}\right] \);
- (ii) if \(\gamma > 0\), \(\rho \) is strictly decreasing on the above interval.
Proof
We first show that, for all \(z \ge \gamma > 0\),
To that aim, we consider the function \(\zeta : {\mathbb {R}}_{+}\rightarrow {\mathbb {R}}\) defined as:
Since \(\frac{d\zeta (z)}{dz}=(1-z(z+\gamma ))e^{-\frac{z^{2}}{2}}-e^{-\frac{z^{2}}{2}}=-z(z+\gamma )e^{-\frac{z^{2}}{2}}\), \(\zeta \) is strictly decreasing on \(\left. \right] 0, +\infty \left[ \right. \). Thus, to show that \(\zeta (z) \le 0\) (for \(z \ge \gamma \)), it suffices to show that \(\zeta (\gamma )=2\gamma e^{-\frac{\gamma ^{2}}{2}}-\sqrt{2\pi }(2\varPhi (\gamma )-1) \le 0\).
Let us consider the function \(\xi \) defined, for any \(\gamma \ge 0\), as:
Observing that: \(\frac{d\xi (\gamma )}{d\gamma }=-2\gamma ^{2}e^{-\frac{\gamma ^{2}}{2}} < 0\) for all \(\gamma >0\), and that \(\xi (0)=0\), we can conclude that \(\xi (\gamma ) \le 0\) for all \(\gamma \ge 0\), and from this it follows that \(\zeta (\gamma ) \le 0\).
As an immediate consequence of the above property, we deduce \(\rho (z) \ge 1-z(z+\gamma )\) and thus, for all \(z \ge \gamma \):
Any zero \({\bar{z}}\) of \(\rho (z)\) should thus meet the double-sided inequality \(1 \le {\bar{z}}({\bar{z}}+\gamma ) \le 2\) or equivalently:
which proves (i).
Now, to prove (ii), we determine an upper bound for the derivative \(\frac{d\rho (z)}{dz}\) on the above interval and show that this upper bound is negative. The only tricky part is to bound from above the derivative of
The latter reads:
On the interval \(\left[ \sqrt{1+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}; \sqrt{2+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}\right] \), we know that \(z(z+\gamma )-1\) can be bounded from above by 1, and from (29) that \(\dfrac{(z+\gamma )e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi }\left( \varPhi (\gamma )+\varPhi (z)-1 \right) } \) can be bounded from above by 1. Thus,
and, using once more (29), we get:
We thus deduce the following upper bound for \(\dfrac{d\rho (z)}{dz}\) on the interval \(\left[ \sqrt{1+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}; \sqrt{2+\frac{\gamma ^{2}}{4}} -\frac{\gamma }{2}\right] \):
Since, in the interval under consideration, it holds: \(z(z+\gamma ) \ge 1\), we deduce that, if \(\gamma >0, \dfrac{d\rho (z)}{dz} < 0\) on this interval and this proves (ii).\(\square \)
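As a sanity check, Proposition A1 can be verified numerically. The snippet below uses an assumed form of \(\rho \) reconstructed from the bounds appearing in the proof, namely \(\rho (z)=2-z(z+\gamma )-\frac{(z+\gamma )e^{-z^{2}/2}}{\sqrt{2\pi }\left( \varPhi (\gamma )+\varPhi (z)-1\right) }\) (the defining display is not reproduced here, so this is a reconstruction, not the paper's verbatim definition); for a few illustrative values of \(\gamma \), it checks the sign change of \(\rho \) over the stated interval and its strict decrease there.

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rho(z, gamma):
    # Assumed form of rho (reconstruction consistent with the proof's bounds):
    # rho(z) = 2 - z(z+gamma) - (z+gamma) e^{-z^2/2} / (sqrt(2 pi)(Phi(gamma)+Phi(z)-1))
    num = (z + gamma) * math.exp(-z * z / 2.0)
    den = math.sqrt(2.0 * math.pi) * (Phi(gamma) + Phi(z) - 1.0)
    return 2.0 - z * (z + gamma) - num / den

checks = []
for gamma in (0.1, 0.5, 1.0):
    lo = math.sqrt(1.0 + gamma ** 2 / 4.0) - gamma / 2.0  # left end: z(z+gamma) = 1
    hi = math.sqrt(2.0 + gamma ** 2 / 4.0) - gamma / 2.0  # right end: z(z+gamma) = 2
    sign_change = rho(lo, gamma) >= 0.0 >= rho(hi, gamma)
    grid = [lo + k * (hi - lo) / 50.0 for k in range(51)]
    decreasing = all(rho(a, gamma) > rho(b, gamma) for a, b in zip(grid, grid[1:]))
    checks.append(sign_change and decreasing)
print(all(checks))
```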
1.2 Proposition A2
The function \({\tilde{\rho }}: {\mathbb {R}}_{+}\rightarrow {\mathbb {R}}\) defined as:
has a unique zero \({\bar{z}}\) on \({\mathbb {R}}_{+}\setminus \{0\}\), which necessarily lies within the interval \(\left[ 1; \sqrt{2}\right] \). Moreover, \({\tilde{\rho }}\) is strictly decreasing on this interval.
Proof
We first show that, for all \(z > 0\),
The derivative of the function \(\zeta (z)=ze^{-\frac{z^{2}}{2}}-\sqrt{2\pi }\left( \varPhi (z)-\frac{1}{2} \right) \) is equal to \((1-z^{2})e^{-\frac{z^{2}}{2}}-e^{-\frac{z^{2}}{2}}=-z^{2}e^{-\frac{z^{2}}{2}}\), hence, \(\zeta \) is strictly decreasing on \(\left. \right] 0, +\infty \left[ \right. \) and since \(\zeta (0)=0\), it follows that \(\zeta (z) \le 0\) for all \(z \ge 0\), and this proves (31).
Now, if \({\bar{z}}\) is a zero of \({\tilde{\rho }}(z)\), it holds:
Since the right-hand side above is necessarily included in the interval \(\left[ 0;1 \right] \), it is seen that \({\bar{z}}\) has to belong to the interval \(\left[ 1;\sqrt{2} \right] \). To show strict monotonicity of \({\tilde{\rho }}\) on this interval, we compute the derivative:
Using (31) and the fact that \(z \ge 1\), we can derive the following upper bound for \(\dfrac{d{\tilde{\rho }}}{dz}\):
Thus \({\tilde{\rho }}\) is strictly decreasing on \(\left[ 1;\sqrt{2} \right] \) and has a unique zero on this interval, which can be determined to any prescribed accuracy by dichotomic search (a 6-digit approximation is 1.175461). \(\square \)
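The dichotomic (bisection) search mentioned above is straightforward to implement. The sketch below assumes \({\tilde{\rho }}(z)=2-z^{2}-\frac{z e^{-z^{2}/2}}{\sqrt{2\pi }\left( \varPhi (z)-\frac{1}{2}\right) }\), i.e. the \(\gamma =0\) counterpart of \(\rho \) as reconstructed from the proof's bounds (the defining display is not reproduced here), and recovers the approximation quoted in the proof.

```python
import math

def rho_tilde(z):
    # Assumed form of rho~ (reconstruction; the gamma = 0 case of rho):
    # rho~(z) = 2 - z^2 - z e^{-z^2/2} / (sqrt(2 pi)(Phi(z) - 1/2))
    phi_shift = 0.5 * math.erf(z / math.sqrt(2.0))  # Phi(z) - 1/2
    return 2.0 - z * z - z * math.exp(-z * z / 2.0) / (math.sqrt(2.0 * math.pi) * phi_shift)

def dichotomic_root(f, lo, hi, tol=1e-9):
    # f is strictly decreasing on [lo, hi] with f(lo) > 0 > f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

z_bar = dichotomic_root(rho_tilde, 1.0, math.sqrt(2.0))
print(round(z_bar, 6))  # expected close to 1.175461
```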
1.3 Proposition A3
For all \((\varepsilon ,y) \in D\) defined by Eq. (16), the following bounds are valid:
Proof
Denoting \(s=\gamma +\frac{y}{\alpha +\varepsilon }\), \(t=\frac{b-a-y}{\alpha +\varepsilon }-\gamma \) for conciseness, simple calculation leads to:
Since, on D, \(y \ge 0\) and \((b-a-y) \ge 0\), both terms in the numerator of the above expression are nonpositive, and from this \(\frac{\partial \varphi (\varepsilon ,y)}{\partial \varepsilon } \le 0\) follows.
To get the lower bound, we use the fact that, for any \((\varepsilon ,y) \in D\), the double inequality \(t \ge s \ge \gamma \), holds, therefore:
- both terms \(-y e^{\frac{-s^{2}}{2}}\) and \(-(b-a-y)e^{\frac{-t^{2}}{2}}\) can be bounded from below by \(-y e^{\frac{-\gamma ^{2}}{2}}\) and \(-(b-a-y)e^{\frac{-\gamma ^{2}}{2}}\) respectively;
- \(\varPhi (s)+\varPhi (t)-1\) can be bounded from below by \(2\varPhi (\gamma )-1\);
- \((\alpha +\varepsilon )^{2}\) can be bounded from below by \(\alpha ^{2}\).
From this, the lower bound given in (32) follows.
Now, \(\frac{\partial \varphi (\varepsilon ,y)}{\partial y}\) can be written:
The zero lower bound in (33) follows from \(s \le t\), and the upper bound in (33) follows from the fact that \(e^{\frac{-s^{2}}{2}}-e^{\frac{-t^{2}}{2}}\) can be bounded from above with \(e^{\frac{-\gamma ^{2}}{2}}\), and \(\varPhi (s)+\varPhi (t)-1 \ge 2\varPhi (\gamma )-1\) and \(\alpha +\varepsilon \ge \alpha \). \(\square \)
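The partial derivatives used above can be cross-checked numerically. The snippet below assumes \(\varphi (\varepsilon ,y)=\ln \left( \varPhi (s)+\varPhi (t)-1\right) \) with \(s,t\) as in the proof (a reconstruction, since the definition of \(\varphi \) appears earlier in the paper and is not reproduced here); the values of \(\alpha \), \(\gamma \), \(b-a\) and the test point are arbitrary illustrative choices. It compares the closed-form partials with central finite differences and checks the signs stated in (32)–(33).

```python
import math

# illustrative (hypothetical) problem data and a test point assumed to lie in D
ALPHA, GAMMA, BMA = 1.0, 0.5, 3.0  # BMA stands for b - a
EPS, Y = 0.2, 0.5

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(eps, y):
    # assumed phi(eps, y) = ln(Phi(s) + Phi(t) - 1), with s, t as in the proof
    s = GAMMA + y / (ALPHA + eps)
    t = (BMA - y) / (ALPHA + eps) - GAMMA
    return math.log(Phi(s) + Phi(t) - 1.0)

def grad_phi(eps, y):
    # closed-form partials matching the expressions in the proof
    s = GAMMA + y / (ALPHA + eps)
    t = (BMA - y) / (ALPHA + eps) - GAMMA
    den = math.sqrt(2.0 * math.pi) * (Phi(s) + Phi(t) - 1.0)
    d_eps = (-y * math.exp(-s * s / 2.0)
             - (BMA - y) * math.exp(-t * t / 2.0)) / (den * (ALPHA + eps) ** 2)
    d_y = (math.exp(-s * s / 2.0) - math.exp(-t * t / 2.0)) / (den * (ALPHA + eps))
    return d_eps, d_y

h = 1e-6
fd_eps = (phi(EPS + h, Y) - phi(EPS - h, Y)) / (2.0 * h)
fd_y = (phi(EPS, Y + h) - phi(EPS, Y - h)) / (2.0 * h)
d_eps, d_y = grad_phi(EPS, Y)
print(abs(d_eps - fd_eps) < 1e-6, abs(d_y - fd_y) < 1e-6, d_eps <= 0.0 <= d_y)
```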
1.4 Proposition A4
Let \(\varOmega =\left\{ \left( \begin{array}{c} u \\ v \end{array} \right) : -B_{\varepsilon }^{+} \le u \le -B_{\varepsilon }^{-}, \; -B_{y}^{+} \le v \le -B_{y}^{-} \right\} \) and let \(\left( \begin{array}{cc} {\hat{\varepsilon }}, {\hat{y}} \end{array} \right) ^{T}\) be any given point in D. Then, for any \(\left( \begin{array}{cc} u, v \end{array} \right) ^{T} \notin \varOmega \), there exists \(\left( \begin{array}{cc} {\bar{u}}, {\bar{v}} \end{array} \right) ^{T} \in \varOmega \) such that:
As a consequence, the minimum value of the function \(\varphi ^{\sharp }(u,v)- u{\hat{\varepsilon }}-v{\hat{y}}\) for all \(\left( \begin{array}{cc} u, v \end{array} \right) ^{T} \in {\mathbb {R}}^2\) is necessarily attained at some point of \(\varOmega \).
Proof
Eight distinct cases have to be considered, depending on the values taken by u and v with respect to the values \(B_{\varepsilon }^{+}, B_{\varepsilon }^{-}, B_{y}^{+}, B_{y}^{-}\). The first five cases are listed below, and in each case, it will be seen that the corresponding value of \(\left( \begin{array}{cc} {\bar{u}}, {\bar{v}} \end{array} \right) ^{T}\) is the orthogonal projection of \(\left( \begin{array}{cc} u, v \end{array} \right) ^{T}\) on \(\varOmega \).
- Case 1: \(-B_{\varepsilon }^{-} \le u\) and \(v \le -B_{y}^{+}\); \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) =\left( \begin{array}{c} -B_{\varepsilon }^{-} \\ -B_{y}^{+} \end{array} \right) \);
- Case 2: \(u \in \left[ -B_{\varepsilon }^{+},-B_{\varepsilon }^{-} \right] \) and \(v \le -B_{y}^{+}\); \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) =\left( \begin{array}{c} u \\ -B_{y}^{+} \end{array} \right) \);
- Case 3: \(u \le -B_{\varepsilon }^{+} \) and \(v \le -B_{y}^{+}\); \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) =\left( \begin{array}{c} -B_{\varepsilon }^{+} \\ -B_{y}^{+} \end{array} \right) \);
- Case 4: \(u \le -B_{\varepsilon }^{+} \) and \(v \in \left[ -B_{y}^{+}, -B_{y}^{-} \right] \); \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) =\left( \begin{array}{c} -B_{\varepsilon }^{+} \\ v \end{array} \right) \);
- Case 5: \(u \le -B_{\varepsilon }^{+}\) and \(-B_{y}^{-} \le v\); \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) =\left( \begin{array}{c} -B_{\varepsilon }^{+} \\ -B_{y}^{-} \end{array} \right) \).
In each of the above cases 1–5, let \(\left( \begin{array}{cc} {\bar{\varepsilon }}, {\bar{y}} \end{array} \right) ^{T}\) denote a point in D at which the function \(\varphi (\varepsilon ,y)+{\bar{u}}\varepsilon +{\bar{v}}y\) attains its maximum value, so that:
It is easily checked that in all cases 1–5, \(\left( \begin{array}{cc} {\bar{\varepsilon }}, {\bar{y}} \end{array} \right) ^{T}\) lies on the boundary of D. To be more precise, let O, P, Q, N denote the four extreme points of D, with respective coordinates:
(where \(y_{\max }=\frac{b-a}{2}-\gamma \alpha \)). Then, it is observed that:
- in case 1, \(\left( \begin{array}{cc} {\bar{\varepsilon }}, {\bar{y}} \end{array} \right) ^{T}\) coincides with N. (This follows from the fact that the gradient of the function \(\varphi (\varepsilon ,y)+{\bar{u}}\varepsilon +{\bar{v}}y\), equal to \( \left[ \begin{array}{cc} \frac{\partial \varphi (\varepsilon ,y)}{\partial \varepsilon } +{\bar{u}} ,&\, \frac{\partial \varphi (\varepsilon ,y)}{\partial y} +{\bar{v}} \end{array} \right] ^{T}\), has in case 1 a first component everywhere positive on D and a second component everywhere negative on D; thus, no \(\left( \begin{array}{cc} \varepsilon , y \end{array} \right) ^{T} \in D\) can maximize the above function except \(\left( \begin{array}{c} {\bar{\varepsilon }} \\ {\bar{y}} \end{array} \right) = \left( \begin{array}{c} \beta \\ 0 \end{array} \right) \), which corresponds to N);
- similarly, in case 2, \(\left( \begin{array}{cc} {\bar{\varepsilon }}, {\bar{y}} \end{array} \right) ^{T}\) is located on the segment \(\left[ O, N \right] \); in case 3, it coincides with O; in case 4, it is located on the segment \(\left[ O, P \right] \); in case 5, it coincides with P.
Now, since \(\varphi ^{\sharp }(u,v)\) is defined as the maximum value over \(\left( \begin{array}{cc} \varepsilon , y \end{array} \right) ^{T} \in D\) of \(\varphi (\varepsilon , y)+u\varepsilon +vy\), it holds:
and, using (36), the following inequality can be readily deduced from the above:
As a consequence, (35) holds as soon as it can be proved that:
It is easily checked that (37) indeed holds in each of the cases 1–5. For instance, in case 1 it holds: \(u \ge {\bar{u}}, \, {\bar{\varepsilon }}=\beta \ge {\hat{\varepsilon }}, \, v \le {\bar{v}}, \, {\hat{y}} \ge {\bar{y}}=0\), from which (37) is deduced.
Similarly, in case 2, it holds: \({\bar{u}}=u, \; v \le {\bar{v}}\), and \({\hat{y}} \ge {\bar{y}}=0\), from which (37) is also deduced.
For cases 3–5, similar reasoning would lead to the same conclusion.
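In cases 1–5, the orthogonal projection on the box \(\varOmega \) amounts to clamping each coordinate independently, as the following sketch illustrates (the numerical values of \(B_{\varepsilon }^{+}, B_{\varepsilon }^{-}, B_{y}^{+}, B_{y}^{-}\) are hypothetical, chosen only for illustration):

```python
def project_on_box(u, v, u_lo, u_hi, v_lo, v_hi):
    # orthogonal projection of (u, v) on [u_lo, u_hi] x [v_lo, v_hi]:
    # clamp each coordinate independently
    return (min(max(u, u_lo), u_hi), min(max(v, v_lo), v_hi))

# hypothetical bound values for B_eps^+, B_eps^-, B_y^+, B_y^-
B_eps_p, B_eps_m, B_y_p, B_y_m = 2.0, 0.5, 1.5, 0.3
box = (-B_eps_p, -B_eps_m, -B_y_p, -B_y_m)  # Omega = [-2.0, -0.5] x [-1.5, -0.3]

case1 = project_on_box(1.0, -3.0, *box)   # u >= -B_eps^-, v <= -B_y^+
case3 = project_on_box(-5.0, -3.0, *box)  # u <= -B_eps^+, v <= -B_y^+
case4 = project_on_box(-5.0, -1.0, *box)  # u <= -B_eps^+, v in [-B_y^+, -B_y^-]
print(case1, case3, case4)
```

Each result matches the corresponding \(\left( {\bar{u}}, {\bar{v}} \right) ^{T}\) listed above: case 1 gives \((-B_{\varepsilon }^{-}, -B_{y}^{+})\), case 3 gives \((-B_{\varepsilon }^{+}, -B_{y}^{+})\), and case 4 keeps the v coordinate unchanged.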
Let us now turn to the last three cases (6–8). In order to define these cases precisely, we have to consider the function \(\sigma : \left[ 0, \beta \right] \rightarrow {\mathbb {R}}\) of \(\varepsilon \), the values of which are those of \(\varphi (\varepsilon ,y)+u\varepsilon +vy\) for all \(\left( \begin{array}{cc} \varepsilon , y \end{array} \right) ^{T}\) belonging to the segment \(\left[ P,Q \right] \). Its analytic expression is given by Eq. (21) in the proof of Proposition 2, where it is shown that either \(\sigma \) is concave on \(\left[ 0, \beta \right] \), or there exists a value \(\varepsilon ^{0} \in \left[ 0, \right. \beta \left[ \right. \) such that \(\sigma \) is concave on \(\left[ 0, \varepsilon ^{0} \right] \) and convex on \(\left[ \varepsilon ^{0}, \beta \right] \). From this, it follows that there exists \({\tilde{\varepsilon }} \in \left[ 0, \beta \right] \) such that:
- if \(u-\gamma v \in \left[ \delta _{{\mathrm{min}}}, \; \delta _{\max }\right] \triangleq \left[ -\frac{d \sigma }{d \varepsilon }(0); -\frac{d \sigma }{d \varepsilon }({\tilde{\varepsilon }}) \right] \), the maximum value of \(\sigma \) over \(\left[ 0, \beta \right] \) is attained for \({\bar{\varepsilon }} \in \left[ 0, {\tilde{\varepsilon }} \right] \);
- if \(u-\gamma v > -\frac{d \sigma }{d \varepsilon }({\tilde{\varepsilon }})=\delta _{\max }\), the maximum value of \(\sigma \) over \(\left[ 0, \beta \right] \) is attained for \({\bar{\varepsilon }}=\beta \);
- if \(u-\gamma v < -\frac{d \sigma }{d \varepsilon }(0)=\delta _{{\mathrm{min}}}\), it is attained for \({\bar{\varepsilon }}=0\).
(we recall that \(\sigma \) is a decreasing function of \(\varepsilon \), so that \(0 \le \delta _{{\mathrm{min}}} \le \delta _{\max }).\)
In view of the above:
- Case 6 arises when \((-B_{\varepsilon }^{-} \le u) \; \vee \; (-B_{y}^{-} \le v)\) and \(u-\gamma v < \delta _{{\mathrm{min}}}\) (\(\vee \) denotes the logical connector “OR”).
- Case 7 arises when \((-B_{\varepsilon }^{-} \le u) \; \vee \; (-B_{y}^{-} \le v)\) and \(u-\gamma v \in \left[ \delta _{{\mathrm{min}}}, \; \delta _{\max }\right] \).
- Case 8 arises when \((-B_{\varepsilon }^{-} \le u) \; \vee \; (-B_{y}^{-} \le v)\) and \(u-\gamma v > \delta _{\max }\).
In all three cases 6–8, \(\left( \begin{array}{cc} {\bar{u}}, {\bar{v}} \end{array} \right) ^{T}\) is defined as follows: if L(u, v) denotes the line in \(\left( \begin{array}{cc} u, v \end{array} \right) ^{T}\) space defined as: \(L(u,v)=\left\{ \left( \begin{array}{c} u' \\ v' \end{array} \right) : u'-\gamma v'=u-\gamma v\right\} \), then \(\left( \begin{array}{cc} {\bar{u}}, {\bar{v}} \end{array} \right) ^{T}\) is the point closest to \(\left( \begin{array}{cc} u, v \end{array} \right) ^{T}\) in \(L(u,v) \bigcap \varOmega \).
For instance, if \(u-\gamma v-\gamma B_{y}^{-} \le -B_{\varepsilon }^{-}\), then \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) = \left( \begin{array}{c} u-\gamma v -\gamma B_{y}^{-}\\ -B_{y}^{-} \end{array} \right) \).
If \(u-\gamma v-\gamma B_{y}^{-} > -B_{\varepsilon }^{-}\), then \(\left( \begin{array}{c} {\bar{u}} \\ {\bar{v}} \end{array} \right) = \left( \begin{array}{c} -B_{\varepsilon }^{-}\\ -(B_{\varepsilon }^{-}+u-\gamma v)/\gamma \end{array} \right) \).
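The two formulas above can be checked to preserve the defining invariant of L(u, v), namely \(u'-\gamma v'=u-\gamma v\). A minimal sketch, with hypothetical values for \(\gamma \), \(B_{\varepsilon }^{-}\) and \(B_{y}^{-}\) (only the two formulas from the text are coded, not the full case analysis):

```python
def bar_point(u, v, gamma, B_eps_m, B_y_m):
    # the two formulas from the text, selecting the point of L(u,v)
    # intersected with Omega that is closest to (u, v) in this subcase
    c = u - gamma * v
    if c - gamma * B_y_m <= -B_eps_m:
        return (c - gamma * B_y_m, -B_y_m)
    return (-B_eps_m, -(B_eps_m + c) / gamma)

gamma, B_eps_m, B_y_m = 1.0, 0.5, 0.3  # hypothetical values
invariants = []
for (u, v) in [(-1.2, -0.1), (0.4, -0.2)]:
    ub, vb = bar_point(u, v, gamma, B_eps_m, B_y_m)
    # invariant: (ub, vb) stays on the line u' - gamma v' = u - gamma v
    invariants.append(abs((ub - gamma * vb) - (u - gamma * v)) < 1e-12)
print(all(invariants))
```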
To illustrate the type of reasoning leading to show that (37) [hence (35)] holds, let us consider Case 6, assuming that:
In that case, \({\bar{\varepsilon }}=0\) and \({\bar{y}}=y_{\max }\). Thus:
so that the sum of these two quantities reads:
Since, in case 6 with \(\left( \begin{array}{cc} {\bar{u}}, {\bar{v}} \end{array} \right) ^{T}\) as defined above, we have \(v \ge -B_{y}^{-}\), and since \(\left( \begin{array}{cc} {\hat{\varepsilon }}, {\hat{y}} \end{array} \right) ^{T} \in D\) implies \(y_{\max }-\gamma {\hat{\varepsilon }}-{\hat{y}} \ge 0\), (37) holds, thus proving (35).
The analysis of cases 7 and 8 could be carried out in a similar way, leading to the same conclusion.
This completes the proof of Proposition A4.\(\square \)
Minoux, M., Zorgati, R. Sharp upper and lower bounds for maximum likelihood solutions to random Gaussian bilateral inequality systems. J Glob Optim 75, 735–766 (2019). https://doi.org/10.1007/s10898-019-00756-3