
A certified Branch & Bound approach for reliability-based optimization problems

Journal of Global Optimization

Abstract

Reliability-based optimization problems are optimization problems with a constraint that measures the reliability of the modelled system: the probability of a safety event with respect to controllable decision variables and uncertain random variables. Most solution approaches use approximate techniques for evaluating this reliability constraint. As a consequence, the reliability of the computed optimal decision is not guaranteed. In this paper, we investigate an interval-based Branch & Bound algorithm for solving reliability-based optimization problems globally with numerical guarantees. It combines an interval Branch & Bound framework with a certified reliability analysis technique. This technique considers the reliability constraint and the induced safety region modelled within the Probabilistic Continuous Constraint Programming paradigm. The certified reliability analysis is handled numerically by an interval quadrature algorithm. In addition, a new interval quadrature function for two random variables, based on linear models of the safety region, is described. Two implementations of the Branch & Bound, which differ in how the certified reliability analysis is handled throughout the optimization process, are presented. A numerical study of these two variants shows the relevance of the interval linear model-based quadrature function.


Notes

  1. Vector comparison must be understood component-wise.

  2. Assuming \(a_1 > 0\). If \(a_1 < 0\), the range of \(x_1\) is from \(-(b+a_2 x_2)/a_1\) to \(\overline{x}_1\). If \(a_1 = 0\), then \(x_1\) and \(x_2\), and the induced coefficients of the linear model, can be swapped (see the short worked example after these notes).

  3. Otherwise, either all decision boxes in \(\mathcal {S}_{ out }\) are degenerate and reliable, and one of them is a global optimum, or the global upper bound and global lower bound are equal, meaning a globally optimal solution has been found.

  4. Taking \(\varvec{y}' \subseteq \varvec{y}\), each random box that is inner or outer with respect to \(\varvec{g}\) and \(\varvec{y}\) is also inner or outer for \(\varvec{y}'\). Only boundary boxes for \(\varvec{y}\) can have a different status for \(\varvec{y}'\).

  5. This is because the total number of iterations performed by the quadrature algorithm is proportional to the height of the decision boxes, due to the sharing.

  6. http://ben-martin.fr/files/publications/materials/RBO/detailedRBOExperiments.pdf.
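A short worked reading of note 2, assuming the linear model of the safety region has the form \(a_1 x_1 + a_2 x_2 + b \le 0\) on the box \(\varvec{x}_1\times \varvec{x}_2\) (an assumption inferred from the note; the exact form is given in the main text): for a fixed \(x_2\), the constraint bounds \(x_1\) by \(-(b + a_2 x_2)/a_1\), so

$$\begin{aligned} a_1> 0:\; x_1 \in \left[ \underline{x}_1,\; -\frac{b + a_2 x_2}{a_1}\right] , \qquad a_1 < 0:\; x_1 \in \left[ -\frac{b + a_2 x_2}{a_1},\; \overline{x}_1\right] , \end{aligned}$$

each intersected with \(\varvec{x}_1\); when \(a_1 = 0\), the roles of \(x_1\) and \(x_2\) are swapped as stated in the note.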

References

  1. Aoues, Y., Chateauneuf, A.: Benchmark study of numerical methods for reliability-based design optimization. Struct. Multidiscipl. Optim. 41(2), 277–294 (2010)

  2. Benhamou, F., Goualard, F., Granvilliers, L., Puget, J.-F.: Revising hull and box consistency. In: International Conference on Logic Programming, pp. 230–244. MIT Press (1999)

  3. Benhamou, F., McAllister, D., Van Hentenryck, P.: CLP (Intervals) revisited. In: International Symposium on Logic Programming, pp. 124–138 (1994)

  4. Carvalho, E.: Probabilistic Constraint Reasoning. Ph.D. Thesis, Universidade Nova de Lisboa (2012)

  5. Carvalho, E., Cruz, J., Barahona, P.: Safe reliability assessment through probabilistic constraint reasoning. In: Nowakowski, T., et al. (eds.) Safety and Reliability: Methodology and Applications, pp. 2269–2277. CRC Press, Boca Raton (2015)

  6. Cheng, G., Xu, L., Jiang, L.: A sequential approximate programming strategy for reliability-based structural optimization. Comput. Struct. 84(21), 1353–1367 (2006)

  7. Deb, K., Gupta, S., Daum, D., Branke, J., Mall, A.K., Padmanabhan, D.: Reliability-based optimization using evolutionary algorithms. IEEE Trans. Evol. Comput. 13(5), 1054–1074 (2009)

  8. Enevoldsen, I., Sørensen, J.D.: Reliability-based optimization in structural engineering. Struct. Saf. 15(3), 169–196 (1994)

  9. Goldsztejn, A., Cruz, J., Carvalho, E.: Convergence analysis and adaptive strategy for the certified quadrature over a set defined by inequalities. J. Comput. Appl. Math. 260, 543–560 (2014)

  10. Goualard, F.: GAOL 3.1.1: Not Just Another Interval Arithmetic Library, 4.0th edn. Laboratoire d’Informatique de Nantes-Atlantique, Nantes (2006)

  11. Granvilliers, L., Benhamou, F.: Algorithm 852: RealPaver: an interval solver using constraint satisfaction techniques. ACM Trans. Math. Softw. 32(1), 138–156 (2006)

  12. Hansen, E., Walster, G.W.: Global Optimization Using Interval Analysis—Revised and Expanded. CRC Press, Boca Raton (2003)

  13. Jaulin, L., Kieffer, M., Didrit, O., Walter, E.: Applied Interval Analysis with Examples in Parameter and State Estimation, Robust Control and Robotics. Springer, Berlin (2001)

  14. Kearfott, R.B.: Interval computations: introduction, uses, and resources. Euromath Bull. 2(1), 95–112 (1996)

  15. Kearfott, R.B.: Rigorous Global Search: Continuous Problems. Kluwer Academic Publishers, Dordrecht (1996)

  16. Kuschel, N., Rackwitz, R.: Two basic problems in reliability-based structural optimization. Math. Methods Oper. Res. 46(3), 309–333 (1997)

  17. Lhomme, O.: Consistency techniques for numeric CSPs. In: International Joint Conference on Artificial Intelligence, pp. 232–238 (1993)

  18. Maranas, C.D., Floudas, C.A.: Global optimization in generalized geometric programming. Comput. Chem. Eng. 21(4), 351–369 (1997)

  19. Martin, B., Goldsztejn, A., Granvilliers, L., Jermann, C.: Certified parallelotope continuation for one-manifolds. SIAM J. Numer. Anal. 51(6), 3373–3401 (2013)

  20. Moore, R.: Interval Analysis. Prentice-Hall, Englewood Cliffs (1966)

  21. Neumaier, A.: Interval Methods for Systems of Equations. Cambridge University Press, Cambridge (1991)

  22. Neumaier, A.: Complete search in continuous global optimization and constraint satisfaction. Acta Numer. 13, 271–369 (2004)

  23. Oliemann, N.J.: Methods for robustness programming. Ph.D. Thesis, Wageningen University (2008)

  24. Rahman, S., Wei, D.: Design sensitivity and reliability-based structural optimization by univariate decomposition. Struct. Multidiscipl. Optim. 35(3), 245–261 (2008)

  25. Trombettoni, G., Araya, I., Neveu, B., Chabert, G.: Inner regions and interval linearizations for global optimization. In: AAAI Conference on Artificial Intelligence (2011)

  26. Valdebenito, M.A., Schuëller, G.I.: A survey on approaches for reliability-based optimization. Struct. Multidiscipl. Optim. 42(5), 645–663 (2010)

  27. Van Hentenryck, P., Michel, L., Deville, Y.: Numerica: A Modeling Language for Global Optimization. MIT Press, Cambridge (1997)

  28. Youn, B.D., Choi, K.K., Yang, R.-J., Gu, L.: Reliability-based design optimization for crashworthiness of vehicle side impact. Struct. Multidiscipl. Optim. 26(3–4), 272–283 (2004)

  29. Zhang, Y.M., He, X.D., Liu, Q.L., Wen, B.C.: An approach of robust reliability design for mechanical components. Proc. Inst. Mech. Eng. Part E J. Process Mech. Eng. 219(3), 275–283 (2005)


Acknowledgements

The authors are thankful to the Portuguese Foundation for Science and Technology for having granted this work through the project PROCURE (Probabilistic Constraints for Uncertainty Reasoning in Science and Engineering Applications), Ref. PTDC/EEI-CTP/1403/2012. The authors are also thankful to the anonymous referees for their useful remarks, which improved the quality of the paper.

Corresponding author

Correspondence to Benjamin Martin.

Appendices

Appendix 1: Proof of convergence of the quadrature algorithm

We state here the theorem of convergence of the quadrature algorithm over a decision box domain. This theorem is used to prove Corollary 1. To do so, we use the results from [9] and first introduce the necessary notation.

Let \(\mathcal {H}_{\Box }^k(\varvec{y})\) be the set of boxes \(\varvec{x}\) maintained by Algorithm 1 at iteration k for the quadrature with respect to a decision box \(\varvec{y}\). Denote by \(\mathcal {H}_{\Box }'^k(\varvec{y})\) the set \(\{ \varvec{x}\in \mathcal {H}_{\Box }^k(\varvec{y}): \mathrm {wid}(\varvec{I}^{\varvec{y},\varvec{g}}_{g,{\varPhi }}(\varvec{x})) > 0\}\). The set \(\mathcal {H}_{\Box }^k(\varvec{y})\) is decomposed into boundary boxes \(\mathcal{B}_k(\varvec{y})\) and inner boxes \(\mathcal{L}_k(\varvec{y})\), and \(\mathcal{B}'_k(\varvec{y})\), \(\mathcal{L}'_k(\varvec{y})\) denote, respectively, \(\mathcal{B}_k(\varvec{y}) \cap \mathcal {H}_{\Box }'^k(\varvec{y})\) and \(\mathcal{L}_k(\varvec{y}) \cap \mathcal {H}_{\Box }'^k(\varvec{y})\).

Additionally, we denote by \(\epsilon '_k(\varvec{y})\) the value

$$\begin{aligned} \epsilon '_k(\varvec{y}):= \displaystyle \max _{\varvec{x}\in \mathcal {H}_{\Box }'^k(\varvec{y})} \mathrm {wid}(\varvec{x}). \end{aligned}$$
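To fix ideas, the following minimal Python sketch (illustrative helper names, not the authors' implementation) shows the box quantities used throughout this appendix: the width and volume of a box, and the classification of a random box as inner, boundary or outer with respect to an interval enclosure of the constraints \(g(x,y)\le 0\), which is the classification underlying the sets \(\mathcal{B}_k(\varvec{y})\) and \(\mathcal{L}_k(\varvec{y})\) above.

```python
# Illustrative sketch only (hypothetical helper names, not the paper's code):
# boxes are lists of closed intervals; classify_box mirrors the inner /
# boundary / outer status of a random box w.r.t. the constraints g(x, y) <= 0.
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def width(self) -> float:
        return self.hi - self.lo

def box_width(box: List[Interval]) -> float:
    # wid(x): the largest component width of the box x.
    return max(iv.width() for iv in box)

def box_volume(box: List[Interval]) -> float:
    # vol(x): the product of the component widths of x.
    v = 1.0
    for iv in box:
        v *= iv.width()
    return v

def classify_box(g_ext: Callable[[List[Interval]], List[Interval]],
                 box: List[Interval]) -> str:
    """Classify a random box from an interval enclosure of g on it
    (the decision box y is assumed fixed inside g_ext)."""
    enclosures = g_ext(box)
    if all(e.hi <= 0.0 for e in enclosures):
        return "inner"     # certainly inside the safety region
    if any(e.lo > 0.0 for e in enclosures):
        return "outer"     # certainly outside the safety region
    return "boundary"      # undecided
```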

Recall that \({\varPhi }\) is positive, continuously differentiable everywhere and bounded on \(\varvec{x}^\mathrm {init}\). This entails that the natural interval extension \(\varvec{{\varPhi }}\) is convergent, and that \(\varvec{{\varPhi }}(\varvec{x})\) is bounded.

Using notations from [9], the excess of a quadrature inclusion function \(\varvec{I}^{\varvec{y}}_{g,{\varPhi }}(\varvec{x})\) is defined by

$$\begin{aligned} \mathrm {exc}(\varvec{I}^{\varvec{y}}_{g,{\varPhi }}(\varvec{x})) := \frac{\mathrm {wid}(\varvec{I}^{\varvec{y}}_{g,{\varPhi }}(\varvec{x}))}{\mathrm {vol}(\varvec{x})}. \end{aligned}$$
(39)

A quadrature inclusion function \(\varvec{I}^{\varvec{y}}_{g,{\varPhi }}\) is weakly convergent inside \(\varvec{x}^\mathrm {init}\) if there exists a \(c>0\) such that \(\mathrm {exc}(\varvec{I}^{\varvec{y}}_{g,{\varPhi }}(\varvec{x})) \le c\) for any \(\varvec{x}\subseteq \varvec{x}^\mathrm {init}\). It is convergent if any sequence \((\varvec{x}^k)_{k\in \mathbb {N}}\) with \(\varvec{x}^k \subseteq \varvec{x}^\mathrm {init}\) satisfies

$$\begin{aligned} \lim _{k\rightarrow \infty } \mathrm {wid}(\varvec{x}^k) = 0 \implies \lim _{k\rightarrow \infty } \mathrm {exc}(\varvec{I}^{\varvec{y}}_{g,{\varPhi }}(\varvec{x}^k)) = 0. \end{aligned}$$

Denote

$$\begin{aligned} \overline{r}_{\epsilon } := \sup \{\mathrm {exc}(\varvec{I}^{\varvec{y}}_{g,{\varPhi }}(\varvec{x})): \varvec{x}\subseteq \varvec{x}^\mathrm {init}, \mathrm {wid}(\varvec{x})\le \epsilon \}, \end{aligned}$$

then, if \(\varvec{I}^{\varvec{y}}_{g,{\varPhi }}\) is convergent, \(\lim _{\epsilon \rightarrow 0} \overline{r}_{\epsilon } = 0\).
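As a simple illustration of these definitions (not an example taken from [9]): suppose the density is constant, \({\varPhi }\equiv c > 0\), and consider the crude enclosure \([0,c]\,\mathrm {vol}(\varvec{x})\) of the integral of \({\varPhi }\) over the safe part of a box \(\varvec{x}\). Its excess is

$$\begin{aligned} \mathrm {exc}([0,c]\,\mathrm {vol}(\varvec{x})) = \frac{c\,\mathrm {vol}(\varvec{x})}{\mathrm {vol}(\varvec{x})} = c \end{aligned}$$

for every box, so this enclosure is weakly convergent but not convergent. In contrast, the enclosure \([c,c]\,\mathrm {vol}(\varvec{x})\), valid whenever \(\varvec{x}\) lies entirely inside the safety region, has zero width and hence zero excess, so it is trivially convergent.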

A \(\varvec{g}\)-convergent quadrature inclusion function is one whose quadrature inclusion for boundary boxes is at least weakly convergent and whose quadrature inclusion for inner boxes is convergent; this is the case of (11), as shown in [9].
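The next sketch, a continuation of the previous one (it reuses Interval, box_volume and classify_box; again hypothetical names, not the paper's Algorithm 1), shows how such boundary and inner enclosures could be accumulated into an interval bound on the probability integral: boundary boxes contribute the weakly convergent enclosure \(\Box ([0,0] \cup \varvec{{\varPhi }}(\varvec{x}))\,\mathrm {vol}(\varvec{x})\), while inner boxes contribute a tighter enclosure, here a deliberately naive \(\varvec{{\varPhi }}(\varvec{x})\,\mathrm {vol}(\varvec{x})\) standing in for \(\varvec{T}_{{\varPhi }}(\varvec{x})\).

```python
# Continuation of the previous sketch (uses Interval, box_volume, classify_box).
# Accumulates an interval enclosure of the probability integral in the spirit
# of the sum defining P(y); this is an illustration, not the paper's T_Phi.
def quadrature_enclosure(phi_ext, g_ext, boxes):
    lo_sum, hi_sum = 0.0, 0.0
    for box in boxes:
        status = classify_box(g_ext, box)
        if status == "outer":
            continue                  # contributes exactly zero
        vol = box_volume(box)
        phi = phi_ext(box)            # interval enclosure of Phi on the box
        if status == "boundary":
            # hull([0,0] union Phi(x)) * vol(x): lower bound 0 (Phi positive)
            hi_sum += max(phi.hi, 0.0) * vol
        else:
            # inner box: naive enclosure Phi(x) * vol(x) of the integral of Phi
            lo_sum += phi.lo * vol
            hi_sum += phi.hi * vol
    return Interval(lo_sum, hi_sum)
```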

Theorem 3

(Convergence) Let \((\varvec{y}^k)_{k\in \mathbb {N}}\) be an infinite convergent sequence of boxes included in \(\varvec{y}^\mathrm {init}\) with the decision \(\hat{y} \in \varvec{y}^k\) for all \(k\in \mathbb {N}\). Given that g is continuous, that \(\varvec{g}\) is convergent, and that \(\mathcal {H}_0(y):=\{x\in \varvec{x}^\mathrm {init}:g(x, y)\le 0,\ \exists i \; g_i(x, y) = 0\}\) satisfies \(\mathrm {vol}(\mathcal {H}_0(\hat{y})) = 0\), Algorithm 1 with a \(\varvec{g}\)-convergent quadrature function such as (11) satisfies:

$$\begin{aligned} \displaystyle \lim _{k\rightarrow \infty } \epsilon '_{\mu (k)}(\varvec{y}^k) = 0 \implies \displaystyle \lim _{k\rightarrow \infty } \mathrm {wid}(\varvec{P}_{\mu (k)}(\varvec{y}^k)) = 0, \end{aligned}$$
(40)

with \(\mu :\mathbb {N}\rightarrow \mathbb {N}\), \(\lim _{k\rightarrow \infty } \mu (k) = \infty \).

Proof

Recall that,

$$\begin{aligned} \varvec{P}_{\mu (k)}(\varvec{y}^k) :=&\sum _{\varvec{x}\in \mathcal {H}_{\Box }^{\mu (k)}(\varvec{y}^k)} \varvec{I}^{\varvec{y}^k,\varvec{g}}_{g,{\varPhi }}(\varvec{x}) \end{aligned}$$
(41)
$$\begin{aligned} =&\sum _{\varvec{x}\in \mathcal{B}_{\mu (k)}(\varvec{y}^k)} \Box ([0,0] \cup \varvec{{\varPhi }}(\varvec{x}))\mathrm {vol}(\varvec{x}) + \sum _{\varvec{x}\in \mathcal{L}_{\mu (k)}(\varvec{y}^k)} \varvec{T}_{{\varPhi }}(\varvec{x}). \end{aligned}$$
(42)

Therefore,

$$\begin{aligned} \mathrm {wid}(\varvec{P}_{\mu (k)}(\varvec{y}^k))&= \sum _{\varvec{x}\in \mathcal{B}_{\mu (k)}(\varvec{y}^k)} \mathrm {wid}(\Box ([0,0] \cup \varvec{{\varPhi }}(\varvec{x}))\mathrm {vol}(\varvec{x})) + \sum _{\varvec{x}\in \mathcal{L}_{\mu (k)}(\varvec{y}^k)} \mathrm {wid}(\varvec{T}_{{\varPhi }}(\varvec{x})) \end{aligned}$$
(43)
$$\begin{aligned}&= \sum _{\varvec{x}\in \mathcal{B}'_{\mu (k)}(\varvec{y}^k)} \mathrm {wid}(\Box ([0,0] \cup \varvec{{\varPhi }}(\varvec{x})))\mathrm {vol}(\varvec{x}) + \sum _{\varvec{x}\in \mathcal{L}'_{\mu (k)}(\varvec{y}^k)} \mathrm {wid}(\varvec{T}_{{\varPhi }}(\varvec{x})). \end{aligned}$$
(44)

The second summation satisfies

$$\begin{aligned} \sum _{\varvec{x}\in \mathcal{L}'_{\mu (k)}(\varvec{y}^k)} \mathrm {wid}(\varvec{T}_{{\varPhi }}(\varvec{x})) \le \overline{r}_{\epsilon '_{\mu (k)}(\varvec{y}^k)} \sum _{\varvec{x}\in \mathcal{L}'_{\mu (k)}(\varvec{y}^k)} \mathrm {vol}(\varvec{x}) \le \overline{r}_{\epsilon '_{\mu (k)}(\varvec{y}^k)} \mathrm {vol}(\varvec{x}^\mathrm {init}), \end{aligned}$$
(45)

noting that \(\mathrm {wid}(\varvec{T}_{{\varPhi }}(\varvec{x})) = \mathrm {exc}(\varvec{T}_{{\varPhi }}(\varvec{x}))\, \mathrm {vol}(\varvec{x})\), and that the excess is at most the maximum excess \(\overline{r}_{\epsilon '_{\mu (k)}(\varvec{y}^k)}\). Since \(\epsilon '_{\mu (k)}(\varvec{y}^k)\) converges to zero as \(\mu (k)\) (i.e. k) tends to infinity, \(\overline{r}_{\epsilon '_{\mu (k)}(\varvec{y}^k)}\) converges to zero due to the convergence of the quadrature inclusion \(\varvec{T}_{{\varPhi }}\). The limit of the sum is hence zero.

We are left with checking the limit of the first summation. Since the quadrature inclusion \(\Box ([0,0] \cup \varvec{{\varPhi }}(\varvec{x}))\mathrm {vol}(\varvec{x})\) is weakly convergent, there exists a \(c>0\) such that \(\mathrm {exc}(\Box ([0,0] \cup \varvec{{\varPhi }}(\varvec{x}))\mathrm {vol}(\varvec{x})) \le c\) for any \(\varvec{x}\subseteq \varvec{x}^\mathrm {init}\). This entails that,

$$\begin{aligned} \sum _{\varvec{x}\in \mathcal{B}'_{\mu (k)}(\varvec{y}^k)} \mathrm {wid}(\Box ([0,0] \cup \varvec{{\varPhi }}(\varvec{x})))\mathrm {vol}(\varvec{x}) \le c \sum _{\varvec{x}\in \mathcal{B}'_{\mu (k)}(\varvec{y}^k)} \mathrm {vol}(\varvec{x}) = c\; \mathrm {vol}(\cup \mathcal{B}'_{\mu (k)}(\varvec{y}^k)), \end{aligned}$$
(46)

where \(\cup \mathcal{B}'_{\mu (k)}(\varvec{y}^k)\) designates the union of boxes in \(\mathcal{B}'_{\mu (k)}(\varvec{y}^k)\). We study the limit of its volume.

First, denote \(\mathcal{B}' = \lim _{k\rightarrow \infty } \cup \mathcal{B}'_{\mu (k)}(\varvec{y}^k)\), defined as the set of points x for which there exists an infinite sequence of boxes \((\varvec{x}^k)_{k\in \mathbb {N}}\) with \(\varvec{x}^k \in \mathcal{B}'_{\mu (k)}(\varvec{y}^k)\) and \(x\in \varvec{x}^k\) for all k. To complete the proof, we need to show that the volume of \(\mathcal{B}'\) is zero. To do so, we show by contradiction that \(\mathcal{B}' \subseteq \mathcal {H}_0(\hat{y})\). Suppose not: then there is an \(x\in \mathcal{B}'\) with \(x\not \in \mathcal {H}_0(\hat{y})\), and by definition of \(\mathcal {H}_0(\hat{y})\) either (i) \(\exists i, g_i(x,\hat{y}) > 0\); (ii) \(g(x,\hat{y}) < 0\); or (iii) \(g(x, \hat{y}) > 0\).

Let \((\varvec{x}^k)_{k\in \mathbb {N}}\) be an infinite sequence of boxes with \(x\in \varvec{x}^k\) and \(\varvec{x}^k \in \mathcal{B}'_{\mu (k)}(\varvec{y}^k)\) for all k. By definition of boundary boxes, this imposes \(\inf \varvec{g}(\varvec{x}^k,\varvec{y}^k) \le 0\) and \(\exists i, \sup \varvec{g}_i(\varvec{x}^k,\varvec{y}^k)> 0\). Because \(\epsilon '_{\mu (k)}(\varvec{y}^k)\) converges to zero, so does the width of \(\varvec{x}^k\). The widths of the boxes \(\varvec{y}^k\) also converge to zero. Finally, since \((x,\hat{y}) \in (\varvec{x}^k,\varvec{y}^k)\), \(\hat{z} = g(x,\hat{y}) \in \varvec{g}(\varvec{x}^k,\varvec{y}^k)\). Therefore, for any value \(z \in \varvec{g}(\varvec{x}^k,\varvec{y}^k)\):

$$\begin{aligned} \vert z - \hat{z} \vert \le \mathrm {wid}(\varvec{g}(\varvec{x}^k,\varvec{y}^k)). \end{aligned}$$

The right-hand side converges to zero due to the convergence of \(\varvec{g}\). As a consequence, all values within \(\varvec{g}(\varvec{x}^k,\varvec{y}^k)\) converge to \(\hat{z} = g(x,\hat{y})\). Hence, if, as in case (i), there exists i with \(g_i(x,\hat{y}) >0\), then there exists a \(\overline{k}\) such that \(\inf \varvec{g}_i(\varvec{x}^{\overline{k}},\varvec{y}^{\overline{k}}) >0\), contradicting \(\varvec{x}^k \in \mathcal{B}'_{\mu (k)}(\varvec{y}^k), \forall k \in \mathbb {N}\). A similar contradiction holds for cases (ii) and (iii). Therefore, \(x \in \mathcal {H}_0(\hat{y})\), which proves \(\mathcal{B}' \subseteq \mathcal {H}_0(\hat{y})\) and, by hypothesis, that \(\mathrm {vol}(\mathcal{B}') \le \mathrm {vol}(\mathcal {H}_0(\hat{y})) = 0\). This completes the proof. \(\square \)

Appendix 2: Benchmark problem descriptions

All benchmark problems have \(n=2\) random variables and \(m=2\) decision variables. The random variables follow independent normal distributions. The number of constraints q modelling the safety region, the initial domains of the random and decision variables, and the distributions of the random variables are given in Table 6. Details on the objective and constraint functions are given below. For simplicity, we denote \(z_i = x_i + y_i\).

Table 6 RBO benchmark problems characteristics

RBO1 This problem is taken and adapted from [1].

$$\begin{aligned} f(y)&:= y_2 \end{aligned}$$
(47)
$$\begin{aligned} g_1(x,y)&:= -\frac{z_1^2 z_2}{20} + 1 \end{aligned}$$
(48)
$$\begin{aligned} g_2(x,y)&:= z_1^2 + 8 z_2 -75 \end{aligned}$$
(49)
$$\begin{aligned} g_3(x,y)&:= -\frac{(z_1 + z_2 -5)^2}{30} - \frac{(z_1 - z_2 - 12)^2}{120} + 1 \end{aligned}$$
(50)

RBO2 This problem is taken from [7].

$$\begin{aligned} f(y)&:= -y_2 \end{aligned}$$
(51)
$$\begin{aligned} g_1(x,y)&:= -(x_1+y_1)^2 +1000(x_2+y_2) \end{aligned}$$
(52)
$$\begin{aligned} g_2(x,y)&:= (x_1+y_1)-(x_2+y_2)-200 \end{aligned}$$
(53)
$$\begin{aligned} g_3(x,y)&:= -(x_1+y_1)+3( x_2+y_2) -400 \end{aligned}$$
(54)

RBO3 This problem is taken from [1].

$$\begin{aligned} f(y)&:=y_1^2+y_2^2 \end{aligned}$$
(55)
$$\begin{aligned} g_1(x,y)&:= -0.2 y_1 y_2 (x_2)^2 + x_1 \end{aligned}$$
(56)

RBO4 This problem is a nonlinear programming problem from the GLOBAL Library, taken from [18] and transformed into an RBO problem.

$$\begin{aligned} f(y)&:=y_1 \end{aligned}$$
(57)
$$\begin{aligned} g_1(x,y)&:= \frac{1}{4} z_1 + \frac{1}{2} z_2 - \frac{1}{16} z_1^2 - \frac{1}{16} z_2^2 - 1 \end{aligned}$$
(58)
$$\begin{aligned} g_2(x,y)&:= \frac{1}{14} z_1^2 + \frac{1}{14} z_2^2 + 1 - \frac{3}{7} z_1 - \frac{3}{7} z_2 \end{aligned}$$
(59)

RBO5 This problem, with a highly nonlinear safety region, is inspired by a constraint problem from [19]. Here, \(\epsilon = 0.75\).

$$\begin{aligned} f(y)&:=y_1+y_2 \end{aligned}$$
(60)
$$\begin{aligned} g_1(x,y) :=&\; z_1^8 - (1-\epsilon ) z_1^6 + 4 z_1^6 z_2^2 - (3+15 \epsilon ) z_1^4 z_2^2 + 6 z_1^4 z_2^4 \\&\quad - (3-15 \epsilon ) z_1^2 z_2^4 + 4 z_1^2 z_2^6 - (1+\epsilon ) z_2^6 + z_2^8 \end{aligned}$$
(61)
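For illustration, the sketch below evaluates the RBO1 constraints and estimates the reliability of a fixed decision by plain Monte Carlo, assuming (as in the definition of \(\mathcal {H}_0\) above) that the safety region is \(\{x : g(x,y)\le 0\}\). It is only a non-certified sanity check: the standard deviations and zero means are arbitrary placeholders, not the values of Table 6, and the sampling estimate carries none of the guarantees of the interval quadrature studied in the paper.

```python
# Non-certified sanity check for RBO1 (placeholder distribution parameters,
# not those of Table 6): plain Monte Carlo estimate of P[g(x, y) <= 0].
import numpy as np

def rbo1_constraints(z1, z2):
    # Constraints (48)-(50) with z_i = x_i + y_i.
    g1 = -(z1**2 * z2) / 20.0 + 1.0
    g2 = z1**2 + 8.0 * z2 - 75.0
    g3 = -((z1 + z2 - 5.0)**2) / 30.0 - ((z1 - z2 - 12.0)**2) / 120.0 + 1.0
    return g1, g2, g3

def estimate_reliability(y, sigma=(1.0, 1.0), n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    # x_i ~ N(0, sigma_i): placeholder zero-mean normals for the random variables.
    x = rng.normal(0.0, sigma, size=(n_samples, 2))
    z1, z2 = x[:, 0] + y[0], x[:, 1] + y[1]
    g1, g2, g3 = rbo1_constraints(z1, z2)
    safe = (g1 <= 0.0) & (g2 <= 0.0) & (g3 <= 0.0)
    return safe.mean()        # sample estimate of the reliability

# Example call with an arbitrary decision vector.
print(estimate_reliability(y=(3.0, 3.0)))
```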

Cite this article

Martin, B., Correia, M. & Cruz, J. A certified Branch & Bound approach for reliability-based optimization problems. J Glob Optim 69, 461–484 (2017). https://doi.org/10.1007/s10898-017-0529-6