
A joint chance-constrained data envelopment analysis model with random output data


Abstract

Data envelopment analysis (DEA) is a mathematical programming approach for evaluating the technical efficiency of a set of comparable decision-making units that transform multiple inputs into multiple outputs. Conventional DEA models are based on crisp input and output data, but real-world problems often involve random output data. The main purpose of this paper is to propose a joint chance-constrained DEA model for analyzing a real-world situation characterized by random outputs and crisp inputs. After developing the model, we proceed as follows: first, we derive a deterministic upper bound for this stochastic non-linear model by applying a piecewise linear approximation algorithm based on second-order cone programming; second, we derive a lower bound using a piecewise tangent approximation algorithm, which is also based on second-order cone programming; finally, we use a numerical example to demonstrate the applicability of the proposed joint chance-constrained DEA framework.


References

  • Ackooij WV, Henrion R, Moller A, Zorgati R (2011) On joint probabilistic constraints with Gaussian coefficient matrix. Oper Res Lett 39:99–102
  • Alizadeh F, Goldfarb D (2003) Second-order cone programming. Math Program 95(1):3–51. https://doi.org/10.1007/s10107-002-0339-5
  • Banker RD, Maindiratta A (1992) Maximum likelihood estimation of monotone and concave production frontiers. J Prod Anal 3(4):401–441
  • Banker RD, Charnes A, Cooper WW (1984) Some models for estimating technical and scale inefficiencies in data envelopment analysis. Manag Sci 30(9):1078–1092
  • Beck A (2014) Introduction to nonlinear optimization: theory, algorithms, and applications with MATLAB. MOS-SIAM Series on Optimization, SIAM. https://doi.org/10.1137/1.9781611973655
  • Bruni ME, Conforti D, Beraldi P, Tundis E (2009) Probabilistically constrained models for efficiency and dominance in DEA. Int J Prod Econ 117(1):219–228
  • Charles V, Cornillier F (2017) Value of the stochastic efficiency in data envelopment analysis. Expert Syst Appl 81(15):349–357
  • Charles V, Kumar M (2014) Satisficing data envelopment analysis: an application to SERVQUAL efficiency. Measurement 51:71–80
  • Charnes A, Cooper WW (1963) Deterministic equivalents for optimizing and satisficing under chance constraints. Oper Res 11(1):18–39
  • Charnes A, Cooper WW, Symonds GH (1958) Cost horizons and certainty equivalents: an approach to stochastic programming of heating oil. Manag Sci 4:235–263
  • Charnes A, Cooper WW, Rhodes E (1978) Measuring the efficiency of decision making units. Eur J Oper Res 2(6):429–444
  • Cheng J, Lisser A (2012) A second-order cone programming approach for linear programs with joint probabilistic constraints. Oper Res Lett 40:325–328
  • Cooper WW, Huang Z, Li S (1996) Satisficing DEA models under chance constraints. Ann Oper Res 66(4):279–295
  • Cooper WW, Huang ZM, Lelas V, Li SX, Olesen OB (1998) Chance constrained programming formulations for stochastic characterizations of efficiency and dominance in DEA. J Prod Anal 9(1):530–579
  • Cooper WW, Deng H, Huang ZM, Li SX (2004) Chance constrained programming approaches to congestion in stochastic data envelopment analysis. Eur J Oper Res 155(2):487–501
  • Henrion R, Strugarek C (2008) Convexity of chance constraints with dependent random variables. Comput Optim Appl 41:263–276
  • Huang Z, Li SX (1996) Dominance stochastic models in data envelopment analysis. Eur J Oper Res 95(2):390–403
  • Izadikhah M, Farzipoor Saen R (2018) Assessing sustainability of supply chains by chance-constrained two-stage DEA model in the presence of undesirable factors. Comput Oper Res 100:343–367
  • Kuosmanen T (2008) Representation theorem for convex nonparametric least squares. Econ J 11:308–325
  • Land KC, Lovell CAK, Thore S (1993) Chance constrained data envelopment analysis. Manag Decis Econ 14(6):541–554
  • Luedtke J, Ahmed S, Nemhauser GL (2010) An integer programming approach for linear programs with probabilistic constraints. Math Program 122:247–272
  • Miller LB, Wagner H (1965) Chance-constrained programming with joint constraints. Oper Res 13:930–945
  • Morita H, Seiford LM (1999) Characteristics on stochastic DEA efficiency. J Oper Res Soc Jpn 42(4):389–404
  • Olesen OB (2006) Comparing and combining two approaches for chance constrained DEA. J Prod Anal 26(2):103–119
  • Olesen OB, Petersen NC (1995) Chance constrained efficiency evaluation. Manag Sci 41(3):442–457
  • Olesen O, Petersen N (2016) Stochastic data envelopment analysis—a review. Eur J Oper Res 251(1):2–21
  • Sengupta JK (1982) Efficiency measurement in stochastic input–output systems. Int J Syst Sci 13:273–287
  • Shiraz RK, Tavana M, Di Caprio D (2018a) Chance-constrained data envelopment analysis modeling with random-rough data. RAIRO-Oper Res 52(1):259–285
  • Shiraz RK, Hatami-Marbini A, Emrouznejad A, Fukuyama H (2018b) Chance-constrained cost efficiency in data envelopment analysis model with random inputs and outputs. Oper Res Int J. https://doi.org/10.1007/s12351-018-0378-1
  • Simon HA (1957) Models of man. Wiley, New York
  • Sueyoshi T (2000) Stochastic DEA for restructure strategy: an application to a Japanese petroleum company. Omega 28(4):385–398
  • Talluri S, Narasimhan R, Nair A (2006) Vendor performance with supply risk: a chance-constrained DEA approach. Int J Prod Econ 100:212–222
  • Tavana M, Shiraz RK, Hatami-Marbini A (2014) A new chance-constrained DEA model with birandom input and output data. J Oper Res Soc 65:1824–1839
  • Tsolas I, Charles V (2015) Risk into bank efficiency: a satisficing DEA approach to assess the Greek banking crisis. Expert Syst Appl 42:3491–3500
  • Udhayakumar A, Charles V, Kumar M (2011) Stochastic simulation based genetic algorithm for chance constrained data envelopment analysis problems. Omega 39:387–397
  • Weber CA, Desai A (1996) Determination of paths to vendor market efficiency using parallel coordinates representation: a negotiation tool for buyers. Eur J Oper Res 90(1):142–155
  • Wu D, Lee CG (2010) Stochastic DEA with ordinal data applied to a multi-attribute pricing problem. Eur J Oper Res 207:1679–1688


Acknowledgements

This research is partially supported by the research grant GAČR 19-13946S awarded to Dr. Tavana by the Czech Science Foundation. Dr. Khanjani Shiraz received a grant from the Ministry of Science, Research and Technology of the Islamic Republic of Iran in partial support of this research.

Author information


Correspondence to Madjid Tavana.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A. Proofs of theorems

Proof of Theorem 1

With the use of the standardized normal distribution (see Charnes and Cooper 1963), the chance constraint is transformed into a deterministic form as follows:

$$ Pr\left[ {\sum\limits_{r = 1}^{s} {u_{r} \tilde{y}_{rk} } \ge f} \right] \ge \alpha \Leftrightarrow Pr\left[ { - \sum\limits_{r = 1}^{s} {u_{r} \tilde{y}_{rk} } \le - f} \right] \ge \alpha \Leftrightarrow Pr\left[ {z \le \frac{{\sum\nolimits_{r = 1}^{s} {u_{r} y_{rk} } - f}}{{\sqrt {\text{var} \left[ {\sum\nolimits_{r = 1}^{s} {u_{r} \tilde{y}_{rk} } } \right]} }}} \right] \ge \alpha , $$

where z has a standard normal distribution (zero mean and unit variance). Then, we have:

$$ Pr\left[ {\sum\limits_{r = 1}^{s} {u_{r} \tilde{y}_{rk} } \ge f} \right] \ge \alpha \Leftrightarrow \sum\limits_{r = 1}^{s} {u_{r} y_{rk} } - \varPhi^{ - 1} \left( \alpha \right)\sqrt {\sum\limits_{r = 1}^{s} {u_{r}^{2} var\left( {\tilde{y}_{rk} } \right)} } \ge f, $$

where \( \varPhi^{ - 1} \) denotes the inverse of the standard normal cumulative distribution function \( \varPhi \). Since \( \tilde{y}_{rj} ,\,r = 1, \ldots ,s; \,\,j = 1, \ldots ,n \) are independent normal random variables, the joint constraints can be expressed as:

$$ \prod\limits_{j = 1}^{n} {Pr\left\{ {\sum\limits_{r = 1}^{s} {u_{r} \tilde{y}_{rj} } - \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } \le 0} \right\}} = Pr\left[ {\sum\limits_{r = 1}^{s} {u_{r} \tilde{y}_{rj} } - \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } \le 0,\,j = 1, \ldots ,n} \right] \ge \alpha . $$

Therefore, we have

$$ \prod\limits_{j = 1}^{n} {Pr\left\{ {\sum\limits_{r = 1}^{s} {u_{r} \tilde{y}_{rj} } - \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } \le 0} \right\}} \ge \alpha^{1} = \alpha^{{\sum\nolimits_{j = 1}^{n} {\lambda_{j} } }} ,\,\lambda_{j} \ge 0,\sum\limits_{j = 1}^{n} {\lambda_{j} } = 1, $$

which yields

$$ \begin{aligned} & \prod\limits_{j = 1}^{n} {Pr\left\{ {\sum\limits_{r = 1}^{s} {u_{r} \tilde{y}_{rj} } - \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } \le 0} \right\}} \ge \alpha^{{\sum\nolimits_{j = 1}^{n} {\lambda_{j} } }} = \prod\limits_{j = 1}^{n} {\alpha^{{\lambda_{j} }} } ,\,\,\sum\limits_{j = 1}^{n} {\lambda_{j} } = 1,\,\lambda_{j} \ge 0, \\ & \Leftrightarrow Pr\left\{ {\sum\limits_{r = 1}^{s} {u_{r} \tilde{y}_{rj} } - \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } \le 0} \right\} \ge \alpha^{{\lambda_{j} }} ,\,\,j = 1, \ldots ,n,\,\,\sum\limits_{j = 1}^{n} {\lambda_{j} } = 1,\,\lambda_{j} \ge 0. \\ \end{aligned} $$

This completes the proof.\( \square \)
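As a quick illustration of the deterministic equivalent derived above, the following sketch checks it by Monte Carlo simulation; the multipliers, means, and variances are made-up values, not data from the paper.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    alpha = 0.9
    u = np.array([0.4, 0.7])          # hypothetical output multipliers u_r
    y_mean = np.array([5.0, 3.0])     # hypothetical mean outputs of DMU k
    y_var = np.array([0.6, 0.2])      # hypothetical output variances

    # Largest f allowed by the deterministic equivalent:
    # sum_r u_r y_rk - Phi^{-1}(alpha) * sqrt(sum_r u_r^2 var(y_rk)) >= f
    f_det = u @ y_mean - norm.ppf(alpha) * np.sqrt(u**2 @ y_var)

    # Monte Carlo estimate of Pr[sum_r u_r * y~_rk >= f_det]; it should be close to alpha
    samples = rng.normal(y_mean, np.sqrt(y_var), size=(200_000, 2)) @ u
    print(np.mean(samples >= f_det))  # roughly 0.9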

Proof of Theorem 2

Under the assumption of the standardized normal distribution, the chance constraints in (3) can be converted into a deterministic form as follows:

$$ Pr\left\{ {\sum\limits_{r = 1}^{s} {u_{r} \tilde{y}_{rj} } - \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } \le 0} \right\} \ge \alpha^{{\lambda_{j} }} \Leftrightarrow Pr\left[ {z \le \frac{{ - \sum\nolimits_{r = 1}^{s} {u_{r} y_{rj} } + \sum\nolimits_{i = 1}^{m} {v_{i} x_{ij} } }}{{\sqrt {\text{var} \left[ {\sum\nolimits_{r = 1}^{s} {u_{r} \tilde{y}_{rj} } } \right]} }}} \right] \ge \alpha^{{\lambda_{j} }} $$

where z is a normally distributed random variable with zero mean and unit variance. Then, we have:

$$ \sum\limits_{r = 1}^{s} {u_{r} y_{rj} } + \varPhi^{ - 1} \left( {\alpha^{{\lambda_{j} }} } \right)\sqrt {\sum\limits_{r = 1}^{s} {u_{r}^{2} var\left( {\tilde{y}_{rj} } \right)} } \le \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } ,\,\,j = 1,2, \ldots ,n. $$

The proof is complete.\( \square \)

Proof of Theorem 3

For each \( j \), replace \( \mathop {max}\nolimits_{t = 1, \ldots ,T} \left\{ {a_{t} \lambda_{j} + b_{t} } \right\} \) by auxiliary variables \( z_{tj} \) satisfying \( z_{tj} \ge a_{t} \lambda_{j} + b_{t} ,\,j = 1, \ldots ,N,\,t = 1, \ldots ,T. \) The constraint

$$ \sum\limits_{r = 1}^{s} {u_{r} y_{rj} } + \mathop {max}\limits_{t = 1, \ldots ,T} \left\{ {a_{t}^{{}} \lambda_{j} + b_{t}^{{}} } \right\}\sqrt {\sum\limits_{r = 1}^{s} {u_{r}^{2} var\left( {\tilde{y}_{rj} } \right)} } \le \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } ,\,j = 1, \ldots ,N $$

can be converted to the following constraints:

$$ \begin{aligned} & \sum\limits_{r = 1}^{s} {u_{r} y_{rj} } + \sqrt {\sum\limits_{r = 1}^{s} {\left( {z_{tj} u_{r} } \right)^{2} var\left( {\tilde{y}_{rj} } \right)} } \le \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } ,\,j = 1, \ldots ,N,\,t = 1, \ldots ,T, \\ & z_{tj} \ge a_{t} \lambda_{j} + b_{t} ,\,j = 1, \ldots ,N,\,t = 1, \ldots ,T, \\ & \sum\limits_{j = 1}^{N} {\lambda_{j} } = 1. \\ \end{aligned} $$

Define the new variables \( \bar{z}_{tr} = z_{t} u_{r} \) and \( \bar{u}_{rj} = \lambda_{j} u_{r} \). Then, we have:

$$ \begin{aligned} & z_{tj} \ge a_{t} \lambda_{j} + b_{t} \Rightarrow z_{t} u_{r} \ge a_{t} \lambda_{j} u_{r} + b_{t} u_{r} ,\,j = 1, \ldots ,N,\,r = 1, \ldots ,s \\ & \Rightarrow \bar{z}_{tr} \ge a_{t} \bar{u}_{rj} + b_{t} u_{r} ,\,r = 1, \ldots ,s,\,t = 1, \ldots ,T, \\ & \sum\limits_{j = 1}^{N} {\lambda_{j} } = 1 \Rightarrow \sum\limits_{j = 1}^{N} {\bar{u}_{rj} } = u_{r} ,\,\,r = 1, \ldots ,s. \\ \end{aligned} $$

We can now conclude that the second set of constraints in Eq. (4) can be written as:

$$ \sum\limits_{r = 1}^{s} {u_{r} y_{rj} } + \mathop {max}\limits_{t = 1, \ldots ,T} \left\{ {a_{t}^{{}} \lambda_{j} + b_{t}^{{}} } \right\}\sqrt {\sum\limits_{r = 1}^{s} {u_{r}^{2} var\left( {\tilde{y}_{rj} } \right)} } \le \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } ,\,j = 1, \ldots ,N. $$

Therefore, we have the following constraints:

$$ \begin{aligned} & \sum\limits_{r = 1}^{s} {u_{r} y_{rj} } + \sqrt {\sum\limits_{r = 1}^{s} {\bar{z}_{tr}^{2} var\left( {\tilde{y}_{rj} } \right)} } \le \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } ,\,j = 1, \ldots ,N,\,t = 1, \ldots ,T, \\ & \bar{z}_{tr} \ge a_{t} \bar{u}_{rj} + b_{t} u_{r} ,\,r = 1, \ldots ,s,\,t = 1, \ldots ,T,\,\,j = 1, \ldots ,N, \\ & \sum\limits_{j = 1}^{N} {\bar{u}_{rj} } = u_{r} ,\,\,r = 1, \ldots ,s, \\ & \bar{z}_{tr} \ge 0,\,u_{r} \ge 0,\,\bar{u}_{rj} \ge 0,\,r = 1, \ldots ,s,\,t = 1, \ldots ,T,\,j = 1, \ldots ,N. \\ \end{aligned} $$

This completes the proof.\( \square \)
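For readers who wish to experiment with constraints of this second-order cone form, a minimal sketch in Python with cvxpy follows. The data, the breakpoint coefficients a_t and b_t, and the maximization objective are made-up placeholders rather than the actual Model (7); the sketch only illustrates how the reformulated constraints can be passed to an SOCP solver.

    import cvxpy as cp
    import numpy as np

    # Illustrative dimensions and data (not taken from the paper)
    n, m, s, T = 3, 2, 2, 4                   # DMUs, inputs, outputs, breakpoints
    rng = np.random.default_rng(0)
    x = rng.uniform(0.5, 1.5, size=(m, n))    # crisp inputs
    y = rng.uniform(0.5, 1.5, size=(s, n))    # mean outputs
    var_y = 0.1 * np.ones((s, n))             # output variances
    a = np.linspace(0.5, 2.0, T)              # hypothetical piecewise slopes a_t
    b = np.linspace(0.1, 0.4, T)              # hypothetical piecewise intercepts b_t
    k = 0                                     # DMU under evaluation

    u = cp.Variable(s, nonneg=True)
    v = cp.Variable(m, nonneg=True)
    u_bar = cp.Variable((s, n), nonneg=True)  # stands for lambda_j * u_r
    z_bar = cp.Variable((T, s), nonneg=True)  # stands for z_t * u_r

    cons = [v @ x[:, k] == 1, cp.sum(u_bar, axis=1) == u]
    for j in range(n):
        for t in range(T):
            # second-order cone constraint from the proof above
            cons.append(u @ y[:, j]
                        + cp.norm(cp.multiply(np.sqrt(var_y[:, j]), z_bar[t, :]))
                        <= v @ x[:, j])
            cons.append(z_bar[t, :] >= a[t] * u_bar[:, j] + b[t] * u)

    prob = cp.Problem(cp.Maximize(u @ y[:, k]), cons)  # placeholder objective
    prob.solve()
    print(prob.status, prob.value)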

Proof of Theorem 4

To prove this theorem, consider the following three sets of constraints.

  1. (I) Since \( \alpha \ge 0.5 \), we have \( \varPhi^{ - 1} \left( \alpha \right) \ge 0 \), so the expression \( - \sum\nolimits_{r = 1}^{s} {u_{r} y_{rk} } + \varPhi^{ - 1} \left( \alpha \right)\sqrt {\sum\nolimits_{r = 1}^{s} {u_{r}^{2} var\left( {\tilde{y}_{rk} } \right)} } \) is a convex function by Lemmas 1, 2 and 3.

  2. (II) The left-hand side of \( \sum\nolimits_{r = 1}^{s} {u_{r} y_{rj} } - \sum\nolimits_{i = 1}^{m} {v_{i} x_{ij} } + \sqrt {\sum\nolimits_{r = 1}^{s} {\bar{z}_{tr}^{2} var\left( {\tilde{y}_{rj} } \right)} } \le 0,\,j = 1, \ldots ,N,\,t = 1, \ldots ,T, \) is a convex function.

  3. (III) The constraints \( \bar{z}_{tr} \ge a_{t} \bar{u}_{rj} + b_{t} u_{r} \) and \( \sum\nolimits_{j = 1}^{N} {\bar{u}_{rj} } = u_{r} ,\,r = 1, \ldots ,s,\,t = 1, \ldots ,T,\,\,j = 1, \ldots ,N, \) are linear and therefore convex.

Therefore, Model (7) is a convex optimization problem and has a global optimal solution.\( \square \)

Proof of Theorem 5

The proof is given in Beck (2014).\( \square \)

Proof of Theorem 6

The proof is similar to that of Theorem 3 and is therefore omitted.\( \square \)

Proof of Theorem 7

Let \( X_{P} \) and \( X_{T} \) be the feasible regions of the constraints of Model (4) under the piecewise linear approximation and the piecewise tangent approximation of \( \varPhi^{ - 1} \left( {\alpha^{{\lambda_{j} }} } \right) \), respectively:

$$ \begin{aligned} & X_{P} = \left\{ {\left( {u,v,\lambda } \right):\sum\limits_{i = 1}^{m} {v_{i} x_{ik} } = 1,\sum\limits_{r = 1}^{s} {u_{r} y_{rj} } + \varPhi^{ - 1} \left( {\alpha^{{\lambda_{j} }} } \right)\sqrt {\sum\limits_{r = 1}^{s} {u_{r}^{2} var\left( {\tilde{y}_{rj} } \right)} } \le \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } ,\,\,\sum\limits_{j = 1}^{n} {\lambda_{j} } = 1,v_{i} \ge 0,u_{r} \ge 0,\lambda_{j} \ge 0} \right\} \\ & \varOmega_{j} = \mathop {max}\limits_{t = 1, \ldots ,T} \left\{ {\varPhi^{ - 1} \left( {\alpha^{{\lambda_{t} }} } \right) + \frac{1}{{\varPhi^{\prime}\left( {\varPhi^{ - 1} \left( {\alpha^{{\lambda_{t} }} } \right)} \right)}}\alpha^{{\lambda_{t} }} ln\left( \alpha \right)\left( {\lambda_{j} - \lambda_{t} } \right)} \right\} \\ & X_{T} = \left\{ {\left( {u,v,\lambda } \right):\sum\limits_{i = 1}^{m} {v_{i} x_{ik} } = 1,\sum\limits_{r = 1}^{s} {u_{r} y_{rj} } + \varOmega_{j} \sqrt {\sum\limits_{r = 1}^{s} {u_{r}^{2} var\left( {\tilde{y}_{rj} } \right)} } \le \sum\limits_{i = 1}^{m} {v_{i} x_{ij} } ,\,\,\sum\limits_{j = 1}^{n} {\lambda_{j} } = 1,v_{i} \ge 0,u_{r} \ge 0,\lambda_{j} \ge 0} \right\} \\ \end{aligned} $$

We know

$$ \varPhi^{ - 1} \left( {\alpha^{{\lambda_{j} }} } \right) \ge \mathop {max}\limits_{t = 1, \ldots ,T} \left\{ {\varPhi^{ - 1} \left( {\alpha^{{\lambda_{t} }} } \right) + \frac{1}{{\varPhi^{\prime}\left( {\varPhi^{ - 1} \left( {\alpha^{{\lambda_{t} }} } \right)} \right)}}\alpha^{{\lambda_{t} }} ln\left( \alpha \right)\left( {\lambda_{j} - \lambda_{t} } \right)} \right\} = \varOmega_{j} $$

Then \( X_{T} \subseteq X_{P} \), and consequently \( \theta_{T}^{*} \le \theta_{P}^{*} \).\( \square \)
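The inequality \( \varPhi^{ - 1} \left( {\alpha^{{\lambda_{j} }} } \right) \ge \varOmega_{j} \) simply says that a convex function lies above its tangent lines. A short numerical illustration, with tangent points \( \lambda_{t} \) chosen arbitrarily for this sketch only, is:

    import numpy as np
    from scipy.stats import norm

    alpha = 0.9
    lam = np.linspace(0.05, 1.0, 191)            # values of lambda_j
    lam_t = np.array([0.1, 0.3, 0.5, 0.7, 1.0])  # arbitrary tangent points lambda_t

    f = lambda l: norm.ppf(alpha ** l)           # lambda -> Phi^{-1}(alpha^lambda)

    def tangent(l, t):
        # Tangent of f at t: slope = alpha^t * ln(alpha) / Phi'(Phi^{-1}(alpha^t))
        z_t = norm.ppf(alpha ** t)
        slope = (alpha ** t) * np.log(alpha) / norm.pdf(z_t)
        return z_t + slope * (l - t)

    omega = np.max([tangent(lam, t) for t in lam_t], axis=0)
    print(np.all(omega <= f(lam) + 1e-9))        # True: the tangent envelope underestimates f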

Appendix B. Proofs of lemmas

Proof of Lemma 1

See Beck (2014).\( \square \)

Proof of Lemma 2

Let \( \eta \in \left( {0, 1} \right) \). Then \( \varPsi \left( {\eta X_{1} + \left( {1 - \eta } \right)X_{2} } \right) = \sqrt {\eta^{2} X_{1}^{t} VX_{1} + \left( {1 - \eta } \right)^{2} X_{2}^{t} VX_{2} + 2\eta \left( {1 - \eta } \right)X_{1}^{t} VX_{2} } \). Since \( V \) is positive semidefinite, the Cauchy–Schwarz inequality for the inner product induced by \( V \) gives \( X_{1}^{t} VX_{2} \le \sqrt {X_{1}^{t} VX_{1} } \sqrt {X_{2}^{t} VX_{2} } \). Clearly,

$$ \begin{aligned} \varPsi \left( {\eta X_{1} + \left( {1 - \eta } \right)X_{2} } \right) & = \sqrt {\eta^{2} X_{1}^{t} VX_{1} + \left( {1 - \eta } \right)^{2} X_{2}^{t} VX_{2} + 2\eta \left( {1 - \eta } \right)X_{1}^{t} VX_{2} } \\ & \le \sqrt {\left( {\eta \sqrt {X_{1}^{t} VX_{1} } + \left( {1 - \eta } \right)\sqrt {X_{2}^{t} VX_{2} } } \right)^{2} } = \eta \varPsi \left( {X_{1} } \right) + \left( {1 - \eta } \right)\varPsi \left( {X_{2} } \right) \\ \end{aligned} $$

Thus, \( \varPsi \left( X \right) \) is a convex function.\( \square \)
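A brief numerical illustration of this convexity inequality, with an arbitrary positive semidefinite matrix \( V \) generated only for the sketch, is:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(3, 3))
    V = A @ A.T                                  # arbitrary positive semidefinite matrix
    psi = lambda x: np.sqrt(x @ V @ x)           # Psi(X) = sqrt(X^t V X)
    x1, x2, eta = rng.normal(size=3), rng.normal(size=3), 0.3
    print(psi(eta * x1 + (1 - eta) * x2) <= eta * psi(x1) + (1 - eta) * psi(x2))  # True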

Proof of Lemma 3

Since \( 0 < \alpha < 1 \), the function \( \alpha^{{\lambda_{j} }} \) is convex in \( \lambda_{j} \), and \( \varPhi^{ - 1} \left( . \right) \) is increasing and convex on \( \left[ {0.5,1} \right) \); note that \( \alpha \ge 0.5 \) and \( \lambda_{j} \in \left[ {0,1} \right] \) imply \( \alpha^{{\lambda_{j} }} \ge \alpha \ge 0.5 \). It follows that \( \alpha^{{t\lambda_{1} + \left( {1 - t} \right)\lambda_{2} }} \le t\alpha^{{\lambda_{1} }} + \left( {1 - t} \right)\alpha^{{\lambda_{2} }} \). Since \( \varPhi^{ - 1} \left( . \right) \) is increasing and convex, we have \( \varPhi^{ - 1} \left( {\alpha^{{t\lambda_{1} + \left( {1 - t} \right)\lambda_{2} }} } \right) \le \varPhi^{ - 1} \left( {t\alpha^{{\lambda_{1} }} + \left( {1 - t} \right)\alpha^{{\lambda_{2} }} } \right) \le t\varPhi^{ - 1} \left( {\alpha^{{\lambda_{1} }} } \right) + \left( {1 - t} \right)\varPhi^{ - 1} \left( {\alpha^{{\lambda_{2} }} } \right) \). Therefore, \( \varPhi^{ - 1} \left( {\alpha^{{\lambda_{j} }} } \right) \) is convex.\( \square \)
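The chain of inequalities above can be checked numerically for particular values of \( \alpha \), \( t \), \( \lambda_{1} \), and \( \lambda_{2} \) (arbitrary values, used only for illustration):

    import numpy as np
    from scipy.stats import norm

    alpha, t, lam1, lam2 = 0.95, 0.4, 0.2, 0.9
    f = lambda l: norm.ppf(alpha ** l)            # lambda -> Phi^{-1}(alpha^lambda)
    lhs = f(t * lam1 + (1 - t) * lam2)
    mid = norm.ppf(t * alpha ** lam1 + (1 - t) * alpha ** lam2)
    rhs = t * f(lam1) + (1 - t) * f(lam2)
    print(lhs <= mid <= rhs)                      # True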


About this article


Cite this article

Khanjani Shiraz, R., Tavana, M. & Fukuyama, H. A joint chance-constrained data envelopment analysis model with random output data. Oper Res Int J 21, 1255–1277 (2021). https://doi.org/10.1007/s12351-019-00478-0

