
Fast and accurate computation of the distribution of sums of dependent log-normals

  • Original Research
  • Published in Annals of Operations Research

Abstract

We present a new Monte Carlo methodology for the accurate estimation of the distribution of the sum of dependent log-normal random variables. The methodology delivers statistically unbiased estimators for three distributional quantities of significant interest in finance and risk management: the left tail, or cumulative distribution function; the probability density function; and the right tail, or complementary distribution function of the sum of dependent log-normal factors. For the right tail our methodology delivers a fast and accurate estimator in settings for which existing methodology delivers estimators with large variance that tend to underestimate the true quantity of interest. We provide insight into the computational challenges using theory and numerical experiments, and explain their much wider implications for Monte Carlo statistical estimators of rare-event probabilities. In particular, we find that theoretically strongly efficient estimators should be used with great caution in practice, because they may yield inaccurate results in the prelimit. Further, this inaccuracy may not be detectable from the output of the Monte Carlo simulation, because the simulation output may severely underestimate the true variance of the estimator.
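
Throughout, the quantity of interest is the sum of dependent log-normals \(S=\sum _{i=1}^d\exp (Y_i)\), where \((Y_1,\ldots ,Y_d)\) is multivariate normal. For orientation only, the short Python sketch below estimates the left and right tails of \(S\) by crude Monte Carlo; it is a naive baseline rather than the methodology of the paper, and the parameters \(\varvec{\nu }\), \(\varSigma \) and the thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: d = 3 dependent log-normal factors.
nu = np.array([0.0, 0.1, -0.2])               # mean vector of Y
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])           # covariance of Y
L = np.linalg.cholesky(Sigma)                 # Y = nu + L Z with Z ~ N(0, I)

def sample_S(n):
    """Draw n replicates of S = sum_i exp(Y_i) with Y ~ N(nu, Sigma)."""
    Z = rng.standard_normal((n, len(nu)))
    Y = nu + Z @ L.T
    return np.exp(Y).sum(axis=1)

n = 10**6
S = sample_S(n)
gamma_left, gamma_right = 0.5, 50.0           # illustrative thresholds
print(f"left tail  P(S <= {gamma_left}): {np.mean(S <= gamma_left):.2e}")
print(f"right tail P(S > {gamma_right}): {np.mean(S > gamma_right):.2e}")
```

Deep in either tail the indicators above are almost never one, so crude Monte Carlo becomes unusable; that regime is precisely what the estimators developed in the paper address.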


Notes

  1. We denote the zero-mean multivariate normal pdf with covariance \(\varSigma \) by \(\phi _\varSigma (\cdot )\) (\(\phi (\cdot )\equiv \phi _{\mathbf {I}}(\cdot )\)), and the univariate standard normal cdf and complementary cdf by \(\varPhi (\cdot )\) and \(\overline{\varPhi }(\cdot )\), respectively.

  2. The notation \(f(x)\simeq g(x)\) as \(x\rightarrow a\) stands for \(\lim _{x\rightarrow a}f(x)/g(x)=1\). Similarly, we define \(f(x)=\mathcal {O}(g(x))\Leftrightarrow \lim _{x\rightarrow a}|f(x)/g(x)|<\mathrm {const.}<\infty \); \(f(x)=o(g(x))\Leftrightarrow \lim _{x\rightarrow a}f(x)/g(x)=0\); also, \(f(x)=\varTheta (g(x))\Leftrightarrow f(x)=\mathcal {O}(g(x))\text { and } g(x)=\mathcal {O}(f(x))\).

  3. We remark that the GT estimator applies to the more general setting of sums and differences of log-normals. This generality of the GT estimator, however, comes at the cost of not being the most efficient estimator for sums—the case we consider here.

References

  • Ambartzumian, R., Der Kiureghian, A., Ohaniana, V., & Sukiasiana, H. (1998). Multinormal probability by sequential conditioned importance sampling: Theory and application. Probabilistic Engineering Mechanics, 13(4), 299–308.

  • Asmussen, S. (2018). Conditional Monte Carlo for sums, with applications to insurance and finance. Annals of Actuarial Science, 2018, 1–24.

  • Asmussen, S., & Binswanger, K. (1997). Simulation of ruin probabilities for subexponential claims. Astin Bulletin, 27(02), 297–318.

  • Asmussen, S., Blanchet, J., Juneja, S., & Rojas-Nandayapa, L. (2011). Efficient simulation of tail probabilities of sums of correlated lognormals. Annals of Operations Research, 189(1), 5–23.

  • Asmussen, S., Goffard, P. O., & Laub, P. J. (2016). Orthonormal polynomial expansions and lognormal sum densities. arXiv preprint arXiv:1601.01763.

  • Asmussen, S., Jensen, J. L., & Rojas-Nandayapa, L. (2014). On the Laplace transform of the lognormal distribution. Methodology and Computing in Applied Probability, 18, 441–458.

  • Asmussen, S., Jensen, J. L., & Rojas-Nandayapa, L. (2016). Exponential family techniques for the lognormal left tail. Scandinavian Journal of Statistics, 43, 774–787.

  • Asmussen, S., & Kroese, D. P. (2006). Improved algorithms for rare event simulation with heavy tails. Advances in Applied Probability, 38(02), 545–558.

  • Asmussen, S., & Rojas-Nandayapa, L. (2008). Asymptotics of sums of lognormal random variables with Gaussian copula. Statistics & Probability Letters, 78(16), 2709–2714.

  • Bacry, E., Kozhemyak, A., & Muzy, J. F. (2013). Log-normal continuous cascade model of asset returns: Aggregation properties and estimation. Quantitative Finance, 13(5), 795–818.

  • Blanchet, J., Juneja, S., & Rojas-Nandayapa, L. (2008). Efficient tail estimation for sums of correlated lognormals. In Proceedings of the 2008 winter simulation conference, IEEE, pp. 607–614.

  • Botev, Z. I. (2016). The normal law under linear restrictions: Simulation and estimation via minimax tilting. Journal of the Royal Statistical Society: Series B (Statistical Methodology). https://doi.org/10.1111/rssb.12162.

  • Botev, Z., & L’Ecuyer, P. (2017). Accurate computation of the right tail of the sum of dependent log-normal variates. In Proceedings of the 2017 Winter Simulation Conference (WSC), IEEE.

  • Doerr, C., Blenn, N., & Van Mieghem, P. (2013). Lognormal infection times of online information spread. PLoS ONE, 8(5), e64349.

  • Dufresne, D. (2004). The log-normal approximation in financial and other computations. Advances in Applied Probability, 36(3), 747–773.

  • Embrechts, P., Puccetti, G., Rüschendorf, L., Wang, R., & Beleraj, A. (2014). An academic response to Basel 3.5. Risks, 2(1), 25–48.

  • Fenton, L. (1960). The sum of log-normal probability distributions in scatter transmission systems. IRE Transactions on Communications Systems, 8(1), 57–67.

  • Gubner, J. A. (2006). A new formula for lognormal characteristic functions. IEEE Transactions on Vehicular Technology, 55(5), 1668–1671.

  • Gulisashvili, A., & Tankov, P. (2016). Tail behavior of sums and differences of log-normal random variables. Bernoulli, 22(1), 444–493.

  • Hashorva, E., & Hüsler, J. (2003). On multivariate Gaussian tails. Annals of the Institute of Statistical Mathematics, 55(3), 507–522.

  • Hcine, M. B., & Bouallegue, R. (2015). Highly accurate log skew normal approximation to the sum of correlated lognormals. arXiv preprint arXiv:1501.02347.

  • Kortschak, D., & Hashorva, E. (2013). Efficient simulation of tail probabilities for sums of log-elliptical risks. Journal of Computational and Applied Mathematics, 247, 53–67.

  • Kortschak, D., & Hashorva, E. (2014). Second order asymptotics of aggregated log-elliptical risk. Methodology and Computing in Applied Probability, 16(4), 969–985.

  • Kroese, D. P., Taimre, T., & Botev, Z. I. (2011). Handbook of Monte Carlo methods (Vol. 706). New York: Wiley.

  • Laub, P. J., Asmussen, S., Jensen, J. L., & Rojas-Nandayapa, L. (2016). Approximating the Laplace transform of the sum of dependent lognormals. Advances in Applied Probability, 48(A), 203–215.

  • Limpert, E., Stahel, W. A., & Abbt, M. (2001). Log-normal distributions across the sciences: Keys and clues. BioScience, 51(5), 341–352.

  • McNeil, A., Frey, R., & Embrechts, P. (2015). Quantitative risk management: Concepts, techniques and tools. Princeton: Princeton University Press.

  • Milevsky, M. A., & Posner, S. E. (1998). Asian options, the sum of lognormals, and the reciprocal gamma distribution. Journal of Financial and Quantitative Analysis, 33(03), 409–422.

  • Nguyen, Q. H., & Robert, C. Y. (2014). New efficient estimators in rare event simulation with heavy tails. Journal of Computational and Applied Mathematics, 261, 39–47.

  • Rached, N. B., Kammoun, A., Alouini, M.-S., & Tempone, R. (2016). Unified importance sampling schemes for efficient simulation of outage capacity over generalized fading channels. IEEE Journal of Selected Topics in Signal Processing, 10(2), 376–388.

  • Zuanetti, D., Diniz, C., & Leite, J. (2006). A lognormal model for insurance claims data. REVSTAT-Statistical Journal, 4(2), 131–142.


Acknowledgements

We would like to thank Patrick Laub for his valuable feedback and comments on earlier drafts of this work and for sharing his computer code for pdf estimation. Zdravko Botev has been supported by the Australian Research Council Grant DE140100993. Robert Salomone has been supported by the Australian Research Council Centre of Excellence for Mathematical & Statistical Frontiers (ACEMS), under Grant No. CE140100049.

Author information

Corresponding author

Correspondence to Zdravko I. Botev.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Proof of Theorem 1

Proof

To proceed with the proof we recall the following three facts. First, note that \(\ell (\gamma )=\mathbb {P}(\varvec{1}^\top \exp (\varvec{Y})\le \gamma )\), where \(\varvec{Y}=\varvec{\nu }+\mathbf {L}\varvec{Z}\). Using Jensen’s inequality, \(\varvec{w}^\top \exp (\varvec{Y}-\ln \varvec{w})\ge \exp (\varvec{w}^\top (\varvec{Y}-\ln \varvec{w}))\), so that for any \(\varvec{w}\in \mathcal {W}\):

$$\begin{aligned} \begin{aligned} \ell (\gamma )&=\textstyle \mathbb {P}( \varvec{w}^\top \exp (\varvec{Y}-\ln \varvec{w}) \le \gamma )\le \textstyle \mathbb {P}( \varvec{w}^\top \ln (\varvec{w})-\varvec{w}^\top \varvec{Y} \ge -\ln \gamma )\\&\le \textstyle \overline{\varPhi }\left( \frac{\varvec{w}^\top \varvec{\nu }-\ln \gamma -\varvec{w}^\top \ln \varvec{w}}{\sqrt{\varvec{w}^\top \varSigma \varvec{w}}}\right) \le \exp \left( -\frac{(\varvec{w}^\top \varvec{\nu }-\ln \gamma -\varvec{w}^\top \ln \varvec{w})^2}{2\varvec{w}^\top \varSigma \varvec{w}}\right) \end{aligned} \end{aligned}$$
(16)

Second, let \(\bar{\varvec{w}}={{\,\mathrm{argmin}\,}}_{\varvec{w}\in \mathcal {W}} \varvec{w}^\top \varSigma \varvec{w}\) and define the set \(\mathcal {C}_\gamma \equiv \{\varvec{z}: \varvec{1}^\top \exp (\mathbf {L} \varvec{z}+\varvec{\nu })\le \gamma \}\). Then we have the following asymptotic formula, proved in Gulisashvili and Tankov (2016, Formulas (13) and (63)):

$$\begin{aligned} \ln \ell ^2(\gamma )\simeq c_1-\frac{(\ln (\gamma )-\bar{\varvec{w}}^\top \varvec{\nu }+\bar{\varvec{w}}^\top \ln \bar{\varvec{w}})^2}{\bar{\varvec{w}}^\top \varSigma \bar{\varvec{w}}}-(1+d)\ln (-\ln \gamma ),\qquad \gamma \downarrow 0, \end{aligned}$$
(17)

where \(c_1\) is a constant, independent of \(\gamma \). Thirdly, consider the nonlinear optimization (a numerical sketch of which follows this proof)

$$\begin{aligned} \bar{\varvec{\mu }}={\mathop {{{\,\mathrm{argmin}\,}}}\limits _{\varvec{\mu }}}\left\{ \Vert \varvec{\mu }\Vert ^2-\frac{(\ln (\gamma )-\bar{\varvec{w}}^\top (\varvec{\nu }-\mathbf {L}\varvec{\mu })+\bar{\varvec{w}}^\top \ln \bar{\varvec{w}})^2}{2\bar{\varvec{w}}^\top \varSigma \bar{\varvec{w}}}\right\} \end{aligned}$$
(18)

with explicit solution

$$\begin{aligned} \bar{\varvec{\mu }}=\frac{\ln \gamma -\bar{\varvec{w}}^\top \varvec{\nu }+\bar{\varvec{w}}^\top \ln \bar{\varvec{w}}}{\bar{\varvec{w}}^\top \varSigma \bar{\varvec{w}}}\;\mathbf {L}^\top \bar{\varvec{w}} \end{aligned}$$
(19)

Then, we obtain the following bound on the second moment:

$$\begin{aligned}&\mathbb {E}_{\varvec{\mu }^*}\hat{\ell }^2(\gamma )\\&\quad =\mathbb {E}_{\varvec{\mu }^*} \exp (2\psi (\varvec{Z};\varvec{\mu }^*))\\&\quad =\mathbb {E} \exp (\psi (\varvec{Z};\varvec{\mu }^*))\mathbb {I}\{\varvec{Z}\in \mathcal {C}_\gamma \}\\&\quad =\textstyle \mathbb {E} \exp (\Vert \varvec{\mu }^*\Vert ^2_2) \mathbb {I}\{(\varvec{Z}-\varvec{\mu }^*)\in \mathcal {C}_\gamma \} \prod _j\overline{\varPhi }(\mu _j^*-\alpha _j(\varvec{Z}-\varvec{\mu }^*))\\&\quad \le \textstyle \exp (\Vert \varvec{\mu }^*\Vert ^2_2)\mathbb {P}((\varvec{Z}-\varvec{\mu }^*)\in \mathcal {C}_\gamma )\\&{\text {using }}(16) \\&\quad \le \textstyle \exp (\Vert \varvec{\mu }^*\Vert ^2_2) \overline{\varPhi }\left( \frac{(\varvec{\nu }-\mathbf {L}\varvec{\mu }^*)^\top \varvec{w}^*-\ln \gamma -(\varvec{w}^*)^\top \ln \varvec{w}^*}{\sqrt{(\varvec{w}^*)^\top \varSigma \varvec{w}^*}}\right) \\&{\text {via }}(16)+(18) \\&\quad \le \textstyle \exp \Big (\Vert \bar{\varvec{\mu }}\Vert ^2_2 -\frac{(\bar{\varvec{w}}^\top (\varvec{\nu }-\mathbf {L}\bar{\varvec{\mu }})-\ln \gamma -\bar{\varvec{w}}^\top \ln \bar{\varvec{w}})^2}{2\bar{\varvec{w}}^\top \varSigma \bar{\varvec{w}}} \Big ) \end{aligned}$$

By substituting (19) in the last line, we obtain the upper bound

$$\begin{aligned} \mathbb {E}_{\varvec{\mu }^*}\hat{\ell }^2\le \textstyle \exp \left( -\frac{(\ln \gamma -\bar{\varvec{w}}^\top \varvec{\nu }+\bar{\varvec{w}}^\top \ln \bar{\varvec{w}})^2}{\bar{\varvec{w}}^\top \varSigma \bar{\varvec{w}}}\right) \end{aligned}$$

In other words, from (17) we deduce that

$$\begin{aligned} \frac{\mathbb {E}_{\varvec{\mu }^*}\hat{\ell }^2(\gamma )}{\ell ^2(\gamma )}= \mathcal {O}( (-\ln \gamma )^{(d+1)}) ,\qquad \gamma \downarrow 0 \end{aligned}$$

and therefore

$$\begin{aligned} \liminf _{\gamma \downarrow 0} \frac{\ln \mathbb {E}_{\varvec{\mu }^*}\hat{\ell }^2(\gamma )}{\ln \ell (\gamma )}=2, \end{aligned}$$

which implies that the algorithm is logarithmically efficient with respect to \(\gamma \). \(\square \)
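
The quantities appearing in (18)–(19) are easy to compute numerically. The Python sketch below evaluates \(\bar{\varvec{w}}\) and then \(\bar{\varvec{\mu }}\) via (19) for a small example; it assumes (consistent with the Jensen step above) that \(\mathcal {W}\) consists of nonnegative weight vectors summing to one, and the parameters \(\varvec{\nu }\), \(\varSigma \), \(\gamma \) are hypothetical. It illustrates formula (19) only and is not the paper's full left-tail estimator.

```python
import numpy as np

# Hypothetical parameters, for illustration only.
nu = np.array([0.0, 0.1, -0.2])
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
L = np.linalg.cholesky(Sigma)          # Sigma = L L^T
gamma = 1e-3                           # a small left-tail threshold

# Step 1: w_bar = argmin_{w in W} w^T Sigma w.  With only the equality
# constraint 1^T w = 1 the minimizer is Sigma^{-1} 1 / (1^T Sigma^{-1} 1);
# if any component came out negative, a simplex-constrained QP solver
# would be needed instead.
ones = np.ones(len(nu))
w_bar = np.linalg.solve(Sigma, ones)
w_bar /= ones @ w_bar
assert np.all(w_bar > 0), "fall back to a simplex-constrained QP solver"

# Step 2: the mean shift (19),
# mu_bar = (ln(gamma) - w^T nu + w^T ln w) / (w^T Sigma w) * L^T w.
s = w_bar @ Sigma @ w_bar
a = np.log(gamma) - w_bar @ nu + w_bar @ np.log(w_bar)
mu_bar = (a / s) * (L.T @ w_bar)

print("w_bar :", w_bar)
print("mu_bar:", mu_bar)
```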

1.2 Proof of Lemma 1

Proof

Let \(N{\mathop {=}\limits ^{\mathrm {def}}}\sum _{i=1}^d\mathbb {I}{\{X_i>\gamma \}},\) so that \(\ell _1(\gamma )=\mathbb {P}(N\ge 1)\simeq \ell _\mathrm {as}\) and the residual

$$\begin{aligned} \textstyle r(\gamma ){\mathop {=}\limits ^{\mathrm {def}}}\ell _\mathrm {as}-\ell _1(\gamma )=\sum _{i<j}\mathbb {P}(X_i>\gamma ,X_j>\gamma )+o\left( \sum _{i<j}\mathbb {P}(X_i>\gamma ,X_j>\gamma )\right) . \end{aligned}$$

Note that \(\mathbb {P}(N>1)=\varTheta (r(\gamma ))\) and \(\mathbb {P}_g(N=1)=\mathbb {P}(N=1)/\ell _\mathrm {as}(\gamma )=\varTheta ( 1)\), where g is the mixture density defined in (8). We thus obtain

$$\begin{aligned} \begin{aligned} \mathbb {E}_g\left| \hat{\ell }_1-\ell _1(\gamma )\right| ^m&= \sum _{j=1}^d\mathbb {E}_g \left[ \left| \hat{\ell }_1-\ell _1(\gamma )\right| ^m\,\mathbb {I}{\{N=j\}}\right] \\&=|\ell _\mathrm {as}(\gamma )-\ell _1(\gamma )|^m\mathbb {P}_g(N=1)+\sum _{j=2}^d\left| \frac{\ell _\mathrm {as}(\gamma )}{j}-\ell _1(\gamma )\right| ^m\mathbb {P}_g(N=j)\\&=r^m(\gamma )\mathbb {P}_g(N=1)+\varTheta (\ell _\mathrm {as}^m) \mathbb {P}_g(N>1)\\&=r^m(\gamma )\mathbb {P}_g(N=1)+\varTheta (\ell _\mathrm {as}^{m-1}) \mathbb {P}(N>1)\\&=\varTheta \left( r^m(\gamma )\right) +\varTheta \left( \ell _\mathrm {as}^{m-1} r(\gamma )\right) . \end{aligned} \end{aligned}$$

Therefore, since \(r(\gamma )=o(\ell _\mathrm {as}(\gamma ))\), we have:

$$\begin{aligned} \begin{aligned} n\mathbb {V}\mathrm {ar}(S_n^2)&=\mathbb {E}_g(\hat{\ell }_1-\ell _1(\gamma ))^4+\left( \frac{2}{n-1}-1\right) \mathbb {V}\mathrm {ar}^2(\hat{\ell }_1)\\&=\varTheta (r^4)+\varTheta (\ell _\mathrm {as}^{3}(\gamma ) r(\gamma ))+\varTheta (\mathbb {V}\mathrm {ar}^2(\hat{\ell }_1))\\&=\varTheta (\ell _\mathrm {as}^{3}(\gamma ) r(\gamma ))+\varTheta (\mathbb {V}\mathrm {ar}^2(\hat{\ell }_1)), \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \mathbb {V}\mathrm {ar}^2(\hat{\ell }_1)=\varTheta (r^4)+\varTheta (\ell _\mathrm {as}(\gamma ) r^3(\gamma ))+\varTheta (\ell _\mathrm {as}^{2}(\gamma )r^2(\gamma ))=\varTheta (\ell _\mathrm {as}^{2}(\gamma )r^2(\gamma )) \end{aligned}$$

Therefore, the relative error is \(\mathbb {V}\mathrm {ar}(S_n^2)/\mathbb {V}\mathrm {ar}^2(\hat{\ell }_1)=\varTheta (\ell _\mathrm {as}(\gamma )/r(\gamma ))\). By Lemma 4 there exists an \(\alpha >1\) such that

$$\begin{aligned} \textstyle \frac{r(\gamma )}{\ell _\mathrm {as}(\gamma )}= \frac{r(\gamma )}{\ell _\mathrm {as}(\gamma ^\alpha )}\times \frac{\ell _\mathrm {as}(\gamma ^\alpha )}{\ell _\mathrm {as}(\gamma )}=o(1)\times \mathcal {O}\left( \exp \left( -\frac{(\alpha ^2-1)\ln ^2(\gamma )}{2\sigma ^2}\right) \right) , \end{aligned}$$

which shows that \(\frac{\ell _\mathrm {as}(\gamma )}{r(\gamma )}\) grows at least at the exponential rate \(\exp (\frac{(\alpha ^2-1)\ln ^2(\gamma )}{2\sigma ^2})\). \(\square \)

The argument above also fills in the details omitted in Botev and L’Ecuyer (2017, Proposition 1).

1.3 Proof of Lemma 2

Proof

First we show point 1. To this end, recall that \(\varvec{X}=\exp (\varvec{Y})\), where \(\varvec{Y}\sim \mathsf {N}(\varvec{\nu },\varSigma )\). Further, recall the well-known property (strengthened in Lemma 4) that for \(i\not =j\) and \(\mathrm {Corr}(Y_i,Y_j)<1\), the pair \(Y_i,Y_j\) is asymptotically independent in the sense that

$$\begin{aligned} \mathbb {P}(Y_i>\gamma |Y_j>\gamma )=o(1),\qquad \gamma \uparrow \infty . \end{aligned}$$

In fact, Lemma 4 shows that this decay to zero is exponential. The consequences of this are \( \mathbb {P}(\max _i Y_i>\gamma )\simeq \sum _i \mathbb {P}(Y_i>\gamma ) \) and

$$\begin{aligned} \mathbb {P}(Y_k>\gamma ,\max _{i\not =k} Y_i>\gamma )=o(\mathbb {P}(Y_k>\gamma )) \end{aligned}$$

With these properties, we then have the lower bound:

$$\begin{aligned} \begin{aligned} \mathbb {P}(S>\gamma ,X_k=M)&\ge \mathbb {P}(X_k=M>\gamma )\\&\ge \mathbb {P}(X_k>\gamma ,\max _{j\not =k}X_j<\gamma )\\&=\mathbb {P}(Y_k>\ln \gamma ,\max _{j\not =k}Y_j<\ln \gamma ) \\&=\mathbb {P}(Y_k>\ln \gamma )+o(\mathbb {P}(Y_k>\ln \gamma ))\\&=\mathbb {P}(X_k>\gamma )+o(\mathbb {P}(X_k>\gamma )) \end{aligned} \end{aligned}$$

Next, using the result \(\mathbb {P}(S>\gamma ,X_k=M<\gamma )=o(\mathbb {P}(X_k>\gamma ))\) from Lemma 3, we also have the analogous upper bound:

$$\begin{aligned} \begin{aligned} \mathbb {P}(S>\gamma ,X_k=M)&= \mathbb {P}(X_k=M>\gamma )+\mathbb {P}(S>\gamma ,X_k=M<\gamma )\\&\le \mathbb {P}(X_k>\gamma )+\mathbb {P}(S>\gamma ,X_k=M<\gamma )\\&= \mathbb {P}(X_k>\gamma )+o(\mathbb {P}(X_k>\gamma )), \end{aligned} \end{aligned}$$

whence we conclude that \(\mathbb {P}(S>\gamma ,X_k=M)\simeq \mathbb {P}(X_k>\gamma )\).

Next, we show point 2. Using the facts that: (1) the fewer the constraints that are active at a solution, the closer its minimum is to zero (without constraints the minimum of (12) is zero); and (2) any solution satisfies the Karush–Kuhn–Tucker (KKT) necessary conditions:

$$\begin{aligned} \begin{aligned} \varSigma ^{-1}\varvec{\mu }-\lambda _1\nabla g_1(\varvec{\mu })-\lambda _2 \nabla g_2(\varvec{\mu })&=\varvec{0}\\ \varvec{\lambda }\ge \varvec{0},\quad \varvec{g}(\varvec{\mu })\ge \varvec{0},\quad \varvec{\lambda }^\top \varvec{g}(\varvec{\mu })&=0, \end{aligned} \end{aligned}$$

we can verify by direct substitution that \(\varvec{\mu }^*\) satisfies the KKT conditions asymptotically as \(\gamma \uparrow \infty \) and that it causes only one constraint to be active (\(g_1(\varvec{\mu }^*)=o(1)\)). Moreover, it yields the asymptotic minimum:

$$\begin{aligned} \frac{1}{2}(\varvec{\mu }^*)^\top \varSigma ^{-1}\varvec{\mu }^*= \frac{(\ln (\gamma )-\nu _k)^2}{2\sigma _k^4}\varvec{e}_k^\top \varSigma \varSigma ^{-1}\varSigma \varvec{e}_k=\frac{(\ln (\gamma )-\nu _k)^2}{2\sigma _k^2} \end{aligned}$$

Finally, we show point 3, which is the linchpin of the proposed methodology. To this end, consider the \((m+1)\)-st moment with \(\varvec{\mu }\rightarrow \varvec{\mu }^*\) as \(\gamma \uparrow \infty \):

$$\begin{aligned} \begin{aligned} \mathbb {E}_{\varvec{\mu }} \hat{\hbar }^{m+1}_k=\mathbb {E}_{\varvec{0}} \hat{\hbar }^{m}_k&=\textstyle \mathbb {E} \exp \left( \frac{m\varvec{\mu }^\top \varSigma ^{-1}\varvec{\mu }}{2}-m\varvec{\mu }^\top \varSigma ^{-1}(\varvec{Y}-\varvec{\nu })\right) \mathbb {I}\{S>\gamma , X_k=M\}\\&=\textstyle \exp \left( \frac{(m^2+m)\varvec{\mu }^\top \varSigma ^{-1}\varvec{\mu }}{2}\right) \mathbb {P}_{-m\varvec{\mu }}(S>\gamma , X_k=M)\\&\simeq \textstyle \exp \left( \frac{(m^2+m)(\ln (\gamma )-\nu _k)^2}{2\sigma _k^2}\right) \mathbb {P}_{-m\varvec{\mu }^*}(S>\gamma , X_k=M)\\ \end{aligned} \end{aligned}$$

Next, notice that the measure \(\mathbb {P}_{-m\varvec{\mu }^*}\) is equivalent to first simulating

$$\begin{aligned} Y_k\sim \mathsf {N}(\nu _k-m(\ln (\gamma )-\nu _k),\sigma _k^2) \end{aligned}$$

and then, given \(Y_k=y_k\), simulating the remaining components, denoted \(\varvec{Y}_{-k}\), from the nominal Gaussian density \(\phi _\varSigma (\varvec{y}-\varvec{\nu })\) conditional on \(Y_k=y_k\), that is, \(\varvec{Y}_{-k}\sim \phi _\varSigma (\varvec{y}-\varvec{\nu }|y_k)\). In other words, asymptotically, the effect of the change of measure induced by (12) is to modify the marginal distribution of \(X_k\) only (a small sampling sketch along these lines follows this proof). Thus, repeating the same argument used to prove part 1, we have

$$\begin{aligned} \mathbb {P}_{-m\varvec{\mu }^*}(S>\gamma , X_k=M)\simeq \mathbb {P}_{-m\varvec{\mu }^*}(Y_k>\ln \gamma )=\overline{\varPhi }\left( \frac{(m+1)(\ln \gamma -\nu _k)}{\sigma _k}\right) \end{aligned}$$

Therefore, as \(\gamma \uparrow \infty \),

$$\begin{aligned} \begin{aligned} \mathbb {E}_{\varvec{\mu }} \hat{\hbar }^{m+1}_k&\textstyle \simeq \exp \left( \frac{(m^2+m)(\ln (\gamma )-\nu _k)^2}{2\sigma _k^2}\right) \overline{\varPhi }\left( \frac{(m+1)(\ln \gamma -\nu _k)}{\sigma _k}\right) \\&=\textstyle \varTheta \left( \frac{1}{\ln \gamma }\exp \left( -\frac{(m+1)(\ln (\gamma )-\nu _k)^2}{2\sigma _k^2}\right) \right) =\varTheta (\ln ^m(\gamma )\hbar _k^{m+1}) \end{aligned} \end{aligned}$$

Then, part 3 of Lemma 2 follows from putting \(m=1\) and observing that

$$\begin{aligned} \begin{aligned} \frac{\mathbb {V}\mathrm {ar}(\hat{\hbar }_k)}{\hbar _k^2}&=\frac{\mathbb {E}_{\varvec{\mu }}\hat{\hbar }_k^2}{[\mathbb {P}(S>\gamma ,X_k=M)]^2}-1\simeq \frac{\mathbb {E}_{\varvec{\mu }}\hat{\hbar }_k^2}{[\mathbb {P}(X_k>\gamma )]^2}-1=\varTheta (\ln (\gamma )) \end{aligned} \end{aligned}$$

\(\square \)
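
The two-stage description of the measure \(\mathbb {P}_{-m\varvec{\mu }^*}\) used above (a shifted normal marginal for \(Y_k\), then the nominal conditional Gaussian for \(\varvec{Y}_{-k}\)) translates directly into a sampler. The Python sketch below is a minimal illustration of that step using the standard Gaussian conditioning formulas; the parameters \(\varvec{\nu }\), \(\varSigma \), \(\gamma \), \(k\) and \(m\) are hypothetical, and this is not the complete estimator of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters for illustration.
nu = np.array([0.0, 0.1, -0.2])
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
gamma, k, m = 50.0, 0, 1                       # threshold, tilted index, power m

d = len(nu)
sigma_k2 = Sigma[k, k]
idx = [j for j in range(d) if j != k]          # indices of Y_{-k}

# Ingredients of the *nominal* conditional of N(nu, Sigma) given Y_k = y_k.
beta = Sigma[idx, k] / sigma_k2                # regression coefficients of Y_{-k} on Y_k
Sigma_cond = Sigma[np.ix_(idx, idx)] - np.outer(Sigma[idx, k], Sigma[k, idx]) / sigma_k2
L_cond = np.linalg.cholesky(Sigma_cond)

def sample_shifted(n):
    """Sample Y under P_{-m mu*}: shifted marginal for Y_k, nominal conditional for Y_{-k}."""
    y = np.empty((n, d))
    # Step 1: Y_k ~ N(nu_k - m (ln gamma - nu_k), sigma_k^2).
    y[:, k] = nu[k] - m * (np.log(gamma) - nu[k]) + np.sqrt(sigma_k2) * rng.standard_normal(n)
    # Step 2: Y_{-k} | Y_k = y_k from the nominal conditional Gaussian.
    cond_mean = nu[idx] + np.outer(y[:, k] - nu[k], beta)
    y[:, idx] = cond_mean + rng.standard_normal((n, d - 1)) @ L_cond.T
    return y

print(sample_shifted(5))
```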

Lemma 3

We have \(\mathbb {P}(S>\gamma ,X_k=M<\gamma )=o(\mathbb {P}(X_k>\gamma ))\) as \(\gamma \uparrow \infty \).

Proof

Let \(\beta \in (0,1)\) and \(M_{-k}=\max _{j\not =k}X_j\). Then, using the facts:

$$\begin{aligned} \frac{\overline{\varPhi }(\ln (\gamma -\gamma ^\beta ))}{\overline{\varPhi }(\ln \gamma )}\simeq \exp \left( -\frac{\ln ^2(\gamma -\gamma ^\beta )-\ln ^2(\gamma )}{2}\right) \frac{\gamma -\beta \gamma ^\beta }{\gamma -\gamma ^\beta } \end{aligned}$$

and

$$\begin{aligned} \ln ^2(\gamma )-\ln ^2(\gamma -\gamma ^\beta )\simeq 2\frac{\ln (\gamma )}{\gamma ^{1-\beta }}+o\left( \frac{\ln (\gamma )}{\gamma ^{1-\beta }}\right) , \end{aligned}$$

we obtain \(\overline{\varPhi }(\ln (\gamma -\gamma ^\beta ))\simeq \overline{\varPhi }(\ln \gamma )\) for any \(\beta \in (0,1)\). More generally,

$$\begin{aligned} \mathbb {P}(\ln (\gamma -\gamma ^\beta )\le Y_k\le \ln \gamma )=o(\mathbb {P}(Y_k>\ln \gamma )). \end{aligned}$$

Then, we have

$$\begin{aligned} \begin{aligned} \mathbb {P}(S>\gamma ,X_k=M<\gamma )&=\textstyle \mathbb {P}(M_{-k}>\gamma ^\beta ,S>\gamma ,X_k=M<\gamma )+\mathbb {P}(M_{-k}<\gamma ^\beta ,S>\gamma ,X_k=M<\gamma )\\&\le \textstyle \mathbb {P}(\gamma ^\beta<M_{-k}<X_k<\gamma )+\mathbb {P}(M_{-k}<\gamma ^\beta ,\;\gamma -(d-1)M_{-k}<X_k<\gamma )\\&\le \textstyle \mathbb {P}(\gamma ^\beta<M_{-k},\;\gamma ^\beta<X_k)+\mathbb {P}(\gamma -(d-1)\gamma ^\beta<X_k<\gamma ) \end{aligned} \end{aligned}$$

Since for large enough \(\gamma \) there exists a \(\beta '\in (\beta ,1)\) such that \((d-1)\gamma ^\beta <\gamma ^{\beta '}\), we have

$$\begin{aligned} \mathbb {P}(\gamma -(d-1)\gamma ^\beta<X_k<\gamma )\le \mathbb {P}(\gamma -\gamma ^{\beta '}<X_k<\gamma )=o(\mathbb {P}(X_k>\gamma )) \end{aligned}$$

The proof will then be complete if we can find a \(\beta \in (0,1)\), such that (\(u=\ln \gamma \))

$$\begin{aligned} \mathbb {P}(M_{-k}>\gamma ^\beta ,X_k>\gamma ^\beta )= \mathbb {P}(\max _{j\not =k}Y_j>\beta u,Y_k>\beta u)=o(\mathbb {P}(Y_k>u)) \end{aligned}$$

Since \(\mathbb {P}(\max _{j\not =k}Y_j>\beta u,Y_k>\beta u)=\mathcal {O}\left( \sum _{j\not =k}\mathbb {P}(Y_j>\beta u,Y_k>\beta u)\right) \), it suffices to show that the bivariate normal probability \( \mathbb {P}(Y_j>\beta u,Y_k>\beta u)=o(\mathbb {P}(Y_k>u)) \) for some \(\beta \in (0,1)\). This follows from Lemma 4, which completes the proof. \(\square \)

Lemma 4

(Gaussian Tail Probability) Let \(Y_1\sim \mathsf {N}(\nu _1,\sigma _1^2)\) and \(Y_2\sim \mathsf {N}(\nu _2,\sigma _2^2)\) be jointly bivariate normal with correlation coefficient \(\rho \in (-1,1)\). Then, there exists an \(\alpha >1\) such that

$$\begin{aligned} \mathbb {P}(Y_1>\gamma ,Y_2>\gamma )=o(\mathbb {P}(Y_1>\alpha \gamma ) \wedge \mathbb {P}(Y_2>\alpha \gamma )), \end{aligned}$$

where \(a\wedge b\) stands for \(\min \{a,b\}\).

Proof

Without loss of generality, we may assume that \(\sigma _1>\sigma _2\), so that

$$\begin{aligned} \mathbb {P}(Y_1>\alpha \gamma ) \wedge \mathbb {P}(Y_2>\alpha \gamma )\simeq \mathbb {P}(Y_2>\alpha \gamma )=\textstyle \varTheta \left( \gamma ^{-1}\exp \left( -\,\frac{(\alpha \gamma -\nu _2)^2}{2\sigma _2^2}\right) \right) \end{aligned}$$

Define the convex quadratic program:

$$\begin{aligned} \begin{aligned}&\min _{\varvec{y}}\;\frac{1}{2}\varvec{y}^\top \varSigma ^{-1}\varvec{y}\\&\text {subject to: }\varvec{y}\ge \gamma \varvec{1}-\varvec{\nu }, \end{aligned} \end{aligned}$$
(20)

where \(\varSigma _{11}=\sigma _1^2,\varSigma _{12}=\varSigma _{21}=\rho \sigma _1\sigma _2,\varSigma _{22}=\sigma _2^2\). Denote the solution by \(\varvec{y}^*\). Then we have the following asymptotic result of Hashorva and Hüsler (2003):

$$\begin{aligned} \textstyle \mathbb {P}(Y_1>\gamma ,Y_2>\gamma )=\varTheta \left( \gamma ^{-d_1}\exp \left( -\,\frac{(\varvec{y}^*)^\top \varSigma ^{-1}\varvec{y}^*}{2}\right) \right) , \end{aligned}$$

where \(d_1\in \{1,2\}\) is the number of active constraints in (20). Next, consider the quadratic programming problem which is the same as (20), except that we drop the first constraint (that is, we drop \(y_1\ge \gamma -\nu _1\)). The minimum of this second quadratic programming problem is \(\frac{(\gamma -\nu _2)^2}{2\sigma _2^2}\), and is achieved at the point \( \tilde{\varvec{y}}= \left( (\gamma -\nu _1)\rho \sigma _2/\sigma _1,\gamma -\nu _2 \right) ^\top \). Note that since \(\tilde{y}_1<\gamma -\nu _1\), we have dropped an active constraint. Since dropping an active constraint in a convex quadratic minimization achieves an even lower minimum, we have the strict inequality between the minima of the two quadratic minimization problems:

$$\begin{aligned} 0<\frac{(\gamma -\nu _2)^2}{2\sigma _2^2}<\frac{(\varvec{y}^*)^\top \varSigma ^{-1}\varvec{y}^*}{2} \end{aligned}$$

for any large enough \(\gamma >\nu _2\). Hence, after rearrangement of the last inequality, we have

$$\begin{aligned} \frac{\nu _2+\sigma _2\sqrt{(\varvec{y}^*)^\top \varSigma ^{-1}\varvec{y}^*}}{\gamma }>1, \end{aligned}$$

and therefore there clearly exists an \(\alpha \) in the range

$$\begin{aligned} 1<\alpha <\frac{\nu _2+\sigma _2\sqrt{(\varvec{y}^*)^\top \varSigma ^{-1}\varvec{y}^*}}{\gamma } \end{aligned}$$

For such an \(\alpha \) (in the above range), we have

$$\begin{aligned} \frac{(\alpha \gamma -\nu _2)^2}{2\sigma _2^2}<\frac{(\varvec{y}^*)^\top \varSigma ^{-1}\varvec{y}^*}{2} \end{aligned}$$

Therefore, \( \exp (-\frac{(\varvec{y}^*)^\top \varSigma ^{-1}\varvec{y}^*}{2})=o\left( \exp (-\frac{(\alpha \gamma -\nu _2)^2}{2\sigma _2^2})\right) ,\; \gamma \uparrow \infty \), and the exponential rate of decay of \(\mathbb {P}(Y_1>\gamma ,Y_2>\gamma )\) is greater than that of \(\mathbb {P}(Y_2>\alpha \gamma )\). This completes the proof. \(\square \)
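
As a rough numerical sanity check of Lemma 4 (not part of the proof), the joint tail \(\mathbb {P}(Y_1>\gamma ,Y_2>\gamma )\) can be compared with \(\mathbb {P}(Y_1>\alpha \gamma )\wedge \mathbb {P}(Y_2>\alpha \gamma )\) using a bivariate normal routine. The Python sketch below uses hypothetical parameters and a hand-picked \(\alpha >1\); scipy's numerical multivariate normal cdf limits it to moderate thresholds.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Hypothetical parameters: unequal variances, mild positive correlation.
nu1, nu2 = 0.0, 0.0
s1, s2, rho = 1.2, 1.0, 0.3
cov = np.array([[s1**2, rho * s1 * s2],
                [rho * s1 * s2, s2**2]])
alpha = 1.1                                    # an illustrative alpha > 1

def joint_upper(g):
    """P(Y1 > g, Y2 > g), computed as the cdf of (-Y1, -Y2) at (-g, -g)."""
    return multivariate_normal.cdf([-g, -g], mean=[-nu1, -nu2], cov=cov)

# Keep gamma moderate: the numerical cdf is unreliable for tiny probabilities.
for g in [1.0, 2.0, 3.0]:
    num = joint_upper(g)
    den = min(norm.sf(alpha * g, loc=nu1, scale=s1),
              norm.sf(alpha * g, loc=nu2, scale=s2))
    print(f"gamma={g:3.1f}  joint={num:.3e}  min marginal at alpha*gamma={den:.3e}  ratio={num/den:.3e}")
```

For these (assumed) parameters the printed ratio should shrink as \(\gamma \) grows, in line with the lemma.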


About this article


Cite this article

Botev, Z.I., Salomone, R. & Mackinlay, D. Fast and accurate computation of the distribution of sums of dependent log-normals. Ann Oper Res 280, 19–46 (2019). https://doi.org/10.1007/s10479-019-03161-x

