
The superposition of Markovian arrival processes: moments and the minimal Laplace transform

Original Research · Annals of Operations Research

Abstract

The superposition of two independent Markovian arrival processes (MAPs) is itself a MAP whose Markovian representation is given by the Kronecker sum of the transition rate matrices of the component processes. The moments of stationary intervals of the superposition can be obtained by differentiating the Laplace transform (LT), which is expressed in terms of the transition rate matrices. In this paper, we propose a streamlined procedure for determining the minimal LT of the merged process in terms of the minimal LT coefficients of the component processes. Combined with the closed-form transformation between moments and LT coefficients, this result enables us to determine the moments of the superposed process from the moments of the component processes. The main contribution is that the whole procedure can be implemented without explicit Markovian representations. To transform the minimal LT coefficients of the component processes into the minimal LT representation of the merged process, we also introduce another minimal representation. A numerical example illustrates the procedure.
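As a concrete illustration of the Kronecker-sum construction described above, the following sketch superposes a two-state MAP and a Poisson process and checks that the result is again a MAP whose arrival rate is the sum of the component rates. The matrices and the helper `stationary` are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def stationary(Q):
    # Solve pi Q = 0 with pi summing to one (stacked least-squares system).
    m = Q.shape[0]
    A = np.vstack([Q.T, np.ones(m)])
    b = np.zeros(m + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Two independent MAPs: D0 holds transitions without arrivals, D1 with arrivals.
D0_A = np.array([[-3.0, 1.0], [0.5, -2.0]]); D1_A = np.array([[1.5, 0.5], [0.5, 1.0]])
D0_B = np.array([[-1.0]]);                   D1_B = np.array([[1.0]])   # Poisson(1)

# Superposition: Kronecker sums of the component matrices.
I_A, I_B = np.eye(D0_A.shape[0]), np.eye(D0_B.shape[0])
D0 = np.kron(D0_A, I_B) + np.kron(I_A, D0_B)
D1 = np.kron(D1_A, I_B) + np.kron(I_A, D1_B)

# D0 + D1 must again be a conservative generator (rows sum to zero), and the
# arrival rate of the superposition is the sum of the component rates.
row_sums = (D0 + D1).sum(axis=1)
rate = lambda d0, d1: stationary(d0 + d1) @ d1 @ np.ones(d0.shape[0])
lam_A, lam_B, lam_sup = rate(D0_A, D1_A), rate(D0_B, D1_B), rate(D0, D1)
```

Note the index-free construction: only the component matrices are needed, which is exactly the explicit Markovian representation the paper's moment-based procedure avoids.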



Acknowledgements

This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2017S1A5A2A01023654) and by the Ajou University research fund.

Author information

Correspondence to Sunkyo Kim.


Appendix

1.1 Proof of Lemma 1

Proof

Let \(\tilde{f}(s) \equiv h(s)/g(s)\). Then, differentiating both sides of \(\tilde{f}(s) g(s)= h(s) \) \(k\) times by Leibniz’s rule gives

$$\begin{aligned} \sum _{i=0}^{k} \left( {\begin{array}{c}k\\ i\end{array}}\right) \tilde{f}^{(k-i)}(s)g^{(i)}(s)&=h^{(k)}(s) \end{aligned}$$

by which we have

$$\begin{aligned} \sum _{i=0}^{k} \frac{\tilde{f}^{(k-i)}(0) }{(k-i)!} ~ \frac{g^{(i)}(0)}{i!}&= \frac{h^{(k)}(0)}{k!} \end{aligned}$$

and the result follows by Eqs. (2.5) and (3.1).\(\square \)
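The convolution identity in this proof can be checked on a concrete example. The sketch below uses the LT of an equal mixture of Exp(1) and Exp(2), i.e. \(\tilde{f}(s)=h(s)/g(s)\) with \(h(s)=2+\tfrac{3}{2}s\) and \(g(s)=s^2+3s+2\); this example and the helper names `t` and `conv` are ours, not the paper's.

```python
from fractions import Fraction as F

# f(s) = 0.5/(1+s) + 0.5*2/(2+s): LT of an equal mixture of Exp(1) and Exp(2).
# Over a common denominator, f = h/g with
g = [F(2), F(3), F(1)]          # g(s) = 2 + 3s + s^2
h = [F(2), F(3, 2)]             # h(s) = 2 + (3/2)s

# Taylor coefficients t_k = f^{(k)}(0)/k! from the known moments of the mixture:
# E[T^k] = 0.5*k! + 0.5*k!/2^k, so t_k = (-1)^k * (1/2) * (1 + 2^{-k}).
def t(k):
    return F((-1) ** k, 2) * (1 + F(1, 2 ** k))

# Leibniz identity from the proof: sum_i t_{k-i} g_i equals the k-th
# coefficient of h (zero beyond deg h).
def conv(k):
    return sum(t(k - i) * g[i] for i in range(min(k, 2) + 1))

checks = [conv(k) == (h[k] if k < len(h) else 0) for k in range(8)]
```

Exact rational arithmetic (`fractions`) avoids any floating-point tolerance in the comparison.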

1.2 Proof of Corollary 1

Proof

In order to determine \((\varvec{a}, \varvec{b})\), we need \(2n-1\) independent equations. For \(k = 1, 2,\ldots ,n-1\), Eq. (3.2) can be written as \(\textbf{R}_2 \varvec{a} =\varvec{b}\). For \(k = n, n+1,\ldots ,2n-1\), Eq. (3.2) can be written as

$$\begin{aligned} \sum _{i=0}^{n-1} (-1)^{k-i} r_{k-i} a_i = (-1)^{k-n+1} r_{k-n} \end{aligned}$$

or in matrix form

$$\begin{aligned} \left[ \begin{array}{c c c c c c } (-1)^n r_n &{} (-1)^{n-1}r_{n-1} &{} \cdots &{} r_2 &{} -r_1 \\ (-1)^{n+1} r_{n+1} &{} (-1)^n r_n &{} \cdots &{} -r_3 &{} r_2 \\ \vdots &{}\vdots &{} \ddots &{} \vdots &{} \vdots \\ r_{2n-2} &{} -r_{2n-3} &{} \cdots &{} (-1)^n r_n &{} (-1)^{n-1} r_{n-1} \\ -r_{2n-1} &{} r_{2n-2} &{} \cdots &{} (-1)^{n+1} r_{n+1} &{} (-1)^n r_n \\ \end{array} \right] \left[ \begin{array}{c} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_{n-1} \end{array} \right] = \left[ \begin{array}{c} -r_0 \\ r_1 \\ -r_2 \\ \vdots \\ (-1)^n r_{n-1} \end{array} \right] \end{aligned}$$

where \(r_0 =1\). By multiplying every other equation by (\(-1\)) starting from the first one, we have \( \textbf{R}_1 \varvec{a} = \varvec{r}_{0..n-1}\). That is, for \(1 \le k \le 2n-1\), Eq. (3.2) can be written as

$$\begin{aligned} \left[ \begin{array}{c} \textbf{R}_2 \\ \textbf{R}_1 \end{array} \right] \varvec{a} = \left[ \begin{array}{c} \varvec{b} \\ \varvec{r}_{0..n-1} \end{array} \right] . \end{aligned}$$

\(\square \)
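Continuing the same hyperexponential example (\(h(s)=2+\tfrac{3}{2}s\), \(g(s)=s^2+3s+2\), so \(n=2\) and \(\varvec{a}=(2,3)\); the example is ours, not the paper's), the equations for \(k=n,\ldots ,2n-1\) can be assembled from the reduced moments \(r_k = E[T^k]/k!\) and solved to recover \(\varvec{a}\):

```python
from fractions import Fraction as F

# Reduced moments of the equal mixture of Exp(1) and Exp(2):
# r_k = E[T^k]/k! = (1 + 2^{-k})/2, with r_0 = 1.
n = 2
r = [F(1, 2) * (1 + F(1, 2 ** k)) for k in range(2 * n)]

# Rows k = n, ..., 2n-1: sum_i (-1)^{k-i} r_{k-i} a_i = (-1)^{k-n+1} r_{k-n}.
M = [[(-1) ** (k - i) * r[k - i] for i in range(n)] for k in range(n, 2 * n)]
rhs = [(-1) ** (k - n + 1) * r[k - n] for k in range(n, 2 * n)]

# Solve the resulting 2x2 system exactly by Cramer's rule.
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
a0 = (rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det
a1 = (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det
```

The recovered \((a_0, a_1) = (2, 3)\) matches the denominator \(g(s)=s^2+3s+2\), illustrating how the LT denominator is determined from moments alone.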

1.3 Proof of Lemma 2

Proof

Let \(w(s,t) = \sum _{i=1}^{n-1}\sum _{j=1}^{n-1} c_{i,j} s^i t^j \), \( h(s) =\sum _{i=1}^{n-1} b_{i} s^i+ a_0 \), \(g(s)=s^n +\sum _{i=0}^{n-1} a_i s^i\). Then, the joint LT in Eq. (2.12) can be written as

$$\begin{aligned} \tilde{f} (s, t)&= \frac{w(s,t) + a_0 (h(s)+ h(t))- a_0^2}{ g(s)g(t)}. \end{aligned}$$

Let \(w_{k, l}(s,t) = \partial ^{k+l} w(s,t)/( \partial s^k \partial t^l)\) for which we have

$$\begin{aligned} \frac{w_{k, l}(0,0)}{k!l!}&= \left\{ \begin{array}{l l} c_{k,l} &{} \text{ for } 0 \le k, l \le n-1 \\ 0 &{} \text{ for } k \ge n \text{ or } l \ge n. \end{array} \right. \end{aligned}$$

By Leibniz’s rule with respect to s and t, it can be shown that

$$\begin{aligned} w_{k, l}(s,t)&= \sum _{i=0}^{k} \sum _{j=0}^{l} \left( {\begin{array}{c}k\\ i\end{array}}\right) \left( {\begin{array}{c}l\\ j\end{array}}\right) \tilde{f}_{i,j}(s,t) g^{(k-i)}(s)g^{(l-j)}(t) \end{aligned}$$

by which

$$\begin{aligned} \frac{w_{k, l}(0,0)}{k!l!}&= \sum _{i=0}^{k} \sum _{j=0}^{l} \frac{\tilde{f}_{i,j}(0,0)}{i! j!} \frac{g^{(k-i)} (0)}{(k-i)!} \frac{g^{(l-j)}(0)}{(l-j)!} \end{aligned}$$

and the result follows by Eqs. (2.6) and (3.1). \(\square \)

1.4 Proof of Lemma 3

Proof

Note that

$$\begin{aligned} (\textbf{D}_0^\oplus )^k \textbf{D}_1^\oplus&= (\hat{\textbf{D}}_0 \otimes \textbf{I}_n + \textbf{I}_m \otimes \check{\textbf{D}}_0)^k (\hat{\textbf{D}}_1 \otimes \textbf{I}_n + \textbf{I}_m \otimes \check{\textbf{D}}_1) \nonumber \\&=\sum _{i=0}^k \left( {\begin{array}{c}k\\ i\end{array}}\right) (\hat{\textbf{D}}_0^{k-i} \otimes \check{\textbf{D}}_0^i)(\hat{\textbf{D}}_1 \otimes \textbf{I}_n + \textbf{I}_m \otimes \check{\textbf{D}}_1) \nonumber \\&=\sum _{i=0}^k \left( {\begin{array}{c}k\\ i\end{array}}\right) \left( \hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \otimes \check{\textbf{D}}_0^i + \hat{\textbf{D}}_0^{k-i} \otimes \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \right) . \end{aligned}$$
(A.1)

Let \(\varvec{e}^\otimes = \varvec{e}_m \otimes \varvec{e}_n\). The subscript of \(\varvec{e}_m\) and \(\varvec{e}_n\) shall be suppressed below. Note that \(\hat{\textbf{D}}_0 \varvec{e}= -\hat{\textbf{D}}_1 \varvec{e} \) since \(\hat{\textbf{Q}} \varvec{e} = (\hat{\textbf{D}}_0+\hat{\textbf{D}}_1) \varvec{e} = \textbf{0}\). By Eq. (2.2) and the definition of \(\hat{y}_i\),

$$\begin{aligned} \hat{\varvec{p}}\hat{\textbf{D}}_0^i \varvec{e}&= -\hat{\varvec{p}}\hat{\textbf{D}}_0^{i-1} \hat{\textbf{D}}_1 \varvec{e} = - \hat{y}_{i-1},\\ \hat{\varvec{\pi }}\hat{\textbf{D}}_0^i \varvec{e}&= \hat{\varvec{\pi }}\hat{\textbf{D}}_0 \hat{\textbf{D}}_0^{i-1} \varvec{e} = -\hat{\lambda }_A \hat{\varvec{p}} \hat{\textbf{D}}_0^{i-1} \varvec{e} = \hat{\lambda }_A \hat{y}_{i-2}, \\ \hat{\varvec{\pi }}\hat{\textbf{D}}_0^i \hat{\textbf{D}}_1 \varvec{e}&= -\hat{\varvec{\pi }}\hat{\textbf{D}}_0^{i+1} \varvec{e} = - \hat{\lambda }_A \hat{y}_{i-1}. \end{aligned}$$

Likewise, \(\check{\varvec{p}}\check{\textbf{D}}_0^i \varvec{e} = - \check{y}_{i-1}\), \(\check{\varvec{\pi }}\check{\textbf{D}}_0^i \varvec{e} = \check{\lambda }_A \check{y}_{i-2}\), and \(\check{\varvec{\pi }}\check{\textbf{D}}_0^i \check{\textbf{D}}_1 \varvec{e} = - \check{\lambda }_A \check{y}_{i-1}\). By these identities and the definition of \(\hat{y}_k\) and \(\check{y}_k\),

$$\begin{aligned} (\hat{\varvec{p}}\otimes \check{\varvec{\pi }}) \left( \hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \otimes \check{\textbf{D}}_0^i + \hat{\textbf{D}}_0^{k-i} \otimes \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \right) \varvec{e}^\otimes&=\hat{\varvec{p}}\hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \varvec{e} \check{\varvec{\pi }} \check{\textbf{D}}_0^i \varvec{e} + \hat{\varvec{p}} \hat{\textbf{D}}_0^{k-i}\varvec{e} \check{\varvec{\pi }} \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \varvec{e} \nonumber \\&=\hat{y}_{k-i} \check{\lambda }_A \check{y}_{i-2}+ \hat{y}_{k-i-1}\check{\lambda }_A \check{y}_{i-1}, \end{aligned}$$
(A.2)
$$\begin{aligned} (\hat{\varvec{\pi }}\otimes \check{\varvec{p}}) \left( \hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \otimes \check{\textbf{D}}_0^i + \hat{\textbf{D}}_0^{k-i} \otimes \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \right) \varvec{e}^\otimes&=\hat{\varvec{\pi }}\hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \varvec{e} \check{\varvec{p}} \check{\textbf{D}}_0^i \varvec{e} + \hat{\varvec{\pi }}\hat{\textbf{D}}_0^{k-i} \varvec{e} \check{\varvec{p}} \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \varvec{e} \nonumber \\&= \hat{\lambda }_A \hat{y}_{k-i-1} \check{y}_{i-1} + \hat{\lambda }_A \hat{y}_{k-i-2} \check{y}_i. \end{aligned}$$
(A.3)

Let \(\left( {\begin{array}{c}n\\ i\end{array}}\right) = 0 \) for \(i<0\) or \(i> n\). Then, by Pascal’s identity,

$$\begin{aligned} \sum _{i=0}^k \left( {\begin{array}{c}k\\ i\end{array}}\right) \left( \hat{y}_{k-i} \check{y}_{i-2} + \hat{y}_{k-i-1} \check{y}_{i-1} \right)&=\sum _{i=0}^k \left( {\begin{array}{c}k\\ i\end{array}}\right) \hat{y}_{k-i} \check{y}_{i-2} + \sum _{i=0}^k \left( {\begin{array}{c}k\\ i\end{array}}\right) \hat{y}_{k-i-1} \check{y}_{i-1} \nonumber \\&=\sum _{i=0}^k \left( {\begin{array}{c}k\\ i\end{array}}\right) \hat{y}_{k-i} \check{y}_{i-2} + \sum _{i=1}^{k+1} \left( {\begin{array}{c}k\\ i-1\end{array}}\right) \hat{y}_{k-i} \check{y}_{i-2} \nonumber \\&=\sum _{i=0}^{k+1} \left( {\begin{array}{c}k\\ i\end{array}}\right) \hat{y}_{k-i} \check{y}_{i-2} + \sum _{i=0}^{k+1} \left( {\begin{array}{c}k\\ i-1\end{array}}\right) \hat{y}_{k-i} \check{y}_{i-2} \nonumber \\&=\sum _{i=0}^{k+1} \left( {\begin{array}{c}k+1\\ i\end{array}}\right) \hat{y}_{k-i} \check{y}_{i-2}. \end{aligned}$$
(A.4)

Likewise,

$$\begin{aligned} \sum _{i=0}^k \left( {\begin{array}{c}k\\ i\end{array}}\right) \left( \hat{y}_{k-i-1} \check{y}_{i-1} + \hat{y}_{k-i-2} \check{y}_i \right)&=\sum _{i=0}^{k+1} \left( {\begin{array}{c}k+1\\ i\end{array}}\right) \hat{y}_{k-i-1} \check{y}_{i-1}. \end{aligned}$$
(A.5)

By Eqs. (5.1), (A.1), (A.2), (A.3), (A.4), and (A.5),

$$\begin{aligned} y^\oplus _k = \varvec{p}^\oplus (\textbf{D}_0^\oplus )^k \textbf{D}_1^\oplus \varvec{e}&=\frac{1}{\lambda _A^\oplus } \left( \hat{\lambda }_A (\hat{\varvec{p}}\otimes \check{\varvec{\pi }}) + \check{\lambda }_A (\hat{\varvec{\pi }}\otimes \check{\varvec{p}}) \right) (\textbf{D}_0^\oplus )^k \textbf{D}_1^\oplus \varvec{e} \\&=\frac{\hat{\lambda }_A \check{\lambda }_A}{\lambda _A^\oplus } \sum _{i=0}^k \left( {\begin{array}{c}k\\ i\end{array}}\right) \left( \hat{y}_{k-i} \check{y}_{i-2} + 2\hat{y}_{k-i-1} \check{y}_{i-1} + \hat{y}_{k-i-2} \check{y}_i \right) \\&=\frac{\hat{\lambda }_A \check{\lambda }_A}{\lambda _A^\oplus } \sum _{i=0}^{k+1} \left( {\begin{array}{c}k+1\\ i\end{array}}\right) \left( \hat{y}_{k-i} \check{y}_{i-2} + \hat{y}_{k-i-1} \check{y}_{i-1}\right) \\&=\frac{\hat{\lambda }_A \check{\lambda }_A}{\lambda _A^\oplus } \sum _{i=0}^{k+2} \left( {\begin{array}{c}k+2\\ i\end{array}}\right) \hat{y}_{k-i} \check{y}_{i-2} \end{aligned}$$

where the last equality is due to the same argument used in (A.4). \(\square \)
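Lemma 3 lends itself to a numerical check. The sketch below builds two small MAPs, forms their superposition by Kronecker sums, and compares \(y^\oplus_k\) computed directly against the final expression above, using the boundary conventions \(y_{-1}=-1\) and \(y_{-2}=1/\lambda _A\) implied by the identities in the proof. All matrix values and helper names are illustrative assumptions of ours.

```python
import numpy as np
from math import comb

def map_stats(D0, D1):
    """Stationary vector pi of D0+D1, arrival rate lam, post-arrival vector p."""
    m = D0.shape[0]
    A = np.vstack([(D0 + D1).T, np.ones(m)])
    b = np.zeros(m + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    lam = pi @ D1 @ np.ones(m)
    return pi, lam, pi @ D1 / lam

def y(D0, D1, k):
    """y_k = p D0^k D1 e, with y_{-1} = -1 and y_{-2} = 1/lam."""
    _, lam, p = map_stats(D0, D1)
    if k >= 0:
        return p @ np.linalg.matrix_power(D0, k) @ D1 @ np.ones(D0.shape[0])
    return -1.0 if k == -1 else (1.0 / lam if k == -2 else 0.0)

# Component MAPs (illustrative numbers).
D0A = np.array([[-3.0, 1.0], [0.5, -2.0]]); D1A = np.array([[1.5, 0.5], [0.5, 1.0]])
D0B = np.array([[-4.0, 0.0], [1.0, -2.0]]); D1B = np.array([[2.0, 2.0], [0.5, 0.5]])

# Superposition via Kronecker sums.
IA = IB = np.eye(2)
D0S = np.kron(D0A, IB) + np.kron(IA, D0B)
D1S = np.kron(D1A, IB) + np.kron(IA, D1B)

lamA = map_stats(D0A, D1A)[1]
lamB = map_stats(D0B, D1B)[1]
lamS = map_stats(D0S, D1S)[1]

def y_formula(k):
    # Right-hand side of Lemma 3.
    s = sum(comb(k + 2, i) * y(D0A, D1A, k - i) * y(D0B, D1B, i - 2)
            for i in range(k + 3))
    return lamA * lamB / lamS * s

ok = all(np.isclose(y(D0S, D1S, k), y_formula(k)) for k in range(4))
```

The left side uses the explicit Kronecker-sum representation; the right side uses only the component quantities \(\hat{y}_k\), \(\check{y}_k\), \(\hat{\lambda }_A\), \(\check{\lambda }_A\), which is the point of the lemma.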

1.5 Proof of Lemma 4

Proof

By Eq. (A.1),

$$\begin{aligned} (\textbf{D}_0^\oplus )^k \textbf{D}_1^\oplus (\textbf{D}_0^\oplus )^l \textbf{D}_1^\oplus&= \sum _{i=0}^k \sum _{j=0}^l \left( {\begin{array}{c}k\\ i\end{array}}\right) \left( {\begin{array}{c}l\\ j\end{array}}\right) \left( \hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{l-j} \hat{\textbf{D}}_1 \otimes \check{\textbf{D}}_0^{i+j} +\hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{l-j} \otimes \check{\textbf{D}}_0^{i+j} \check{\textbf{D}}_1 \right. \nonumber \\&~~~~~~~~~~~~~~~ \left. + \hat{\textbf{D}}_0^{k-i+l-j} \hat{\textbf{D}}_1 \otimes \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j + \hat{\textbf{D}}_0^{k-i+l-j} \otimes \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j \check{\textbf{D}}_1 \right) . \end{aligned}$$

By Eq. (2.2) and the definition of \(\hat{z}_{ij}\),

$$\begin{aligned} \hat{\varvec{p}} \hat{\textbf{D}}_0^i \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^j\varvec{e}&=-\hat{\varvec{p}} \hat{\textbf{D}}_0^i \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{j-1}\hat{\textbf{D}}_1 \varvec{e} =- \hat{z}_{i,j-1},\\ \hat{\varvec{\pi }}\hat{\textbf{D}}_0^i \varvec{e}&= \hat{\varvec{\pi }}\hat{\textbf{D}}_0 \hat{\textbf{D}}_0^{i-1} \varvec{e} = -\hat{\lambda }_A \hat{\varvec{p}}\hat{\textbf{D}}_0^{i-1} \varvec{e} = \hat{\lambda }_A \hat{y}_{i-2}, \\ \hat{\varvec{\pi }}\hat{\textbf{D}}_0^i \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^j\varvec{e}&= \hat{\lambda }_A \hat{\varvec{p}} \hat{\textbf{D}}_0^{i-1}\hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{j-1} \hat{\textbf{D}}_1 \varvec{e} = \hat{\lambda }_A \hat{z}_{i-1,j-1},\\ \hat{\varvec{\pi }}\hat{\textbf{D}}_0^i \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^j \hat{\textbf{D}}_1 \varvec{e}&= -\hat{\lambda }_A \hat{\varvec{p}} \hat{\textbf{D}}_0^{i-1}\hat{\textbf{D}}_1 \hat{\textbf{D}}_0^j \hat{\textbf{D}}_1 \varvec{e} = -\hat{\lambda }_A \hat{z}_{i-1,j}. \end{aligned}$$

Likewise, \(\check{\varvec{p}} \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j\varvec{e} =-\check{z}_{i,j-1}\), \(\check{\varvec{\pi }}\check{\textbf{D}}_0^i \varvec{e} = \check{\lambda }_A \check{y}_{i-2}\), \(\check{\varvec{\pi }}\check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j\varvec{e} = \check{\lambda }_A \check{z}_{i-1,j-1}\), and \(\check{\varvec{\pi }}\check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j \check{\textbf{D}}_1 \varvec{e} = -\check{\lambda }_A \check{z}_{i-1,j}\). By these identities and the definition of \(\hat{y}_k\), \(\check{y}_k\), \(\hat{z}_{ij}\), and \(\check{z}_{ij}\), we have

$$\begin{aligned} (\hat{\varvec{p}}\otimes \check{\varvec{\pi }}) ( \hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{l-j} \hat{\textbf{D}}_1 \otimes \check{\textbf{D}}_0^{i+j} )\varvec{e}^\otimes&= \hat{\varvec{p}} \hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{l-j} \hat{\textbf{D}}_1 \varvec{e} \check{\varvec{\pi }} \check{\textbf{D}}_0^{i+j} \varvec{e} = \hat{z}_{k-i,l-j} \check{\lambda }_A \check{y}_{i+j-2},\\ (\hat{\varvec{p}}\otimes \check{\varvec{\pi }})(\hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{l-j} \otimes \check{\textbf{D}}_0^{i+j} \check{\textbf{D}}_1)\varvec{e}^\otimes&=\hat{\varvec{p}} \hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{l-j} \varvec{e} \check{\varvec{\pi }} \check{\textbf{D}}_0^{i+j} \check{\textbf{D}}_1 \varvec{e} =\hat{z}_{k-i,l-j-1} \check{\lambda }_A \check{y}_{i+j-1},\\ (\hat{\varvec{p}}\otimes \check{\varvec{\pi }})(\hat{\textbf{D}}_0^{k-i+l-j} \hat{\textbf{D}}_1 \otimes \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j )\varvec{e}^\otimes&= \hat{\varvec{p}}\hat{\textbf{D}}_0^{k-i+l-j} \hat{\textbf{D}}_1 \varvec{e} \check{\varvec{\pi }} \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j \varvec{e} = \hat{y}_{k-i+l-j} \check{\lambda }_A \check{z}_{i-1,j-1},\\ (\hat{\varvec{p}}\otimes \check{\varvec{\pi }})(\hat{\textbf{D}}_0^{k-i+l-j} \otimes \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j \check{\textbf{D}}_1 ) \varvec{e}^\otimes&= \hat{\varvec{p}} \hat{\textbf{D}}_0^{k-i+l-j} \varvec{e} \check{\varvec{\pi }} \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j \check{\textbf{D}}_1 \varvec{e} = \hat{y}_{k-i+l-j-1} \check{\lambda }_A \check{z}_{i-1,j},\\ (\hat{\varvec{\pi }}\otimes \check{\varvec{p}}) ( \hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{l-j} \hat{\textbf{D}}_1 \otimes \check{\textbf{D}}_0^{i+j} )\varvec{e}^\otimes&= \hat{\varvec{\pi }} \hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{l-j} \hat{\textbf{D}}_1 \varvec{e} \check{\varvec{p}} \check{\textbf{D}}_0^{i+j} \varvec{e}= \hat{\lambda }_A \hat{z}_{k-i-1,l-j} \check{y}_{i+j-1},\\ (\hat{\varvec{\pi }}\otimes \check{\varvec{p}}) (\hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{l-j} \otimes \check{\textbf{D}}_0^{i+j} \check{\textbf{D}}_1)\varvec{e}^\otimes&= \hat{\varvec{\pi }} \hat{\textbf{D}}_0^{k-i} \hat{\textbf{D}}_1 \hat{\textbf{D}}_0^{l-j} \varvec{e} \check{\varvec{p}} \check{\textbf{D}}_0^{i+j} \check{\textbf{D}}_1 \varvec{e} = \hat{\lambda }_A \hat{z}_{k-i-1,l-j-1} \check{y}_{i+j} ,\\ (\hat{\varvec{\pi }}\otimes \check{\varvec{p}}) (\hat{\textbf{D}}_0^{k-i+l-j} \hat{\textbf{D}}_1 \otimes \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j )\varvec{e}^\otimes&= \hat{\varvec{\pi }} \hat{\textbf{D}}_0^{k-i+l-j} \hat{\textbf{D}}_1 \varvec{e} \check{\varvec{p}} \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j \varvec{e}= \hat{\lambda }_A \hat{y}_{k-i+l-j-1} \check{z}_{i,j-1}, \\ (\hat{\varvec{\pi }}\otimes \check{\varvec{p}}) (\hat{\textbf{D}}_0^{k-i+l-j} \otimes \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j \check{\textbf{D}}_1 )\varvec{e}^\otimes&= \hat{\varvec{\pi }}\hat{\textbf{D}}_0^{k-i+l-j} \varvec{e} \check{\varvec{p}} \check{\textbf{D}}_0^i \check{\textbf{D}}_1 \check{\textbf{D}}_0^j \check{\textbf{D}}_1 \varvec{e} =\hat{\lambda }_A \hat{y}_{k-i+l-j-2} \check{z}_{i,j}. \end{aligned}$$

Then, by the same argument as used in (A.4) with Pascal’s identity,

$$\begin{aligned} y^\oplus _{k,l}&=\varvec{p}^\oplus (\textbf{D}_0^\oplus )^k \textbf{D}_1^\oplus (\textbf{D}_0^\oplus )^l \textbf{D}_1^\oplus \varvec{e}\\&=\frac{\hat{\lambda }_A \check{\lambda }_A}{\lambda _A^\oplus } \sum _{i=0}^k \sum _{j=0}^l \left( {\begin{array}{c}k\\ i\end{array}}\right) \left( {\begin{array}{c}l\\ j\end{array}}\right) \left( \hat{z}_{k-i,l-j} \check{y}_{i+j-2}+ \hat{z}_{k-i,l-j-1} \check{y}_{i+j-1} \right. \\&\left. + \hat{y}_{k-i+l-j} \check{z}_{i-1,j-1}+ \hat{y}_{k-i+l-j-1} \check{z}_{i-1,j} \right. \\&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \left. + \hat{z}_{k-i-1,l-j} \check{y}_{i+j-1}+ \hat{z}_{k-i-1,l-j-1} \check{y}_{i+j} \right. \\&\left. + \hat{y}_{k-i+l-j-1} \check{z}_{i,j-1}+ \hat{y}_{k-i+l-j-2} \check{z}_{i,j} \right) \\&=\frac{\hat{\lambda }_A \check{\lambda }_A}{\lambda _A^\oplus } \sum _{i=0}^{k+1} \sum _{j=0}^{l+1} \left( {\begin{array}{c}k+1\\ i\end{array}}\right) \left( {\begin{array}{c}l+1\\ j\end{array}}\right) \left( \hat{z}_{k-i,l-j} \check{y}_{i+j-2} + \hat{y}_{k-i+l-j} \check{z}_{i-1,j-1}\right) . \end{aligned}$$

\(\square \)
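Lemma 4 can be checked numerically in the same spirit as Lemma 3, now for the joint quantities \(z_{k,l} = \varvec{p}\, \textbf{D}_0^k \textbf{D}_1 \textbf{D}_0^l \textbf{D}_1 \varvec{e}\). The boundary conventions \(z_{-1,l}=-y_l\) and \(z_{k,-1}=-y_k\) follow from the identities above; all matrix values and helper names below are illustrative assumptions of ours.

```python
import numpy as np
from math import comb

def map_stats(D0, D1):
    # Stationary vector of D0+D1, arrival rate lam, post-arrival vector p.
    m = D0.shape[0]
    A = np.vstack([(D0 + D1).T, np.ones(m)])
    b = np.zeros(m + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    lam = pi @ D1 @ np.ones(m)
    return pi, lam, pi @ D1 / lam

def y(D0, D1, k):
    # y_k = p D0^k D1 e, with y_{-1} = -1 and y_{-2} = 1/lam.
    _, lam, p = map_stats(D0, D1)
    if k >= 0:
        return p @ np.linalg.matrix_power(D0, k) @ D1 @ np.ones(D0.shape[0])
    return -1.0 if k == -1 else (1.0 / lam if k == -2 else 0.0)

def z(D0, D1, k, l):
    # z_{k,l} = p D0^k D1 D0^l D1 e, with z_{-1,l} = -y_l and z_{k,-1} = -y_k.
    if k == -1:
        return -y(D0, D1, l)
    if l == -1:
        return -y(D0, D1, k)
    _, _, p = map_stats(D0, D1)
    P = np.linalg.matrix_power
    return p @ P(D0, k) @ D1 @ P(D0, l) @ D1 @ np.ones(D0.shape[0])

D0A = np.array([[-3.0, 1.0], [0.5, -2.0]]); D1A = np.array([[1.5, 0.5], [0.5, 1.0]])
D0B = np.array([[-4.0, 0.0], [1.0, -2.0]]); D1B = np.array([[2.0, 2.0], [0.5, 0.5]])
IA = IB = np.eye(2)
D0S = np.kron(D0A, IB) + np.kron(IA, D0B)
D1S = np.kron(D1A, IB) + np.kron(IA, D1B)
lamA = map_stats(D0A, D1A)[1]; lamB = map_stats(D0B, D1B)[1]; lamS = map_stats(D0S, D1S)[1]

def y_joint_direct(k, l):
    _, _, p = map_stats(D0S, D1S)
    P = np.linalg.matrix_power
    return p @ P(D0S, k) @ D1S @ P(D0S, l) @ D1S @ np.ones(4)

def y_joint_formula(k, l):
    # Final double sum of Lemma 4.
    tot = 0.0
    for i in range(k + 2):
        for j in range(l + 2):
            tot += comb(k + 1, i) * comb(l + 1, j) * (
                z(D0A, D1A, k - i, l - j) * y(D0B, D1B, i + j - 2)
                + y(D0A, D1A, k - i + l - j) * z(D0B, D1B, i - 1, j - 1))
    return lamA * lamB / lamS * tot

ok = all(np.isclose(y_joint_direct(k, l), y_joint_formula(k, l))
         for k in range(2) for l in range(2))
```

As in Lemma 3, the left side requires the Kronecker-sum matrices while the right side uses only component-level quantities.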


Cite this article

Kim, S. The superposition of Markovian arrival processes: moments and the minimal Laplace transform. Ann Oper Res 335, 237–259 (2024). https://doi.org/10.1007/s10479-024-05851-7
