Abstract
The master equation describes exactly the dynamics of a Markov Population Process (MPP) by associating one differential equation for each discrete state of the process. It is well known that MPPs are prone to suffer from the so-called curse of dimensionality, making the master equation intractable in most cases. We propose a novel approach, called h-scaling, that covers the state space of an MPP with a smaller number of states by an appropriate re-scaling of the MPP transition rate functions. When the original state space is bounded, this procedure may significantly reduce the number of the states while returning an approximate master equation that still retains good accuracy. We present h-scaling together with some theoretical results on asymptotic correctness and numerical examples taken from the performance evaluation literature. Moreover, we show that h-scaling can be combined with a recently proposed framework called dynamic boundary projection, which couples subsets of the master equation with mean-field approximations, to further reduce the number of equations without penalizing accuracy.
References
Anselmi, J., Verloop, I.M.: Energy-aware capacity scaling in virtualized environments with performance guarantees. Perform. Eval. 68(11), 1207–1221 (2011)
Baskett, F., Chandy, K.M., Muntz, R.R., Palacios, F.G.: Open, closed, and mixed networks of queues with different classes of customers. J. ACM 22(2), 248–260 (1975)
Benaim, M., Le Boudec, J.Y.: A class of mean field interaction models for computer and communication systems. Perform. Eval. 65(11–12), 823–838 (2008)
Bortolussi, L., Hillston, J., Latella, D., Massink, M.: Continuous approximation of collective system behaviour: a tutorial. Perform. Eval. 70(5), 317–349 (2013)
Buchholz, P.: Exact and ordinary lumpability in finite Markov chains. J. Appl. Probab. 31(1), 59–75 (1994)
Cao, Y., Li, H., Petzold, L.: Efficient formulation of the stochastic simulation algorithm for chemically reacting systems. J. Chem. Phys. 121(9), 4059–4067 (2004)
Ciocchetta, F., Degasperi, A., Hillston, J., Calder, M.: Some investigations concerning the CTMC and the ODE model derived from Bio-PEPA. Electr. Notes Theoret. Comput. Sci. 229(1), 145–163 (2009)
Darling, R.: Fluid limits of pure jump Markov processes: a practical guide. arXiv preprint math/0210109 (2002)
Darling, R., Norris, J.R.: Differential equation approximations for Markov chains. Probab. Surv. 5, 37–79 (2008)
Gast, N., Bortolussi, L., Tribastone, M.: Size expansions of mean field approximation: transient and steady-state analysis. Perform. Eval. 129, 60–80 (2019)
Gast, N., Van Houdt, B.: A refined mean field approximation. Proc. ACM Measur. Anal. Comput. Syst. 1, 1–28 (2017)
Kurtz, T.G.: Solutions of ordinary differential equations as limits of pure jump Markov processes. J. Appl. Prob. 7(1), 49–58 (1970)
Liu, Y., Li, W., Masuyama, H.: Error bounds for augmented truncation approximations of continuous-time Markov chains. Oper. Res. Lett. 46(4), 409–413 (2018)
Minnebo, W., Van Houdt, B.: A fair comparison of pull and push strategies in large distributed networks. IEEE/ACM Trans. Netw. 22(3), 996–1006 (2013)
Munsky, B., Khammash, M.: The finite state projection algorithm for the solution of the chemical master equation. J. Chem. Phys. 124(4) (2006)
Parekh, A.K., Gallager, R.G.: A generalized processor sharing approach to flow control in integrated services networks: the single-node case. IEEE/ACM Trans. Netw. 1(3), 344–357 (1993)
Parekh, A.K., Gallager, R.G.: A generalized processor sharing approach to flow control in integrated services networks: the multiple node case. IEEE/ACM Trans. Netw. 2(2), 137–150 (1994)
Randone, F., Bortolussi, L., Tribastone, M.: Refining mean-field approximations by dynamic state truncation. Proc. ACM Measur. Anal. Comput. Syst. 5(2), 1–30 (2021)
Van Kampen, N.G.: Stochastic Processes in Physics and Chemistry, vol. 1. Elsevier, New York (1992)
Xie, Q., Dong, X., Lu, Y., Srikant, R.: Power of d choices for large-scale bin packing: A loss model. ACM SIGMETRICS Perform. Eval. Rev. 43(1), 321–334 (2015)
Yang, X., De Veciana, G.: Service capacity of peer to peer networks. In: IEEE INFOCOM 2004, vol. 4, pp. 2242–2252. IEEE (2004)
Zhu, L., Casale, G., Perez, I.: Fluid approximation of closed queueing networks with discriminatory processor sharing. Perform. Eval. 139 (2020)
Appendix
8.1 Derivation of Scaled DBP
Having defined the truncations for \(\mathcal {S}^h\) as in Sect. 3.2, we proceed as in the derivation for DBP.
The border sets for the scaled truncations are defined as:
We can then define the boundary projection of \(X^h\) on \(\mathcal {T}^h(n,y)\), in which every jump from \(x \in \partial \mathcal {T}_l(n,y)\) to \(x'\) is redirected with the same rate to \(x^*\) defined as:
After performing the augmentation we get the jump vectors \(l^{(n,h)}(x)\) defined exactly as before. Then, letting \(X^{(n,h)}_y\) be the boundary projection of \(X^h\) on \(\mathcal {T}^h(n,y)\), its transition matrix \(Q^{(n,h)}(y)\) can be written for \(x, x' \in \mathcal {T}^h(n,0)\) as:
The ME for \(X^{(n,h)}_y\) then reads:
where \(P^{(n,h)}_y({}\cdot {};t)\) is an \(\mathcal {N}^h(n)\)-dimensional vector.
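The display referenced above is the standard Kolmogorov forward (master) equation for the projected process; in the row-vector convention suggested by the text it would take the form below (a sketch reconstructed from the symbols defined here, not necessarily the paper's exact display):

```latex
\frac{d}{dt}\, P^{(n,h)}_y(\,\cdot\,;t) \;=\; P^{(n,h)}_y(\,\cdot\,;t)\, Q^{(n,h)}(y) .
```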
Again, to pass to DBP, we need to define the functions:
Observe that the second case in the definition of \(\varPi ^{(n,h)}(x,y)\) is motivated by the fact that x may not be of the form \(y + h(k_1e_1 + \ldots + k_me_m)\); to mirror what happens in classic DBP, we want the function to return the closest \(y'\) of this form such that \(\mathcal {T}^h(n,y')\) contains x.
Then the equations for scaled DBP with parameter n are given by:
Again, supposing \(X(0)=x_0\) with probability 1, to define the initial condition we set:
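DBP proper couples these projected truncations with mean-field estimates; as a stripped-down illustration of just the redirection/augmentation step described in this subsection, the sketch below truncates a simple M/M/1 queue (made-up rates \(\lambda = 0.5\), \(\mu = 1\), window size 25, all chosen for the sketch) and redirects the jump that would leave the window back to the border state, then checks that the stationary mean is barely affected:

```python
import numpy as np

lam, mu, n = 0.5, 1.0, 25

# Generator on the truncated window {0, ..., n}: the upward jump out of the
# border state x = n is redirected back to the border itself (a self-loop,
# i.e. effectively dropped), mimicking the augmentation step above.
Q = np.zeros((n + 1, n + 1))
for x in range(n + 1):
    if x < n:
        Q[x, x + 1] = lam        # interior jumps kept as-is
    if x > 0:
        Q[x, x - 1] = mu
    Q[x, x] = -Q[x].sum()        # diagonal makes each row sum to zero

# Stationary distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n + 1)])
b = np.zeros(n + 2)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

mean_trunc = float(np.dot(np.arange(n + 1), pi))
print(round(mean_trunc, 4))  # close to rho/(1-rho) = 1 for rho = 0.5
```

With \(\rho = 0.5\) the mass beyond the window is of order \(2^{-25}\), so the augmented truncation reproduces the untruncated mean \(\rho/(1-\rho) = 1\) almost exactly; the interesting regime for DBP is of course when the discarded mass is not negligible and the redirection target matters.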
8.2 Equations for Example 3
Equations for scaled DBP read:
8.3 Proof of Theorem 4
Theorem 5
Suppose that for the sequence \(\left( X^N \right) _{N \ge N_0}\) the hypotheses of Theorem 2 are verified and, in addition:

- equation (3) admits a globally asymptotically stable equilibrium \(x^*\);
- for each \(N\), \(X^N(0)=N\hat{x}_0\);
- for each \(N\), \(\gamma _N = N\).
Then, letting \(\mu (t)\) and \(\varSigma (t)\) denote the mean and the covariance matrix of the limiting Gaussian process for the original sequence, the sequence of approximating processes \(\left( X^{N,h}\right) _{N \ge N_0}\) admits a limiting Gaussian process with mean \(\mu ^h(t)\) and covariance matrix \(\varSigma ^h(t)\) such that:
Proof
Theorem 2 guarantees that, under these hypotheses, \(\mu (t)\) and \(\varSigma (t)\) exist.
The rest of the proof is obtained by following the same derivation used in [19] with the ansatz:
and verifying that \(\xi ^h(t)\) is a Gaussian process whose mean \(\mu ^h(t)\) and covariance \(\varSigma ^h(t)\) satisfy exactly the same ODEs as \(\mu (t)\) and \(\varSigma (t)\), i.e., (5) and (6).
Furthermore, in the sequence of approximating processes \(\left( X^{N,h} \right) _{N \ge N_0}\) we have redefined the initial conditions as \(X^{N,h}(0) = h \lfloor \frac{N\hat{x}_0}{h}\rfloor \), while the initial condition for the deterministic process remains unchanged. Therefore, when setting the initial condition for \(\mu ^h\) we need to take into account that, for the ansatz to be valid at time \(t=0\), the limiting Gaussian process possibly has non-zero mean, namely \(\mu ^h(0) = \sqrt{\frac{h}{N}}\left( \lfloor \frac{N\hat{x}_0}{h}\rfloor -N\hat{x}_0\right) .\)
So, in general, \(\xi ^h(t)\), which describes the fluctuations of \(X^{N,h}\), differs from \(\xi (t)\), which describes the fluctuations of \(X^N\), since \(\mu ^h(0) \ne \mu (0) = 0\) (observe that the covariance matrix is instead unchanged, i.e., \(\varSigma ^h(t) = \varSigma (t)\) for all \(t \ge 0\)).
However, Eq. (5) is exactly the variational equation associated with the ODEs defining the deterministic limit (3), so, regardless of its initial condition, its solution must tend to 0 as \(\hat{x}(t)\) tends to the equilibrium \(x^*\). This implies \(\lim _{t \rightarrow \infty } \mu ^h(t) = \lim _{t \rightarrow \infty } \mu (t) = 0\).
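For concreteness, in the standard linear-noise setting the mean equation (5) referenced here is the variational (linearized) equation along the fluid trajectory; writing \(F\) for the drift in (3) and \(J_F\) for its Jacobian, a sketch of its form, with the initial condition derived above, is:

```latex
\frac{d\mu^h}{dt}(t) \;=\; J_F\big(\hat{x}(t)\big)\,\mu^h(t),
\qquad
\mu^h(0) \;=\; \sqrt{\tfrac{h}{N}}\,\Big(\big\lfloor \tfrac{N\hat{x}_0}{h} \big\rfloor - N\hat{x}_0\Big).
```

Since this equation is linear in \(\mu^h\), the argument above applies to any initial condition, which is exactly why the non-zero \(\mu^h(0)\) does not affect the long-run limit.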
Observe that all the introduced hypotheses are needed for the correct application of the ansatz: the differentiability of the drifts is needed to apply the Taylor expansion as in [19], while the presence of a globally asymptotically stable equilibrium ensures that the ansatz remains valid for \(t \in [0, +\infty ).\)
8.4 Additional Data on Malware Propagation Model
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this paper
Randone, F., Bortolussi, L., Tribastone, M. (2022). Jump Longer to Jump Less: Improving Dynamic Boundary Projection with h-Scaling. In: Ábrahám, E., Paolieri, M. (eds) Quantitative Evaluation of Systems. QEST 2022. Lecture Notes in Computer Science, vol 13479. Springer, Cham. https://doi.org/10.1007/978-3-031-16336-4_8
Print ISBN: 978-3-031-16335-7
Online ISBN: 978-3-031-16336-4