
A Multi-Component General Discrete System Subject to Different Types of Failures with Loss of Units


Abstract

Discrete systems are used in several fields, such as reliability, computing, and digital electronics. Moreover, some systems cannot be monitored continuously and can be observed only at certain times, via inspections for example. In this paper a repairable multi-component system subject to internal and accidental external failures, with loss of units, is developed. The system is composed of a finite number of units: the main one and the others disposed in cold standby. If a repairable failure occurs, the main unit enters the repair channel; if the failure is non-repairable, the unit is removed. One repairman is considered. The lifetime of the main unit has a general distribution, and its phase-type representation is considered. Accidental failures occur according to a general discrete renewal process. The model is developed in detail, and the up period is worked out both until no units remain and until total failure of the system. Some reliability measures of interest, such as the conditional probabilities of the different types of failures, are calculated. The operation of the system is analysed by means of rewards introduced in the model. We have built algorithms for calculating the measures defined in this paper, and we have introduced the RG-factorization method to work out these measures by means of low-order matrices. The results have been implemented computationally in Matlab. An example illustrates the model, and the number of units is optimised according to the average net reward.

References

  • Alfa AS (2004) Markov chain representations of discrete distributions applied to queuing models. Comput Oper Res 31:2365–2385

  • Alfa AS, Castro IT (2002) Discrete time analysis of a repairable machine. J Appl Probab 39(3):503–516

  • Alfa AS, Neuts MF (1995) Modelling vehicular traffic using the discrete time Markovian arrival process. Transp Sci 29(2):109–117

  • Kulkarni VG (1999) Modeling, analysis, design and control of stochastic systems. Springer, New York

  • Li QL, Cao J (2004) Two types of RG-factorizations of quasi-birth-and-death processes and their applications to stochastic integral functionals. Stoch Models 20(3):299–340

  • Neuts MF (1975) Probability distributions of phase type. In: Liber Amicorum Professor Emeritus H. Florin. Department of Mathematics, University of Louvain, Belgium, pp 183–206

  • Neuts MF (1981) Matrix geometric solutions in stochastic models. An algorithmic approach. Johns Hopkins University Press, Baltimore

  • Neuts MF, Meier KS (1981) On the use of phase type distributions in reliability modelling of systems with two components. Operat Res Spectrum 2:227–234

  • Pérez-Ocón R, Ruiz-Castro JE (2004) Two models for a repairable two-system with phase-type sojourn time distributions. Reliab Eng Syst Saf 84:253–260

  • Pérez-Ocón R, Montoro-Cazorla D, Ruiz-Castro JE (2006) Transient analysis of a multi-component system modelled by a general Markov process. Asia-Pac J Oper Res 23(3):311–327

  • Ruiz-Castro JE, Pérez-Ocón R, Fernández-Villodre G (2008a) Modelling a reliability system governed by discrete phase-type distributions. Reliab Eng Syst Saf 93(11):1650–1657. doi:10.1016/j.ress.2008.01.005

  • Ruiz-Castro JE, Pérez-Ocón R, Fernández-Villodre G (2008b) A level-dependent general discrete system involving phase-type distributions. IIE Trans (in press)


Acknowledgements

The authors are very grateful to the three referees and the Associate Editor, whose comments have greatly improved the paper.

Author information

Correspondence to Juan Eloy Ruiz-Castro.

Appendices

Appendix A

The inverse of the matrix \({\bf I}-{\bf P}^\ast \), described in Section 5, where \({\bf P}^\ast \) is given in Eq. 2, is worked out in this appendix in an algorithmic form. We use the LU-type RG-factorization given in Li and Cao (2004). Given the structure of this matrix, the method reduces to an LU-type G-factorization. For this case,

$$ \left( {\bf I}-{\bf P}^\ast \right)^{-1}=\left( r_{m,n} \right)_{m,n=0,\ldots,K-1} =\left\{ \begin{array}{lll} {\bf U}_m^{-1} & ; & 0\le m\le K-1,\; n=m \\[5pt] \left( \prod\limits_{i=m}^{n-1} {\bf G}_i \right) {\bf U}_n^{-1} & ; & 0\le m\le K-2,\; m+1\le n\le K-1 \\[5pt] {\bf 0} & ; & \mbox{otherwise} \end{array} \right. $$
(32)

where

$$ \begin{array}{l} {\bf U}_k ={\bf I-R}_p^{K-k} ;\quad 0\le k\le K-1 \\ {\bf G}_k ={\bf U}_k^{-1} {\bf R}_s^{K-k} =\left[ {{\bf I-R}_p^{K-k} } \right]^{-1}{\bf R}_s^{K-k} ;\quad 0\le k\le K-2. \\ \end{array} $$
(33)

The calculation of the inverse matrix is thus reduced to matrix algebraic operations. The main difficulty lies in inverting the matrices U. To this end, we apply the LU-type RG-factorization to each matrix \({\bf U}_k \).
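As an illustration, once the inverses \({\bf U}_k^{-1}\) and the matrices \({\bf G}_k\) have been computed as described in the remainder of this appendix, the assembly of Eq. 32 is a simple nested loop. The following Matlab/Octave sketch uses assumed variable names (the cell arrays Uinv and G, with Uinv{k+1} = \({\bf U}_k^{-1}\) and G{k+1} = \({\bf G}_k\); indices are shifted by one because Matlab indexing starts at 1):

    % Assembly of the blocks r_{m,n} of (I - P*)^(-1) following Eq. 32.
    % Uinv{m+1} = U_m^(-1) and G{m+1} = G_m are assumed to be available.
    Rblk = cell(K, K);                      % blocks r_{m,n}, m,n = 0,...,K-1
    for m = 0:K-1
        Rblk{m+1, m+1} = Uinv{m+1};         % diagonal blocks U_m^(-1)
        Prod = eye(size(Uinv{m+1}, 1));     % running product G_m ... G_{n-1}
        for n = m+1:K-1
            Prod = Prod * G{n};             % appends G_{n-1} to the product
            Rblk{m+1, n+1} = Prod * Uinv{n+1};
        end
    end
    % Blocks with m > n are zero (Eq. 32) and are left empty here.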

Matrix \({\bf U}_k^{-1}\)

We consider the matrix \({\bf U}_k ={\bf I-R}_p^{K-k} \); 0 ≤ k ≤ K − 1. This matrix has the following structure. For k = 0, ..., K − 3:

$$ {\bf U}_k ={\bf I-R}_p^{K-k} =\left( {{\begin{array}{*{20}c} {{\bf I-B}_{00} } \hfill & {-{\bf B}_{01} } \hfill & \hfill & \hfill & \hfill & \hfill & \hfill \\ {-{\bf B}_{10} } \hfill & {{\bf I-A}_1 } \hfill & {-{\bf A}_0 } \hfill & \hfill & \hfill & \hfill & \hfill \\ \hfill & {-{\bf A}_2 } \hfill & {{\bf I-A}_1 } \hfill & {-{\bf A}_0 } \hfill & \hfill & \hfill & \hfill \\ \hfill & \hfill & \ddots \hfill & \ddots \hfill & \ddots \hfill & \hfill & \hfill \\ \hfill & \hfill & \hfill & {-{\bf A}_2 } \hfill & {{\bf I-A}_1 } \hfill & {-{\bf A}_0 } \hfill & \hfill \\ \hfill & \hfill & \hfill & \hfill & {-{\bf A}_2 } \hfill & {{\bf I-A}_1 } \hfill & {-{{\bf B}}''} \hfill \\ \hfill & \hfill & \hfill & \hfill & \hfill & {-{{\bf B}}'} \hfill & {{\bf I-B}} \hfill \\ \end{array} }} \right)_{K-k+1\times K-k+1} , $$

for k = K  − 2:

$$ {\bf U}_{K-2} ={\bf I}-{\bf R}_p^2 =\left( \begin{array}{ccc} {\bf I}-{\bf B}_{00} & -{\bf B}_{01} & {\bf 0} \\ -{\bf B}_{10} & {\bf I}-{\bf A}_1 & -{{\bf B}}'' \\ {\bf 0} & -{{\bf B}}' & {\bf I}-{\bf B} \\ \end{array} \right)_{3\times 3} , $$

and for k = K  − 1:

$$ {\bf U}_{K-1} ={\bf I-R}_p^1 =\left( {{\begin{array}{*{20}c} {{\bf I-B}_{00} } \hfill & {-{\bf B}_{01}^1 } \hfill \\[3pt] {-{\bf B}_{10}^1 } \hfill & {{\bf I-B}} \hfill \\ \end{array} }} \right)_{2\times 2} . $$

Again, this matrix is a QBD one, and it can be decomposed following Theorem 1 given in Li and Cao (2004) in the following way:

$$ \begin{array}{rll} {\bf U}_{k} &=& ({\bf I} - {\bf R}_{L})\, {\bf V}_{D}\, ({\bf I} - {\bf J}_{U}), \quad \mbox{where} \\ {\bf V}_{D} &=& \mathrm{diag}({\bf V}_{0}, {\bf V}_{1},\ldots, {\bf V}_{K-k-1},{\bf V}_{K-k}),\\ {\bf R}_L &=&\left( \begin{array}{ccccc} 0 & & & & \\ {\bf R}_1 & 0 & & & \\ & \ddots & \ddots & & \\ & & {\bf R}_{K-k-1} & 0 & \\ & & & {\bf R}_{K-k} & 0 \\ \end{array} \right),\quad {\bf J}_U =\left( \begin{array}{ccccc} 0 & {\bf J}_0 & & & \\ & 0 & {\bf J}_1 & & \\ & & \ddots & \ddots & \\ & & & 0 & {\bf J}_{K-k-1} \\ & & & & 0 \\ \end{array} \right). \end{array} $$

The matrices V, R and J are worked out as follows:

$$ \begin{array}{rll} {\bf V}_0 &=&{\bf I}-{\bf B}_{00}; \quad {\bf V}_1 ={\bf I}-{\bf A}_1 -{\bf B}_{10} {\bf V}_0^{-1} {\bf B}_{01} \quad \mbox{for}\; k<K-1, \\[4pt] {\bf V}_i &=&{\bf I}-{\bf A}_1 -{\bf A}_2 {\bf V}_{i-1}^{-1} {\bf A}_0; \quad 2\le i\le K-k-1 \;\mbox{and}\; k\le K-3, \\[4pt] {\bf V}_{K-k} &=&\left\{ \begin{array}{lll} {\bf I}-{\bf B}-{{\bf B}}'{\bf V}_{K-k-1}^{-1} {{\bf B}}'' & ; & k<K-1, \\ {\bf I}-{\bf B}-{\bf B}_{10}^1 {\bf V}_0^{-1} {\bf B}_{01}^1 & ; & k=K-1, \end{array} \right. \\[4pt] {\bf R}_1 &=&\left\{ \begin{array}{lll} {\bf B}_{10} {\bf V}_0^{-1} & ; & k<K-1, \\ {\bf B}_{10}^1 {\bf V}_0^{-1} & ; & k=K-1, \end{array} \right. \\[4pt] {\bf R}_2 &=&\left\{ \begin{array}{lll} {\bf A}_2 \left[ {\bf I}-{\bf A}_1 -{\bf R}_1 {\bf B}_{01} \right]^{-1} & ; & k<K-2, \\ {{\bf B}}'\left[ {\bf I}-{\bf A}_1 -{\bf R}_1 {\bf B}_{01} \right]^{-1} & ; & k=K-2, \end{array} \right. \\[4pt] {\bf R}_i &=&{\bf A}_2 \left[ {\bf I}-{\bf A}_1 -{\bf R}_{i-1} {\bf A}_0 \right]^{-1}; \quad 3\le i\le K-k-1 \;\mbox{and}\; k\le K-4, \\[4pt] {\bf R}_{K-k} &=&{{\bf B}}'\left[ {\bf I}-{\bf A}_1 -{\bf R}_{K-k-1} {\bf A}_0 \right]^{-1}; \quad k<K-2, \\[4pt] {\bf J}_0 &=&\left\{ \begin{array}{lll} {\bf V}_0^{-1} {\bf B}_{01} & ; & k<K-1, \\ {\bf V}_0^{-1} {\bf B}_{01}^1 & ; & k=K-1, \end{array} \right. \\[4pt] {\bf J}_1 &=&\left\{ \begin{array}{lll} \left[ {\bf I}-{\bf A}_1 -{\bf B}_{10} {\bf J}_0 \right]^{-1}{\bf A}_0 & ; & k<K-2, \\ \left[ {\bf I}-{\bf A}_1 -{\bf B}_{10} {\bf J}_0 \right]^{-1}{{\bf B}}'' & ; & k=K-2, \end{array} \right. \\[4pt] {\bf J}_i &=&\left[ {\bf I}-{\bf A}_1 -{\bf A}_2 {\bf J}_{i-1} \right]^{-1}{\bf A}_0; \quad 2\le i\le K-k-2 \;\mbox{and}\; k\le K-4, \\[4pt] {\bf J}_{K-k-1} &=&\left[ {\bf I}-{\bf A}_1 -{\bf A}_2 {\bf J}_{K-k-2} \right]^{-1}{{\bf B}}''; \quad k<K-2. \end{array} $$
(34)
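The recursions of Eq. 34 translate directly into code. The following Matlab/Octave sketch covers the interior case k ≤ K − 3 (the boundary cases k = K − 2 and k = K − 1 only swap in the \({\bf B}'\), \({\bf B}''\) and \({\bf B}_{01}^1\), \({\bf B}_{10}^1\) variants shown above); the names B00, B01, B10, A0, A1, A2, B, Bp (= \({\bf B}'\)) and Bpp (= \({\bf B}''\)) are assumed inputs with the dimensions of Section 3.3:

    N = K - k;                              % U_k has block levels 0,...,N
    Vc = cell(N+1,1); Rc = cell(N,1); Jc = cell(N,1);
    Vc{1} = eye(size(B00)) - B00;                         % V_0
    Vc{2} = eye(size(A1)) - A1 - B10 / Vc{1} * B01;       % V_1
    for i = 2:N-1
        Vc{i+1} = eye(size(A1)) - A1 - A2 / Vc{i} * A0;   % V_i
    end
    Vc{N+1} = eye(size(B)) - B - Bp / Vc{N} * Bpp;        % V_{K-k}
    Rc{1} = B10 / Vc{1};                                  % R_1
    Rc{2} = A2 / (eye(size(A1)) - A1 - Rc{1} * B01);      % R_2
    for i = 3:N-1
        Rc{i} = A2 / (eye(size(A1)) - A1 - Rc{i-1} * A0); % R_i
    end
    Rc{N} = Bp / (eye(size(A1)) - A1 - Rc{N-1} * A0);     % R_{K-k}
    Jc{1} = Vc{1} \ B01;                                  % J_0
    Jc{2} = (eye(size(A1)) - A1 - B10 * Jc{1}) \ A0;      % J_1
    for i = 2:N-2
        Jc{i+1} = (eye(size(A1)) - A1 - A2 * Jc{i}) \ A0; % J_i
    end
    Jc{N} = (eye(size(A1)) - A1 - A2 * Jc{N-1}) \ Bpp;    % J_{K-k-1}

In Matlab, X / M and M \ X compute X·M⁻¹ and M⁻¹·X without forming the inverse explicitly.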

From this decomposition, the inverse is calculated:

$$ {\bf U}_k^{-1} =\left( {{\bf I-J}_U } \right)^{-1}{\bf V}_D^{-1} \left( {{\bf I-R}_L } \right)^{-1}, $$

where:

$$ {\bf V}_D^{-1} =\mbox{diag}\left( {{\bf V}_0^{-1} ,{\bf V}_1^{-1} ,...,{\bf V}_{K-k-1}^{-1} ,\,{\bf V}_{K-k}^{-1} } \right), $$
$$ \left( {\bf I}-{\bf R}_L \right)^{-1}=\left( \begin{array}{cccccc} {\bf I} & & & & & \\ {\bf X}_1^{\left( 1 \right)} & {\bf I} & & & & \\ {\bf X}_2^{\left( 2 \right)} & {\bf X}_1^{\left( 2 \right)} & {\bf I} & & & \\ {\bf X}_3^{\left( 3 \right)} & {\bf X}_2^{\left( 3 \right)} & {\bf X}_1^{\left( 3 \right)} & {\bf I} & & \\ \vdots & \vdots & \vdots & \vdots & \ddots & \\ {\bf X}_{K-k}^{\left( {K-k} \right)} & {\bf X}_{K-k-1}^{\left( {K-k} \right)} & {\bf X}_{K-k-2}^{\left( {K-k} \right)} & \cdots & {\bf X}_1^{\left( {K-k} \right)} & {\bf I} \\ \end{array} \right), $$

$$ \left( {\bf I}-{\bf J}_U \right)^{-1}=\left( \begin{array}{cccccc} {\bf I} & {\bf Y}_1^{\left( 0 \right)} & {\bf Y}_2^{\left( 0 \right)} & {\bf Y}_3^{\left( 0 \right)} & \cdots & {\bf Y}_{K-k}^{\left( 0 \right)} \\ & {\bf I} & {\bf Y}_1^{\left( 1 \right)} & {\bf Y}_2^{\left( 1 \right)} & \cdots & {\bf Y}_{K-k-1}^{\left( 1 \right)} \\ & & {\bf I} & {\bf Y}_1^{\left( 2 \right)} & \cdots & {\bf Y}_{K-k-2}^{\left( 2 \right)} \\ & & & {\bf I} & \cdots & \vdots \\ & & & & \ddots & {\bf Y}_1^{\left( {K-k-1} \right)} \\ & & & & & {\bf I} \\ \end{array} \right), $$

where:

$$ {\bf X}_h^{\left( l \right)} =\prod\limits_{i=l-h+1}^l {{\bf R}_{2l-i-h+1} } \;\mbox{and}\;{\bf Y}_h^{\left( l \right)} =\prod\limits_{i=l}^{l+h-1} {{\bf J}_i .} $$
(35)
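In code, the products of Eq. 35 unroll into a descending and an ascending loop. A sketch, with Rc and Jc the cell arrays of the previous sketch (Rc{i} = \({\bf R}_i\), Jc{i+1} = \({\bf J}_i\)):

    function Xh = Xprod(Rc, h, l)
        % X_h^(l) = R_l * R_{l-1} * ... * R_{l-h+1}   (Eq. 35)
        Xh = Rc{l};
        for i = l-1:-1:l-h+1
            Xh = Xh * Rc{i};
        end
    end

    function Yh = Yprod(Jc, h, l)
        % Y_h^(l) = J_l * J_{l+1} * ... * J_{l+h-1}   (Eq. 35)
        Yh = Jc{l+1};
        for i = l+1:l+h-1
            Yh = Yh * Jc{i+1};
        end
    end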

Given this decomposition, and applying Theorem 2 in Li and Cao (2004) again, we obtain:

$$ {\bf U}_k^{-1} =\left( r_{m,n}^k \right)_{m,n=0,\ldots,K-k} =\left\{ \begin{array}{lll} {\bf V}_m^{-1} {\bf X}_{m-n}^{\left( m \right)} +\sum\limits_{i=1}^{K-k-m} {\bf Y}_i^{\left( m \right)} {\bf V}_{i+m}^{-1} {\bf X}_{i+m-n}^{\left( {i+m} \right)} & ; & 0\le m\le K-k-1,\; 0\le n\le m-1 \\[8pt] {\bf V}_m^{-1} +\sum\limits_{i=1}^{K-k-m} {\bf Y}_i^{\left( m \right)} {\bf V}_{i+m}^{-1} {\bf X}_i^{\left( {i+m} \right)} & ; & 0\le m\le K-k-1,\; n=m \\[8pt] {\bf Y}_{n-m}^{\left( m \right)} {\bf V}_n^{-1} +\sum\limits_{i=n-m+1}^{K-k-m} {\bf Y}_i^{\left( m \right)} {\bf V}_{i+m}^{-1} {\bf X}_{i-\left( {n-m} \right)}^{\left( {i+m} \right)} & ; & 0\le m\le K-k-1,\; m+1\le n\le K-k \\[8pt] {\bf V}_{K-k}^{-1} {\bf X}_{K-k-n}^{\left( {K-k} \right)} & ; & m=K-k,\; 0\le n\le K-k-1 \\[8pt] {\bf V}_{K-k}^{-1} & ; & m=n=K-k. \end{array} \right. $$
(36)

Once \({\bf U}_k^{-1} \) is worked out, the matrices \({\bf G}_k \) in Eq. 33 are obtained in the following way.

For 0 ≤ k ≤ K − 3:

$$ \left( \left( {\bf G}_k \right)_{m,n} \right)_{\begin{subarray}{l} m=0,1,\ldots,K-k \\ n=0,1,\ldots,K-k-1 \end{subarray}} =\left\{ \begin{array}{lll} r_{m0}^k {\bf D}_{00} +r_{m1}^k {\bf D}_{10} & ; & 0\le m\le K-k,\; n=0 \\[4pt] r_{mn}^k {\bf C}_1 +r_{m,n+1}^k {\bf C}_2 & ; & 0\le m\le K-k,\; 1\le n\le K-k-3 \\[4pt] r_{m,K-k-2}^k {\bf C}_1 +r_{m,K-k-1}^k {{\bf D}}' & ; & 0\le m\le K-k,\; n=K-k-2 \\[4pt] r_{m,K-k-1}^k {\bf D} & ; & 0\le m\le K-k,\; n=K-k-1, \end{array} \right. $$
(37)

and for k = K − 2:

$$ \left( \left( {\bf G}_{K-2} \right)_{m,n} \right)_{\begin{subarray}{l} m=0,1,2 \\ n=0,1 \end{subarray}} =\left\{ \begin{array}{lll} r_{m0}^{K-2} {\bf D}_{00} +r_{m1}^{K-2} {\bf D}_{10} & ; & 0\le m\le 2,\; n=0 \\[4pt] r_{m1}^{K-2} {\bf D} & ; & 0\le m\le 2,\; n=1. \end{array} \right. $$

Algorithm. Computation of the inverse matrix \(({\bf I}-{\bf P}^\ast)^{-1}\)

INPUT: Matrices V, R and J described in Eq. 34 and matrices \({\bf D}_{00}\), \({\bf D}_{10}\), \({\bf C}_1\), \({\bf C}_2\), \({\bf D}'\) and \({\bf D}\) given in Section 3.3.

OUTPUT: The matrix \(({\bf I}-{\bf P}^\ast)^{-1}\) by matrix blocks.

COMPUTATION

Step 1.    Compute the matrix sequences {V h ; h = 0, ..., K  −  k}, {R h ; h = 1, ..., K  −  k} and {J h ; h = 0, ..., K − k − 1} for k = 0, ..., K − 1 given in Eq. 34.

Step 2.    Compute the matrix sequences {\({\bf X}_h^{\left( l \right)} \); h, l = 1, ..., K  −  k} and {\({\bf Y}_h^{\left( l \right)} \); h = 1, ..., K − k − l; l = 0, ..., K − k − 1} for k = 0, ..., K − 2 given in Eq. 35.

Step 3.    Compute \({\bf U}_k^{-1} \) for k = 0, ..., K − 1 from Eq. 36.

Step 4.    Compute matrix G k defined in Eq. 33 through Eq. 37 for k = 0,..., K − 2.

Step 5.    Compute matrix \(({\bf I}-{\bf P}^\ast)^{-1}\) from Eq. 32.

A similar analysis can be performed with the matrix P defined in Eq. 6.

Appendix B

B.1 Transition probabilities

In this appendix the transition probabilities are worked out by considering the different macro-states of the system. Given the matrix P in Eq. 1, the matrix P ν, for ν ≥ 2, is calculated recursively in the following way. The blocks of this matrix are denoted by \(\left( {{\bf P}_{jh}^v } \right)_{j,h=0,\ldots ,K} \), where \({\bf P}_{jh}^v \) contains the probabilities, depending on the different phases, that at time ν the system has K − h units given that initially it had K − j units.

Given the structure of the matrix P, we have:

$$ {\bf P}_{jh}^v ={\bf R}_p^{K-j} {\bf P}_{jh}^{v-1} +{\bf R}_s^{K-j} {\bf P}_{j+1,h}^{v-1} , $$
(38)

for j = 0, ..., K and j ≤ h ≤ K, where \({\bf P}_{jh}^1 \) is the block (j, h) of matrix P. Obviously, \({\bf P}_{jh}^v ={\bf 0}\) when j > h, and \({\bf P}_{K,K}^v =1\) for ν ≥ 0.
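A direct computational transcription of Eq. 38, sketched in Matlab/Octave under assumed names (Pv{j+1,h+1} stores \({\bf P}_{jh}^v \); Rp{j+1} = \({\bf R}_p^{K-j}\) and Rs{j+1} = \({\bf R}_s^{K-j}\); P1 holds the one-step blocks of P, with the zero blocks stored explicitly so that all dimensions match):

    Pv = P1;                                       % P^1 = P
    for s = 2:v
        Pnext = Pv;
        for j = 0:K-1
            for h = j:K
                Pnext{j+1,h+1} = Rp{j+1} * Pv{j+1,h+1} ...
                               + Rs{j+1} * Pv{j+2,h+1};  % Eq. 38
            end
        end
        Pv = Pnext;          % the block P^v_{K,K} = 1 is kept by the copy
    end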

From this expression these matrices are obtained recursively from the matrices R. Iterating Eq. 38 once we have:

$$ \begin{array}{rll} {\bf P}_{jh}^v &=&{\bf R}_p^{K-j} {\bf P}_{jh}^{v-1} +{\bf R}_s^{K-j} {\bf P}_{j+1,h}^{v-1} ={\bf R}_p^{K-j} \left[ {{\bf R}_p^{K-j} {\bf P}_{jh}^{v-2} +{\bf R}_s^{K-j} {\bf P}_{j+1,h}^{v-2} } \right] \\ &&+{\bf R}_s^{K-j} \left[ {{\bf R}_p^{K-j-1} {\bf P}_{j+1,h}^{v-2} +{\bf R}_s^{K-j-1} {\bf P}_{j+2,h}^{v-2} } \right] \\ \end{array} $$

and if we denote:

$$ {\bf A}_0^2 ={\bf R}_p^{K-j} {\bf R}_p^{K-j}; \quad {\bf A}_1^2 ={\bf R}_p^{K-j} {\bf R}_s^{K-j} +{\bf R}_s^{K-j} {\bf R}_p^{K-j-1}; \quad {\bf A}_2^2 ={\bf R}_s^{K-j} {\bf R}_s^{K-j-1}, $$

then:

$$ {\bf P}_{jh}^v ={\bf A}_0^2 {\bf P}_{jh}^{v-2} +{\bf A}_1^2 {\bf P}_{j+1,h}^{v-2} +{\bf A}_2^2 {\bf P}_{j+2,h}^{v-2} , $$

for j = 0, ..., K; j ≤ h ≤ K.

Following this reasoning:

$$ {\bf P}_{jh}^v =\sum\limits_{k=0}^{v-1} {{\bf A}_k^{v-1} {\bf P}_{j+k,h} ={\bf A}_{h-j}^{v-1} {\bf P}_{h,h} +{\bf A}_{h-j-1}^{v-1} {\bf P}_{h-1,h} ={\bf A}_{h-j}^{v-1} {\bf R}_p^{K-h} +{\bf A}_{h-j-1}^{v-1} {\bf R}_s^{K-h+1} ,} $$

where:

$$ \begin{array}{rll} {\bf A}_k^s &=&{\bf A}_{k-1}^{s-1} {\bf R}_s^{K-j-k+1} +{\bf A}_k^{s-1} {\bf R}_p^{K-j-k}; \quad k=1,\ldots,s-1, \\[4pt] {\bf A}_0^s &=&\left\{ \begin{array}{lll} {\bf A}_0^{s-1} {\bf R}_p^{K-j} & ; & s\ge 2 \\ {\bf R}_p^{K-j} & ; & s=1 \end{array} \right. \\[4pt] {\bf A}_s^s &=&\left\{ \begin{array}{lll} {\bf A}_{s-1}^{s-1} {\bf R}_s^{K-j-s+1} & ; & s\ge 2 \\ {\bf R}_s^{K-j} & ; & s=1. \end{array} \right. \end{array} $$
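A sketch of these recursions for a fixed initial level j (the A's depend on j through the R blocks); A{k+1} holds \({\bf A}_k^s \) at the current stage s, and Rp, Rs are the cell arrays of the previous sketch, assuming the indices stay within the range of stored blocks:

    A = {Rp{j+1}, Rs{j+1}};                 % s = 1: A_0^1 and A_1^1
    for s = 2:v-1
        Anew = cell(1, s+1);
        Anew{1} = A{1} * Rp{j+1};           % A_0^s
        for kk = 1:s-1                      % A_k^s for k = 1,...,s-1
            Anew{kk+1} = A{kk} * Rs{j+kk} + A{kk+1} * Rp{j+kk+1};
        end
        Anew{s+1} = A{s} * Rs{j+s};         % A_s^s
        A = Anew;
    end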

The measures associated with the system defined in Sections 4 and 6 involve the probability vector p z (v) defined in Section 4. It is worked out in an algorithmic form from the analysis above. We consider that the system is initially new, with initial probability vector (\(\boldsymbol{\rm \alpha} \) ⊗ \(\boldsymbol{\rm \gamma} \), 0).

Given a time ν, the matrix \({\bf P}_{0,K-k}^v ={\bf A}_{K-k}^{v-1} {\bf R}_p^k +{\bf A}_{K-k-1}^{v-1} {\bf R}_s^{k+1} \) contains, by blocks, the probabilities, depending on the different phases of the system, that the system has K units and ν units of time later it has k units. This matrix is composed of (K + 1) × (k + 1) blocks, depending on the number of units in the repair channel. We denote by \({\bf Q}_{r,w}^{v,k} \) the block (r, w) of this matrix, for r = 0, ..., K and w = 0, ..., k. This block contains the probabilities, depending on the phases, that initially the system has K units, r of them in the repair channel, and ν units of time later it has k units, w of them in the repair channel. If the phases are considered, the order of this block is mtn × mtn if r ≥ 1 and w ≥ 1, mtn × mt if r ≥ 1 and w = 0, mt × mtn if r = 0 and w ≥ 1, and mt × mt if r = 0 and w = 0, where m, t and n are defined in Section 3.1. Thus, when the phases are considered, the block \({\bf Q}_{r,w}^{v,k} \) occupies the rows mt + mtn(r − 1) + 1 : mt + mtnr and the columns mt + mtn(w − 1) + 1 : mt + mtnw of the matrix \({\bf P}_{0,K-k}^v \) if r and w are greater than or equal to 1. If r or w is equal to zero, the corresponding rows or columns are 1 : mt respectively.

Therefore, the vector \(p_{E_s^k}(v)\) for k = 1, ..., K and s ≤ k can be calculated in the following way (a computational sketch is given after the list).

  1. Matrix \({\bf P}_{0,K-k}^v ={\bf A}_{K-k}^{v-1} {\bf R}_p^k +{\bf A}_{K-k-1}^{v-1} {\bf R}_s^{k+1} \) is calculated by blocks.

  2. Matrix \({\bf Q}_{0,s}^{v,k} \) is calculated by considering the block (0, s) of the matrix \({\bf P}_{0,K-k}^v \), that is, the elements of the rows 1 : mt and the columns mt + mtn(s − 1) + 1 : mt + mtns.

  3. \(p_{E_s^k } \left( v \right)=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Q}_{0,s}^{v,k} .\)
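A Matlab/Octave sketch of these three steps, assuming P0 holds the matrix \({\bf P}_{0,K-k}^v \) computed by blocks and alpha, gamma are the initial row vectors:

    rows = 1:m*t;                               % r = 0 block rows
    if s == 0
        cols = 1:m*t;                           % w = 0 block columns
    else
        cols = m*t + m*t*n*(s-1) + (1:m*t*n);   % w = s block columns
    end
    Q0s = P0(rows, cols);                       % Q_{0,s}^{v,k}
    p = kron(alpha, gamma) * Q0s;               % p_{E_s^k}(v)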

B.2 Average number of visits

The average number of visits to a certain phase up to time H is denoted by N(H), and it is given in Eq. 3. We are interested in the blocks associated with the macro-state k units in the system, for k = 1, ..., K. From matrix N(H), the block associated with the transition from j units in the system to h units can be calculated in the following way:

$$ {\bf N}_{jh} \left( H \right)=r_{K-j,K-h} -\sum\limits_{k=h}^{j} \left( {\bf P}^\ast \right)_{K-j,K-k}^{H+1} r_{K-k,K-h} =r_{K-j,K-h} -\sum\limits_{k=h}^{j} {\bf P}_{K-j,K-k}^{H+1}\, r_{K-k,K-h} , $$

where r is the matrix block given in Eq. 32.

Taking limits when H tends to infinity:

$$ {\bf N}_{jh} =r_{K-j,K-h} . $$

If we assume that the initial distribution of the system is (\(\boldsymbol\alpha \) ⊗ \(\boldsymbol{\rm \gamma} \), 0), these measures can be calculated as follows. In this case, from Eq. 4 we have:

$$ _{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}\left( { H} \right)=\left( {\boldsymbol\alpha \otimes \boldsymbol{\rm \gamma} ,{\bf 0}} \right)\left( {{\bf I-P}^\ast } \right)^{-1}-\left( {\boldsymbol\alpha \otimes \boldsymbol{\rm \gamma} ,{\bf 0}} \right)\left( {{\bf P}^\ast } \right)^{{H}+1}\left( {{\bf I-P}^\ast } \right)^{-1}. $$

Given the initial distribution, only the first mt rows of \(({\bf I}-{\bf P}^\ast)^{-1}\) intervene in the product (\(\boldsymbol\alpha \) ⊗ \(\boldsymbol{\rm \gamma} \), 0)\(({\bf I}-{\bf P}^\ast)^{-1}\). On the other hand, the matrix \(({\bf I}-{\bf P}^\ast)^{-1}\) has been worked out by blocks in Eq. 32. Therefore we define \({\bf Z}_k^\ast \) as the sub-matrix of \(({\bf I}-{\bf P}^\ast)^{-1}\) obtained by considering only the first mt rows and the columns associated with the macro-state k units in the system. It is equal to:

$$ {\bf Z}_k^\ast =\left( {r_{0,K-k} } \right)_{1:mt,:} \quad \mbox{for}\,k=1,...,K. $$

The second element of the difference is analysed in the following way. Let \({\bf Q}_0^{h,k} \) be the matrix composed of the transition probabilities when the system is initially in a phase of the macro-state \(E_0^K \) and h units of time later it occupies a phase of the macro-state U k. This matrix can be expressed as:

$$ {\bf Q}_0^{h,k} =\left( {{\bf Q}_{00}^{h,k} ,{\bf Q}_{01}^{h,k} ,{\bf Q}_{02}^{h,k} ,...{\bf Q}_{0k}^{h,k} } \right);\quad k=1,...,K. $$

Following a reasoning similar to the one above, we define:

$$ {\bf Z}_k^h =\sum\limits_{i=k}^K {{\bf Q}_0^{h,i} \times r_{K-i,K-k} } ,\quad h\ge 1,k=1,...,K. $$

Hence, the average number of visits to the phases of the macro-state U k up to time H, given that the initial distribution is (\(\boldsymbol\alpha \) ⊗ \(\boldsymbol{\rm \gamma} \), 0), is equal to:

$$ _{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}_k \left( {H} \right)=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\left( {{\bf Z}_k^\ast -{\bf Z}_k^{\left( {{H}+1} \right)} } \right)\cdot $$
(39)

If all macro-states are considered, the average number of visits to the different phases is given by the corresponding element of the matrix:

$$ \label{eq40} _{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}\left( {H} \right)=\left( {\boldsymbol\alpha \otimes \boldsymbol{\rm \gamma} } \right)\left( {{\bf Z}^\ast -{\bf Z}^{\left( {{H}+1} \right)}} \right), $$
(40)

where \({\bf Z}^\ast =\left( {{\bf Z}_K^\ast ,{\bf Z}_{K-1}^\ast ,...,{\bf Z}_1^\ast } \right)\) and \({\bf Z}^h=\left( {{\bf Z}_K^h ,{\bf Z}_{K-1}^h ,...,{\bf Z}_1^h } \right)\) for h ≥ 0.

If H tends to infinity, the average number of visits until there are no units in the system is obtained. It is equal to:

$$ \label{eq41} _{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} ,{\bf 0}} \right)\left( {{\bf I-P}^\ast } \right)^{-1}=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Z}^\ast , $$
(41)

Then, the average number of visits to the phases of the macro-state k units in the system is given by:

$$ _{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}_k =\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Z}_k^\ast . $$
(42)
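A sketch of Eqs. 39–42, assuming the cell arrays Zstar{k} = \({\bf Z}_k^\ast \) and ZH1{k} = \({\bf Z}_k^{(H+1)}\) have been assembled from the blocks of Eq. 32 as described above:

    ag = kron(alpha, gamma);                % nonzero part of the initial law
    Nk = cell(1, K);
    for k = 1:K
        Nk{k} = ag * (Zstar{k} - ZH1{k});   % Eq. 39: visits to U_k
    end
    NH = horzcat(Nk{K:-1:1});               % Eq. 40, ordered as Z*
    NkInf = cell(1, K);
    for k = 1:K
        NkInf{k} = ag * Zstar{k};           % Eq. 42: limit for U_k
    end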

B.3 Conditional probability of failure of the online unit and of the system

The conditional probabilities of failure defined and worked out in Section 6.1 can be calculated by considering the algorithmic form of the transition probabilities described in Section B.1 above. We show the repairable failure case; the others can be worked out in a similar way. This measure is given by Eq. 9 in Section 6.1. If the expressions for the transition probabilities are considered, then:

$${ r_p^k \left( v \right)=\left\{ {{\begin{array}{*{20}l} {\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Q}_{0,0}^{v-1,1} \left( {{\bf Te}\otimes {\bf L}_r^0 } \right)} \hfill & ; \hfill & {k=1} \hfill \\ {\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Q}_{0,0}^{v-1,k} \left( {{\bf Te}\otimes {\bf L}_r^0 } \right)+\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\sum\limits_{s=1}^{k-1} {{\bf Q}_{0,s}^{v-1,k} } \left( {{\bf e}_{k-1} \otimes {\bf Te}\otimes {\bf L}_r^0 \otimes {\bf e}_n } \right)} \hfill & ; \hfill & {k=2,...,K.} \hfill \\ \end{array} }} \right.} $$

B.3.1 Conditional probability of failure of the system

Next, a similar reasoning for the conditional probability of failure of the system given in Eq. 14 is performed. For the repairable case it is:

$$ rs_p^k \left( v \right)=\left\{ {{\begin{array}{*{20}l} {\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Q}_{0,0}^{v-1,1} \left( {{\bf Te}\otimes {\bf L}_r^0 } \right)} \hfill & ; \hfill & {k=1} \hfill \\[3pt] {\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Q}_{0,k-1}^{v-1,k} \left( {{\bf Te}\otimes {\bf L}_r^0 \otimes {\bf S}e} \right)} \hfill & ; \hfill & {k=2,...,K.} \hfill \\ \end{array} }} \right. $$

A similar reasoning can be performed for the other cases of failures.

B.4 Average number of failures

The average number of failures has been analysed in Section 6.3. We calculate this measure in an algorithmic form, first for the online unit and then for the system. In both cases, the measure is obtained up to a certain time and until there are no units left in the system.

B.4.1 Average number of failures of the online unit

We analyse the case of accidental repairable failures; the others can be worked out in a similar way. If Eq. 40 is considered, the average number of repairable failures in the system up to a certain time H is equal to:

$$ {ANFp}\left( {H} \right)=\sum\limits_{v=1}^{H} \sum\limits_{k=1}^{K} r_p^k \left( v \right)=\left( \boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} \right)\sum\limits_{k=1}^K \left( {\bf Z}_k^\ast -{\bf Z}_k^{H} \right){\bf V}_k^p =\left[ _{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}\left( {{H}-1} \right) \right] {\bf V}^p, $$

where the column vector \({\bf V}^p=\left( {{\bf V}_K^p ,{\bf V}_{K-1}^p ,\ldots ,{\bf V}_2^p ,{\bf V}_1^p } \right)^\prime \) has blocks:

$$ {\bf V}_k^p =\left\{ {{\begin{array}{*{20}l} {\left( {{\begin{array}{*{20}c} {{\bf Te}\otimes {\bf L}_r^0 } \hfill \\ {{\bf e}_{k-1} \otimes {\bf Te}\otimes {\bf L}_r^0 \otimes {\bf e}_n } \hfill \\ {{\bf 0}_n } \hfill \\ \end{array} }} \right)} \hfill & ; \hfill & {k=2,...,K} \hfill \\[15pt] {\left( {{\begin{array}{*{20}c} {{\bf Te}\otimes {\bf L}_r^0 } \hfill \\ {{\bf 0}_n } \hfill \\ \end{array} }} \right)} \hfill & ; \hfill & {k=1,} \hfill \\ \end{array} }} \right. $$

and \({\bf 0}_n \) is a column vector of zeros with n rows (the number of phases when all units are in the repair channel).
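For illustration, \({\bf V}_k^p \) can be assembled with Kronecker products; in this hedged sketch, Te (the column vector \({\bf Te}\)) and Lr0 (= \({\bf L}_r^0 \)) are assumed inputs, and the e-vectors are columns of ones:

    base = kron(Te, Lr0);                   % block for the w = 0 phases
    if k == 1
        Vp{k} = [base; zeros(n,1)];
    else                                    % e_{k-1} (x) Te (x) L_r^0 (x) e_n
        mid = kron(ones(k-1,1), kron(base, ones(n,1)));
        Vp{k} = [base; mid; zeros(n,1)];
    end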

For repairable failures and taking limits, if Eq. 41 is considered, the average number of failures until there are no units in the system is:

$$ ANFp=\sum\limits_{v=1}^{\infty} \sum\limits_{k=1}^{K} r_p^k \left( v \right)=\left( \boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} \right)\sum\limits_{k=1}^K {\bf Z}_k^\ast {\bf V}_k^p =\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Z}^\ast {\bf V}^p=\left[ {}_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N} \right] {\bf V}^p. $$

If the vector \({\bf V}_k^p \) is modified accordingly, the average number of failures for the other types of failures can be worked out.
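Both expressions reduce to a loop over the macro-states; a sketch with Vp{k} = \({\bf V}_k^p \) from the previous sketch and ZH{k} = \({\bf Z}_k^H \) (exponent H here, versus H + 1 in Eq. 40, since \(_{(\boldsymbol\alpha \otimes \boldsymbol\gamma)}{\bf N}(H-1)\) appears in the formula):

    ag = kron(alpha, gamma);
    ANFpH = 0; ANFpInf = 0;
    for k = 1:K
        ANFpH   = ANFpH   + ag * (Zstar{k} - ZH{k}) * Vp{k};  % up to time H
        ANFpInf = ANFpInf + ag * Zstar{k} * Vp{k};            % until no units
    end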

B.4.2 Average number of failures of the system

We perform a similar reasoning for analysing the average number of failures of the system, up to a certain time and until there are no units. The case in which the system fails due to a repairable failure is shown; the others can be obtained analogously. It can be expressed as:

$$ ANFSp\left( {H} \right)=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\sum\limits_{k=1}^K {\left( {{\bf Z}_k^\ast -{\bf Z}_k^{H} } \right){\bf V}_k^{Sp} =\left[ {_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}\left( {H-1} \right)} \right]} {\bf V}^{Sp} $$

where \({\bf V}^{Sp}=\left( {{\bf V}_K^{Sp} ,{\bf V}_{K-1}^{Sp} ,\ldots ,{\bf V}_2^{Sp} ,{\bf V}_1^{Sp} } \right)^\prime \), with:

$$ {\bf V}_k^{Sp} =\left\{ {{\begin{array}{*{20}c} {\left( {{\begin{array}{*{20}c} {{\bf 0}_{mt+\left( {k-2} \right)mtn} } \hfill \\ {{\bf Te}\otimes {\bf L}_r^0 \otimes {\bf Se}} \hfill \\ {{\bf 0}_n } \hfill \\ \end{array} }} \right)} \hfill & ; \hfill & {k=2,...,K} \hfill \\[15pt] {\left( {{\begin{array}{*{20}c} {{\bf Te}\otimes {\bf L}_r^0 } \hfill \\ {{\bf 0}_n } \hfill \\ \end{array} }} \right)} \hfill & ; \hfill & {k=1} \hfill \\ \end{array} }} \right.. $$

For repairable failures, the average number of failures of the system until there are no units is:

$$ {ANFSp}=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\sum\limits_{k=1}^K {{\bf Z}_k^\ast {\bf V}_k^{Sp} =\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Z}^\ast {\bf V}^{Sp}=\left[ {_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}} \right]{\bf V}^{Sp}.} $$

If the vector \({\bf V}_k^{Sp} \) is modified accordingly, the average number of failures of the system due to the different types of failures can be worked out.

The average number of failures of the system is given in Eqs. 23 and 24. In an algorithmic form it can be calculated as:

$$ {ANFS}\left( H \right)=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\sum\limits_{k=1}^K {\left( {{\bf Z}_k^\ast -{\bf Z}_k^H } \right){\bf V}_k^S =\left[ {_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}\left( {H-1} \right)} \right]} {\bf V}^S $$

where \({\bf V}^S=\left( {{\bf V}_K^S ,{\bf V}_{K-1}^S ,\ldots ,{\bf V}_2^S ,{\bf V}_1^S } \right)^\prime \), with:

$$ {\bf V}_k^S =\left\{ {{\begin{array}{*{20}c} {\left( {{\begin{array}{*{20}c} {{\bf 0}_{mt+\left( {k-2} \right)mtn} } \hfill \\[4pt] {{\bf Te}\otimes {\bf L}^0\otimes \boldsymbol{\rm Se}+{\bf T}^0\otimes {\bf e}_t \otimes \boldsymbol{\rm Se}} \hfill \\ {{\bf 0}_n } \hfill \\ \end{array} }} \right)} \hfill & ; \hfill & {k=2,...,K} \hfill \\[18pt] {\left( {{\begin{array}{*{20}c} {{\bf Te}\otimes {\bf L}^0+{\bf T}^0\otimes {\bf e}_t } \hfill \\ {{\bf 0}_n } \hfill \\ \end{array} }} \right)} \hfill & ; \hfill & {k=1} \hfill \\ \end{array} }} \right. $$

The average number of failures of the system until there are no units is:

$$ {ANFS}=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\sum\limits_{k=1}^K {\bf Z}_k^\ast {\bf V}_k^S =\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Z}^\ast {\bf V}^S=\left[ {}_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N} \right] {\bf V}^S. $$

B.5 Average number of lost units

The average number of lost units up to a certain time H has been calculated in Eq. 25. If the results above are considered then:

$$ {ANUL}\left( H \right)=\sum\limits_{v=1}^H \sum\limits_{k=1}^K \left(r_q^k \left( v \right)+r_d^k \left( v \right) \right)=\left( \boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} \right) \sum\limits_{k=1}^K \left( {\bf Z}_k^\ast -{\bf Z}_k^H \right){\bf V}_k^L ={}_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}\left( H-1 \right){\bf V}^L , $$
(43)

where the column vector \({\bf V}^L=\left( {{\bf V}_K^L ,{\bf V}_{K-1}^L ,\ldots ,{\bf V}_2^L ,{\bf V}_1^L } \right)^\prime \) has blocks:

$$ {\bf V}_k^L =\left\{ \begin{array}{lll} \left( \begin{array}{c} {\bf Te}\otimes {\bf L}_{nr}^0 +{\bf T}^0\otimes {\bf e}_t \\[4pt] {\bf e}_{k-1} \otimes {\bf Te}\otimes {\bf L}_{nr}^0 \otimes {\bf e}_n +{\bf e}_{k-1} \otimes {\bf T}^0\otimes {\bf e}_{nt} \\[4pt] {\bf 0}_n \end{array} \right) & ; & k=2,\ldots,K \\[18pt] \left( \begin{array}{c} {\bf Te}\otimes {\bf L}_{nr}^0 +{\bf T}^0\otimes {\bf e}_t \\[4pt] {\bf 0}_n \end{array} \right) & ; & k=1. \end{array} \right. $$

Appendix C

In this appendix the different average net rewards described in Section 7 are worked out in an algorithmic form from the appendices above. In what follows we consider h ≥ 1.

C.1 Average net reward at a fixed time h

This measure is given in Eq. 27. The case h = 0 is immediate. It is equal to:

$$ {RW}\left( 0 \right)=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} , \mathbf{0}} \right){\bf c}=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf c}_0^K . $$

We analyse the case h ≥ 1. The initial distribution affects only the first mt phases. Therefore, if Appendix B, Section B.1, is considered, then:

$$ \left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma},\mathbf{0}} \right)\left( {\bf P}^\ast \right)^h=p_{\bigcup\limits_{k=1}^K \bigcup\limits_{s=0}^k E_s^k } \left( h \right)=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\sum\limits_{k=1}^K \sum\limits_{s=0}^k {\bf Q}_{0s}^{h,k} . $$

Hence:

$$ {RW}\left( h \right)=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\sum\limits_{k=1}^K {\sum\limits_{s=0}^k {{\bf Q}_{0s}^{h,k} {\bf c}_s^k .} } $$

This function can be expressed as:

$$ {RW}\left( h \right)=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\sum\limits_{k=1}^K {{\bf Q}_0^{h,k} {\bf c}^k,} $$

where \({\bf c}^k\) is the column vector with the average net reward when the system visits the phases of the macro-state U k, that is, k units present in the system. This vector is:

$$ {\bf c}^k=\left( {{\begin{array}{*{20}c} {{\bf c}_0^k } \hfill \\[2pt] {{\bf c}_1^k } \hfill \\ \vdots \hfill \\ {{\bf c}_k^k } \hfill \\ \end{array} }} \right);\quad \mbox{for}\,\,\,k=1,...,K. $$
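A sketch of RW(h) for h ≥ 1, assuming Q0{k}{s+1} holds the block \({\bf Q}_{0,s}^{h,k} \) of Appendix B and ck{k} stacks the reward sub-vectors \({\bf c}_0^k, \ldots, {\bf c}_k^k \) as above:

    ag = kron(alpha, gamma);
    RWh = 0;
    for k = 1:K
        Qrow = horzcat(Q0{k}{:});      % [Q_{0,0}^{h,k}, ..., Q_{0,k}^{h,k}]
        RWh = RWh + ag * Qrow * ck{k};
    end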

C.2 Cumulative average net reward up to time H

The cumulative average net reward up to time H is given in Eq. 28. The case H = 0 is immediate:

$$ {CRW}\left( 0 \right)=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf c}_0^K . $$

We build a methodology for calculating this measure for the case H ≥ 1. Due to the initial distribution, this method reduces the order of the matrices involved. The measure can be interpreted as the mean number of visits to each state multiplied by the corresponding net reward. The average net cost-reward up to a certain time H for the case of k units in the system can be obtained as:

$$ {CRW}_k \left( H \right)=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\left[ {{\bf Z}_k^\ast -{\bf Z}_k^{H+1} } \right]{\bf c}^k=\left[ {_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}_k \left( H \right)} \right]{\bf c}^k,\,k=1,...,K, $$

where \(_{(\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} )}\) N k (H) is given in Eq. 39.

Adding these values over all k, the average net cost-reward up to a certain time H is obtained:

$$ {CRW}\left( H \right)=\sum\limits_{k=1}^K {{CRW}_k \left( H \right)=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\sum\limits_{k=1}^K {\left[ {{\bf Z}_k^\ast -{\bf Z}_k^{H+1} } \right]{\bf c}^k=\left[ {_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}\left( H \right)} \right]} {\bf c}} $$

where \(_{(\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} )}\) N(H) and c are given in Eqs. 40 and 26 respectively.

When H tends to infinity, the net reward until there are no units in the system is calculated. For the case of k units in the system it is:

$$ {CRW}_k =\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Z}_k^\ast {\bf c}^k=\left[ {_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}_k } \right]{\bf c}^k, $$

where \(_{(\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} )}\) N k is given in Eq. 42.

And for any number of units it is:

$$ {CRW}=\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right){\bf Z}^\ast {\bf c}=\left[ {_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}} \right]{\bf c}, $$

where \(_{(\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} )}\) N is given in Eq. 41.
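A sketch of the cumulative reward up to H and of its limit, reusing the cells Zstar{k} = \({\bf Z}_k^\ast \) and ZH1{k} = \({\bf Z}_k^{(H+1)}\) of Appendix B and the reward vectors ck{k} of Section C.1:

    ag = kron(alpha, gamma);
    CRWH = 0; CRWinf = 0;
    for k = 1:K
        CRWH   = CRWH   + ag * (Zstar{k} - ZH1{k}) * ck{k};   % CRW(H)
        CRWinf = CRWinf + ag * Zstar{k} * ck{k};              % H -> infinity
    end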

C.3 Cumulative average net reward up to time H including cost per loss of units

The mean number of lost units up to time H is given in Eq. 25 and in an algorithmic form in Eq. 43. From this point the total average net reward given in Eq. 31 can be calculated as:

$$ \begin{array}{rll} {TCRW}\left( H \right)&=&\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\sum\limits_{k=1}^K \left[ {\bf Z}_k^\ast -{\bf Z}_k^{H+1} \right] {\bf c}^k-\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)\sum\limits_{k=1}^K \left({\bf Z}_k^\ast -{\bf Z}_k^H\right) {\bf V}_k^L A \\[6pt] &=&\left[ {}_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}\left( H \right) \right]{\bf c}-{}_{\left( {\boldsymbol{\rm \alpha} \otimes \boldsymbol{\rm \gamma} } \right)} {\bf N}\left( {H-1} \right){\bf V}^L A , \end{array} $$

where the column vector \({\bf V}^L=\left( {{\bf V}_K^L ,{\bf V}_{K-1}^L ,\ldots ,{\bf V}_2^L ,{\bf V}_1^L } \right)^\prime \) is the one appearing in Eq. 43 and A is the cost per lost unit.
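A sketch combining the two previous computations; A is the cost per lost unit, VL{k} = \({\bf V}_k^L \), and ZH{k} = \({\bf Z}_k^H \) as before:

    ag = kron(alpha, gamma);
    TCRWH = 0;
    for k = 1:K
        TCRWH = TCRWH + ag * (Zstar{k} - ZH1{k}) * ck{k} ...  % reward term
                      - ag * (Zstar{k} - ZH{k})  * VL{k} * A; % loss penalty
    end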

Cite this article

Ruiz-Castro, J.E., Fernández-Villodre, G. & Pérez-Ocón, R. A Multi-Component General Discrete System Subject to Different Types of Failures with Loss of Units. Discrete Event Dyn Syst 19, 31–65 (2009). https://doi.org/10.1007/s10626-008-0046-3
