
Planning of life-depleting preventive maintenance activities with replacements

  • Original Research • Annals of Operations Research

Abstract

We consider a system that generates a reward at a predefined decreasing rate as its virtual age increases. Periodic maintenance (PM) is performed to restore the system to an as-good-as-new condition, i.e., its virtual age is reset to zero after each maintenance action. However, the system has a given deterministic initial lifetime, and each maintenance action also depletes part of the system’s remaining lifetime. Whenever the lifetime expires, the system is replaced by a new identical one at a predefined cost. From an application perspective, the considered setting is primarily motivated by battery-powered systems, e.g., sensors, that need to be remotely monitored. We formulate a general repair-replacement problem with an infinite time horizon, where we seek an optimal number of PM actions and an optimal PM interval. We describe global optimization techniques for solving the general case of the problem under reasonably mild assumptions on the reward and lifetime depletion rates. For the special case of a constant lifetime depletion per maintenance action, some analytic observations are provided. Finally, we also demonstrate that for sufficiently large replacement costs the infinite-horizon problem can be solved by leveraging the results for the related finite-horizon problem previously considered in the literature.


References

  • Adjiman, C., Androulakis, I., & Floudas, C. (1998). A global optimization method, \(\alpha \) BB, for general twice-differentiable constrained NLPs-II. Implementation and computational results. Computers and Chemical Engineering, 22, 1159–1179.


  • Ahmad, R., & Kamaruddin, S. (2012). An overview of time-based and condition-based maintenance in industrial application. Computers and Industrial Engineering, 63(1), 135–149.


  • Alghunaim, S. A., & Sayed, A. H. (2020). Linear convergence of primal-dual gradient methods and their performance in distributed optimization. Automatica, 117, 109003.


  • Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge: Cambridge University Press.


  • Finkelstein, M., & Ludick, Z. (2014). On some steady-state characteristics of systems with gradual repair. Reliability Engineering and System Safety, 128, 17–23.


  • Finkelstein, M., Shafiee, M., & Kotchap, A. N. (2016). Classical optimal replacement strategies revisited. IEEE Transactions on Reliability, 65(2), 540–546.


  • Hansen, P., Jaumard, B., & Lu, S.-H. (1992). Global optimization of univariate Lipschitz functions: I. Survey and properties. Mathematical Programming, 55(1–3), 251–272.


  • Jibetean, D., & de Klerk, E. (2006). Global optimization of rational functions: A semidefinite programming approach. Mathematical Programming, 106(1), 93–109.


  • Jung, W., Rillig, A., Birkemeyer, R., Miljak, T., & Meyerfeldt, U. (2008). Advances in remote monitoring of implantable pacemakers, cardioverter defibrillators and cardiac resynchronization therapy systems. Journal of Interventional Cardiac Electrophysiology, 23(1), 73–85.


  • Kaio, N., Dohi, T., & Osaki, S. (2002). Classical maintenance models. In Stochastic models in reliability and maintenance (pp. 65–87). Springer.

  • Khojandi, A., Maillart, L. M., & Prokopyev, O. A. (2014a). Optimal planning of life-depleting maintenance activities. IIE Transactions, 46(7), 636–652.


  • Khojandi, A., Maillart, L. M., Prokopyev, O. A., Roberts, M. S., Brown, T., & Barrington, W. W. (2014b). Optimal implantable cardioverter defibrillator (ICD) generator replacement. INFORMS Journal on Computing, 26(3), 599–615.


  • Kijima, M., Morimura, H., & Suzuki, Y. (1988). Periodical replacement problem without assuming minimal repair. European Journal of Operational Research, 37(2), 194–203.


  • Kvasov, D. E., & Mukhametzhanov, M. S. (2018). Metaheuristic vs. deterministic global optimization algorithms: The univariate case. Applied Mathematics and Computation, 318, 245–259.


  • Machado, R., & Tekinay, S. (2008). A survey of game-theoretic approaches in wireless sensor networks. Computer Networks, 52(16), 3047–3061.


  • Nakagawa, T. (1986). Periodic and sequential preventive maintenance policies. Journal of Applied Probability, 23(2), 536–542.


  • Nakagawa, T. (1988). Sequential imperfect preventive maintenance policies. IEEE Transactions on Reliability, 37(3), 295–298.


  • Neumann, P. J., & Greenberg, D. (2009). Is the United States ready for QALYs? Health Affairs, 28(5), 1366–1371.


  • Pham, H., & Wang, H. (1996). Imperfect maintenance. European Journal of Operational Research, 94(3), 425–438.


  • Sengul, C., Bakht, M., Harris, A. F., Abdelzaher, T., & Kravets, R. (2008). Improving energy conservation using bulk transmission over high-power radios in sensor networks. In 2008 The 28th international conference on distributed computing systems (pp. 801–808). IEEE.

  • uit het Broek, M. A., Teunter, R. H., de Jonge, B., Veldman, J., & Van Foreest, N. D. (2020). Condition-based production planning: Adjusting production rates to balance output and failure risk. Manufacturing and Service Operations Management, 22(4), 792–811.


  • Wang, H. (2002). A survey of maintenance policies of deteriorating systems. European Journal of Operational Research, 139(3), 469–489.


  • Wang, G. J., & Zhang, Y. L. (2014). Geometric process model for a system with inspections and preventive repair. Computers and Industrial Engineering, 75, 13–19.


  • Wang, G. J., Zhang, Y. L., & Yam, R. C. (2017). Preventive maintenance models based on the generalized geometric process. IEEE Transactions on Reliability, 66(4), 1380–1388.


  • Yazdandoost, K. Y., & Kohno, R. (2009). Health care and medical implanted communications service. In 13th International conference on biomedical engineering (pp. 1158–1161). Springer.

  • Zhao, X., Al-Khalifa, K. N., Hamouda, A. M., & Nakagawa, T. (2017). Age replacement models: A summary with new perspectives and methods. Reliability Engineering and System Safety, 161, 95–105.



Acknowledgements

The article was prepared within the framework of the Basic Research Program at the National Research University Higher School of Economics (Sections 1–3) and funded by RFBR according to the research project 20-37-90060 (Sections 4–5). The authors are thankful to the associate editor and two anonymous referees for their constructive comments that allowed us to greatly improve the paper.

Author information


Corresponding author

Correspondence to Oleg A. Prokopyev.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Proofs

Proof of Lemma 1

First, assume that \(n = 0\). Then \(T(0) = \{L\}\) by definition. Hence, (9) holds by construction.

Otherwise, assume that \(n \in N \setminus \{0\}\). In this case the denominator in \(V(\tau , n)\) is strictly positive; recall (6). Next, by Assumption A1 we observe that:

$$\begin{aligned}&V(\tau , n) = \frac{nR(\tau ) + R(L - n(\tau + \Delta ))-c}{L - n\Delta } \\&\frac{\partial ^2 V(\tau , n)}{\partial \tau ^2} = \frac{nr'(\tau ) + n^2r'(L - n(\tau + \Delta ))}{L - n\Delta } \end{aligned}$$

Furthermore, by Assumption A1 we have that \(r'(t) < 0\) for any \(t \in [0, L]\); also, recall that \(\tau \in T(n)\). Hence, \(L - n(\tau + \Delta ) > 0\) and \(\frac{\partial ^2 V(\tau , n)}{\partial \tau ^2} < 0\) for any \(\tau \in T(n)\). Thus, the first part of the lemma holds.

Next, using the first-order optimality condition we derive a closed form expression for the optimal PM interval. That is,

$$\begin{aligned} \frac{\partial V(\tau , n)}{\partial \tau } = \frac{nr(\tau ) - nr(L - n(\tau + \Delta ))}{L - n\Delta } = 0, \end{aligned}$$

which implies that \(r(\tau ) = r(L - n(\tau + \Delta ))\) and, hence, \(\tau = L - n(\tau + \Delta )\). The latter observation follows from the monotonicity of the reward rate and results in (9). In conclusion, by a straightforward calculation we can verify that \(\tau ^0(n) \in T(n)\), which completes the proof. \(\square \)
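For completeness, the algebra behind the final step of the proof above is elementary; solving the fixed-point relation for \(\tau \) gives the closed form that, presumably, is the expression stated in (9):

$$\begin{aligned} \tau = L - n(\tau + \Delta ) \;\Longrightarrow \; (n + 1)\tau = L - n\Delta \;\Longrightarrow \; \tau ^0(n) = \frac{L - n\Delta }{n + 1}. \end{aligned}$$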

Proof of Lemma 2

Consider optimization problem (7) with the integrality restriction relaxed, i.e., for \(\tau \in T(n)\) and \(n \in {\overline{N}}\), where, by Lemma 1, we may set \(\tau =\tau ^0(n)\). Then we maximize

$$\begin{aligned} V(\tau ^0(n), n) = {\hat{V}}(\tau ^0(n)) := \frac{1}{\tau ^0(n)}R( \tau ^0(n)) - \frac{c}{L + \Delta }\Big (1 + \frac{\Delta }{\tau ^0(n)}\Big ) \end{aligned}$$

over \(n \in {\overline{N}}\). Observe that (dependence on n is omitted in certain places below to simplify the notation):

$$\begin{aligned} \frac{dV(\tau ^0(n), n)}{dn} = \frac{d{\hat{V}}(\tau ^0)}{d\tau ^0}\, \frac{d\tau ^0(n)}{dn} = - \Big \{\frac{c\Delta }{L + \Delta } + \tau ^0 \cdot r(\tau ^0) - R(\tau ^0) \Big \}\frac{\Big (1 + \frac{\Delta }{\tau ^0}\Big )^2}{L + \Delta } \end{aligned}$$
(23)

Thus, the first-order optimality condition in terms of \(\tau ^0(n)\) reduces to (10). Define \(g(\tau ^0) := \tau ^0 \cdot r(\tau ^0) - R(\tau ^0)\) and observe that it strictly decreases as a function of \(\tau ^0\). In fact,

$$\begin{aligned} g'(\tau ^0) = \tau ^0 \cdot r'(\tau ^0) < 0 \end{aligned}$$

as \(\tau ^0(n)\) is strictly positive for \(n \in {\overline{N}}\) and \(r'(\tau ^0) < 0\) by Assumption A1. Moreover, \(g(0) = 0\) and, therefore, (10) has at most one positive solution with respect to \(\tau \). Consequently, the necessary result can be derived using (9). In particular, we consider three possible cases:

(i):

Assume that (10) has a unique solution \({\overline{\tau }}^* \in (0, L]\). Then the corresponding value of parameter n, which is referred to as \({\overline{n}}^*\), is obtained by inverting (9) and thus satisfies (11). Furthermore, \({\overline{n}}^* \in {\overline{N}}\) by definition, and we can show that \({\overline{n}}^*\) is the unique maximizer, i.e.,

$$\begin{aligned} {\overline{n}}^* = {{\,\mathrm{argmax}\,}}\{V(\tau ^0(n), n): n \in {\overline{N}}\} \end{aligned}$$

Specifically, from (23) we conclude that, if \(\tau ^0(n) > {\overline{\tau }}^*\) or, equivalently, \(n < {\overline{n}}^*\), then the first derivative is positive and \(V(\tau ^0(n), n)\) increases as a function of n. On the other hand, if \(\tau ^0(n) < {\overline{\tau }}^*\) or \(n> {\overline{n}}^*\), then the first derivative is negative and, hence, \(V(\tau ^0(n), n)\) decreases as a function of n.

(ii):

Assume that (10) has a unique solution \({\overline{\tau }}^* > L\). Then the corresponding value of parameter n is negative and thus, the maximum is given by \({\overline{n}}^* = 0\); see (i).

(iii):

Assume that (10) has no positive solutions. Then (23) implies that the first derivative is negative and therefore, the objective function is strictly decreasing in n. Thus, the maximum is also given by \({\overline{n}}^* = 0\). This observation concludes the proof.

\(\square \)
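As an illustration of the uniqueness argument above, the root of (10) can be located numerically, e.g., by bisection on \(g(\tau ) = \tau \cdot r(\tau ) - R(\tau )\). The following Python sketch is hypothetical (function names and tolerance are not from the paper) and only exploits the monotonicity of g established in the proof:

def solve_eq_10(r, R, L, c, delta, tol=1e-10):
    """Bisection sketch: find the unique root of g(tau) = -c*delta/(L + delta),
    where g(tau) := tau * r(tau) - R(tau) is strictly decreasing with g(0) = 0."""
    target = -c * delta / (L + delta)
    g = lambda t: t * r(t) - R(t)
    lo, hi = 0.0, L
    if g(hi) > target:            # no root in (0, L]: cases (ii)/(iii), so the maximum is n = 0
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > target:       # g is decreasing, so the root lies to the right of mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)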

Proof of Corollary 1

Observe that Eq. (10) in the case of a linear reward rate has the form:

$$\begin{aligned} \frac{-c\Delta }{L + \Delta } = \tau (-a\tau + b) + a\frac{\tau ^2}{2} - b\tau \end{aligned}$$

Solving analytically with respect to \(\tau \) yields (13); recall the first part of Proposition 1. The value of \({\overline{n}}^*\) is then obtained by inverting (9); recall the second part of Proposition 1. \(\square \)
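As a hypothetical numerical illustration (not taken from the paper), the displayed equation for a linear reward rate \(r(t) = -at + b\) simplifies to \(-c\Delta /(L + \Delta ) = -a\tau ^2/2\), which suggests the closed form below; whether it coincides exactly with the statement of (13) should be checked against the main text.

from math import sqrt

def linear_reward_case(a, L, c, delta):
    """Candidate PM interval and relaxed number of PM actions for r(t) = -a*t + b;
    note that the intercept b cancels out of the displayed equation."""
    tau_bar = sqrt(2 * c * delta / (a * (L + delta)))   # presumably expression (13)
    n_bar = (L - tau_bar) / (tau_bar + delta)           # inverting (9), before any rounding
    return tau_bar, n_bar

# With L = 50 and c = 500 (the values used in Appendix B), a = 1 and delta = 1:
# tau_bar ≈ 4.43 and n_bar ≈ 8.40.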

Proof of Proposition 4

First, assume that \(n = 0\). Then by definition \(T(0) = \{L\}\) and the result follows. Alternatively, if \(n \ne 0\), then we represent T(n) as an intersection of three semi-infinite sets, that is,

$$\begin{aligned} T(n) = \{\tau \in \mathbb {R}: \tau > 0\} \cap \{\tau \in \mathbb {R}: \tau + f(\tau ) < \frac{L}{n}\} \cap \{\tau \in \mathbb {R}: \frac{L}{n + 1} \le \tau + f(\tau )\} \end{aligned}$$
(24)

The set \(\{\tau \in \mathbb {R}: \tau > 0\}\) is convex as an open half-space. Note that \(\widetilde{f}(\tau ):= \tau + f(\tau )\) is monotonically increasing in \(\tau \) by Assumption A2. Hence, for any \(t_0, f_0 \in \mathbb {R}\) the inequality \(\widetilde{f}(t_0) < f_0\) implies that \(\widetilde{f}(t) < f_0\) holds for all \(t \le t_0\). Then convexity of the second and the third sets in the right-hand side of (24) follows from the definition of a convex set. As a result, T(n) is convex as an intersection of convex sets. \(\square \)

Appendix B: Successive approximation algorithm

In this section we describe a simple successive approximation algorithm that solves the infinite-horizon problem (7) to a given precision, \(\varepsilon \in \mathbb {R}_{>0}\), in terms of the optimal objective function value. The algorithm is useful, e.g., when the number of feasible PM actions, \(n \in N\), is sufficiently large or when solving subproblems (8) for each particular \(n \in N\) is time-consuming.

[Algorithm 1: Successive approximation algorithm (pseudocode figure)]

In fact, given \(n \in N\) and some lower and upper bounds on the objective function value in (8), we can compare these bounds against the best currently known solution. Thus, there is no need to solve subproblems (8) for each \(n \in N\) to the required accuracy \(\varepsilon \); recall the brute-force algorithm in Sect. 4. Instead, at each step of our algorithm we consider a smaller subset of candidate solutions with respect to the number of PM actions, n, and solve the corresponding subproblems with a higher accuracy.

We presume that the required precision \(\varepsilon \) is sufficiently small to guarantee that the optimal number of PM actions is unique. The pseudocode of the algorithm is given by Algorithm 1. For some initial accuracy \(\varepsilon ' = \varepsilon _0 > 0 \) we solve subproblems (8) sequentially for each \(n \in \mathcal {N}\) using the optimization oracle defined in Sect. 4.1; see line 12 of Algorithm 1. In particular, in line 13 we check whether the current value of parameter n can still be optimal, and in lines 14–18 we update the best found solution. If for some \(n \in \mathcal {N}\) the obtained upper bound does not exceed the best currently known lower bound, i.e., the maximal one, then we remove this n from the list of candidate solutions. The above procedure is repeated with higher accuracy, i.e., we set \(\varepsilon ' := \frac{\varepsilon '}{2}\), until the optimal number of PM actions is determined. Then, if necessary, we call the oracle for (8) with \(n = n^*\) to achieve the required precision \(\varepsilon > 0\); see lines 23–24.

As a result, as \(\varepsilon '\) decreases, the computational effort of Algorithm 1 is essentially concentrated on a reasonably small subset of candidate solutions. Note that, if for a given \(n \in N\) subproblem (8) has been solved with some accuracy \(\varepsilon ' > 0\), then the oracle can reuse the obtained information to solve the same problem with higher accuracy. This property gives the successive approximation algorithm, namely Algorithm 1, an advantage over a naive brute-force approach.
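To make the description above concrete, the following Python sketch outlines the pruning loop under stated assumptions: the function oracle(n, eps) is a hypothetical stand-in for the optimization oracle of Sect. 4.1 and is assumed to return valid lower and upper bounds on the optimal value of subproblem (8). It is not the actual implementation used for the reported experiments and omits the reuse of information across accuracy levels.

def successive_approximation(candidates, oracle, eps, eps0):
    """Sketch of Algorithm 1: prune candidate PM counts n by comparing
    oracle bounds computed at progressively higher accuracy."""
    eps_cur = eps0
    cands = set(candidates)
    while len(cands) > 1:
        bounds = {n: oracle(n, eps_cur) for n in cands}    # (lower, upper) bounds per n
        best_lb = max(lb for lb, _ in bounds.values())
        # keep only candidates whose upper bound can still match the best lower bound;
        # termination relies on the assumption that the optimal n is unique and the
        # bounds tighten as eps_cur shrinks
        cands = {n for n in cands if bounds[n][1] >= best_lb}
        eps_cur /= 2                                       # repeat with higher accuracy
    n_star = cands.pop()
    lb, ub = oracle(n_star, eps)                           # final call at the required precision
    return n_star, lb, ub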

Numerical study. Let \(L = 50\), \(c = 500\), \(\varepsilon = 1\) and \(\varepsilon _0 = 128\) be the parameters of Algorithm 1. In Tables 4 and 5 we report running times of Algorithm 1 and the brute-force approach with respect to several different forms of the reward rate r(t) and constant depletion rates \(\Delta \in \{0.1, 0.5, 1, 1.5, 2\}\).

Table 4 Running times (in seconds) of the brute-force approach and the successive approximation algorithm for polynomial reward rates and constant depletion rates with \(L = 50\), \(c = 500\), \(\varepsilon = 1\) and \(\varepsilon _0 = 128\)
Table 5 Running times (in seconds) of the brute-force approach and the successive approximation algorithm for exponential reward rates and constant depletion rates with \(L = 50\), \(c = 500\), \(\varepsilon = 1\) and \(\varepsilon _0 = 128\)

Both algorithms are implemented using Wolfram Mathematica 11.2. Specifically, in the case of a constant depletion rate our fractional optimization problem (7) inherits the attractive properties of the finite-horizon problem (15) for fixed \(n \in N\). That is, the objective function in (7) is concave in \(\tau \in T(n)\). Since we only need lower and upper bounds on the objective function value in (8), we use standard primal and dual gradient methods with a fixed learning rate; see, e.g., Alghunaim & Sayed (2020). In particular, we use a learning rate of 0.1 for both methods. The starting points for the primal and dual algorithms are the center of the interval T(n), \(n \in N\), and \((1,1)^T \in \mathbb {R}^2\), respectively. A dual reformulation of (8) is rather straightforward and thus further implementation details are omitted for brevity.
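The primal method referenced above can be sketched as projected gradient ascent over the interval \(T(n)\). The Python sketch below is a minimal illustration under the assumptions stated in the text (concave objective, fixed learning rate 0.1, start at the interval centre); the function name grad_V and the stopping tolerance are hypothetical, and this is not the Mathematica code used for the reported experiments.

def projected_gradient_ascent(grad_V, lo, hi, lr=0.1, tol=1e-8, max_iter=100_000):
    """Maximize a concave objective over the interval [lo, hi] = T(n):
    take a gradient step and project back onto the interval."""
    tau = 0.5 * (lo + hi)                       # start at the centre of T(n)
    for _ in range(max_iter):
        tau_next = min(max(tau + lr * grad_V(tau), lo), hi)
        if abs(tau_next - tau) < tol:
            break
        tau = tau_next
    return tau                                  # evaluating V(tau, n) here gives a lower bound on (8)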

The computational results reported in Tables 4 and 5 are fairly consistent. In particular, the running times of both algorithms tend to increase with the decrease of parameter \(\Delta \). This observation is not surprising given that the value of \(n_{max}\) increases as the value of \(\Delta \) decreases; recall (4) and the definition of \(n_{max}\).

More importantly, in all test instances the successive approximation algorithm either outperforms the brute-force approach or provides almost the same running time performance. The latter occurs only for \(r(t) = 30 e^{-\frac{t}{60}}\), which corresponds to a “sharp” decrease of the reward rate as t increases. Our intuition is that in this setting the initial difference between the upper and lower bounds, \({\overline{V}}_n - {\underline{V}}_n\), turns out to be smaller than the desired solution accuracy, \(\varepsilon \), for a sufficiently large subset of feasible \(n \in N\). Hence, for these values of n both algorithms demonstrate almost the same performance, since the gradient methods are not even invoked.



Cite this article

Ketkov, S.S., Prokopyev, O.A. & Maillart, L.M. Planning of life-depleting preventive maintenance activities with replacements. Ann Oper Res 324, 1461–1483 (2023). https://doi.org/10.1007/s10479-022-04767-4
