
Efficient approximation schemes for the maximum lateness minimization on a single machine with a fixed operator or machine non-availability interval

Journal of Combinatorial Optimization

Abstract

In this paper we deal with the single machine scheduling problem with one non-availability interval to minimize the maximum lateness, where jobs have positive tails. Two cases are considered. In the first one, the non-availability interval is due to machine maintenance. In the second, the non-availability interval is related to the operator who organizes the execution of jobs on the machine. The contribution of this paper consists of an improved fully polynomial time approximation scheme (FPTAS) for the maintenance non-availability interval case and the first FPTAS for the operator non-availability interval case. Both FPTASs are strongly polynomial.


References

  • Brauner N, Finke G, Kellerer H, Lebacque V, Rapine C, Potts C, Strusevich V (2009) Operator non-availability periods. 4OR 7:239–253

  • Carlier J (1982) The one-machine sequencing problem. Eur J Oper Res 11:42–47

  • Chen Y, Zhang A, Tan Z (2013) Complexity and approximation of single machine scheduling with an operator non-availability period to minimize total completion time. Inf Sci 251:150–163

  • Dessouky MI, Margenthaler CR (1972) The one-machine sequencing problem with early starts and due dates. AIIE Trans 4(3):214–222

  • Gens GV, Levner EV (1981) Fast approximation algorithms for job sequencing with deadlines. Discret Appl Math 3:313–318

  • He Y, Zhong W, Gu H (2006) Improved algorithms for two single machine scheduling problems. Theor Comput Sci 363:257–265

  • Ibarra O, Kim CE (1975) Fast approximation algorithms for the knapsack and sum of subset problems. J ACM 22:463–468

  • Kacem I (2009) Approximation algorithms for the makespan minimization with positive tails on a single machine with a fixed non-availability interval. J Comb Optim 17(2):117–133

  • Kacem I, Kellerer H (2014) Approximation algorithms for no idle time scheduling on a single machine with release times and delivery times. Discret Appl Math 164(1):154–160

  • Kubzin MA, Strusevich VA (2006) Planning machine maintenance in two machine shop scheduling. Oper Res 54:789–800

  • Lee CY (1996) Machine scheduling with an availability constraint. J Glob Optim 9:363–384

  • Qi X (2007) A note on worst-case performance of heuristics for maintenance scheduling problems. Discret Appl Math 155:416–422

  • Qi X, Chen T, Tu F (1999) Scheduling the maintenance on a single machine. J Oper Res Soc 50:1071–1078

  • Rapine C, Brauner N, Finke G, Lebacque V (2012) Single machine scheduling with small operator-non-availability periods. J Sched 15:127–139

  • Sahni S (1976) Algorithms for scheduling independent tasks. J ACM 23:116–127

  • Schmidt G (2000) Scheduling with limited machine availability. Eur J Oper Res 121:1–15

  • Yuan JJ, Shi L, Ou JW (2008) Single machine scheduling with forbidden intervals and job delivery times. Asia-Pac J Oper Res 25(3):317–325


Acknowledgments

The authors would like to thank the referees and the editors for their helpful remarks and suggestions. This work has been funded by the CONSEIL REGIONAL DE LORRAINE (under the Programme “Chercheur d’Excellence 2013”).

Author information

Correspondence to Imed Kacem.

Additional information

A short version of this paper was presented at the ISCO’2014 conference.

Appendices

Appendix 1: Proof of Theorem 5

First, we recall the idea of the dynamic programming algorithm, which is necessary to explain the proof. The problem can be solved optimally by the following dynamic programming algorithm APS. This algorithm iteratively generates sets of states: at every iteration j, a set \({\mathcal {X}}_{j}\) of states is generated (\(1\le j\le \overline{n}\)). Each state \(\left[ t,f\right] \) in \({\mathcal {X}}_{j}\) is associated with a feasible schedule for the first j jobs, where t denotes the completion time of the last job scheduled before \(T_{1}\) and f is the maximum lateness of the corresponding schedule. The algorithm can be described as follows:

[Figure: pseudocode of Algorithm APS]
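The recurrence behind APS can be illustrated with the following minimal Python sketch (the function name `aps` and the job representation are ours, not the paper's; `T1` and `T2` denote the start and end of the non-availability interval, and each job is given as a pair of processing time \(p_j\) and tail \(q_j\)):

```python
def aps(jobs, T1, T2):
    """Dynamic program over states [t, f]: t is the completion time of the
    last job scheduled before T1, f is the maximum lateness so far.
    A sketch of the exact enumeration, without the UB-based state pruning."""
    states = {(0, 0)}
    total = 0  # sum of processing times of the jobs considered so far
    for p, q in jobs:
        total += p
        nxt = set()
        for t, f in states:
            # Option 1: schedule the job before the non-availability
            # interval, if it still fits before T1.
            if t + p <= T1:
                nxt.add((t + p, max(f, t + p + q)))
            # Option 2: schedule the job after the interval; it then
            # completes at T2 + (total - t), plus its tail q.
            nxt.add((t, max(f, T2 + total - t + q)))
        states = nxt
    return min(f for _, f in states)
```

For instance, with two jobs \((p,q)=(2,0)\) and \((3,0)\) and an interval \([2,5]\), the best schedule runs the first job before the interval and the second after it, for a maximum lateness of 8.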

Let UB be an upper bound on the optimal maximum lateness for problem \( \left( {\mathcal {I}}^{\prime \prime }\right) \). If we add the restriction that every state \(\left[ t,f\right] \) must satisfy \(f\le UB\), then the running time of APS can be bounded by \(\overline{n}T_{1}UB\). Indeed, t and f are integers, and at each step j we create at most \( T_{1}UB\) states to construct \({\mathcal {X}}_{j}\). Moreover, the complexity of APS is proportional to \(\sum _{k=1}^{\overline{n}}\left| {\mathcal {X}} _{k}\right| \). In the remainder of the paper, Algorithm APS denotes the version of the dynamic programming algorithm obtained by taking \(UB=\mathcal { \varphi }_{JS}\left( {\mathcal {I}}^{\prime \prime }\right) \).

The main idea of the FPTAS is to remove a part of the states generated by the algorithm. The modified algorithm \( APS_{\varepsilon }^{\prime }\) thus becomes faster and yields an approximate solution instead of the optimal schedule. The approach of modifying the execution of an exact algorithm to design an FPTAS was initially proposed by Ibarra and Kim for the knapsack problem (Ibarra and Kim 1975). During the last decades, numerous combinatorial problems have been addressed by this approach [see, for instance, Sahni (1976) and Gens and Levner (1981)]. The worst-case analysis of our FPTAS is based on the comparison of the executions of algorithms APS and \( APS_{\varepsilon }^{\prime }\), as described in the following lemma.
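The state-reduction step can be sketched as follows (our own illustration: `delta1` and `delta2` stand for the box widths \(\delta_1\) and \(\delta_2\) on the f- and t-axes, and the keep-the-smaller-t rule follows the description in Appendix 2):

```python
import math

def trim(states, delta1, delta2):
    """Partition the (t, f) plane into boxes of width delta2 (t-axis) and
    delta1 (f-axis), and keep one representative state per box, preferring
    the one with the smallest t. The result has at most one state per box,
    so its size is bounded by the number of boxes."""
    boxes = {}
    for t, f in states:
        key = (math.floor(t / delta2), math.floor(f / delta1))
        # Keep the state with the smallest t seen so far in this box.
        if key not in boxes or t < boxes[key][0]:
            boxes[key] = (t, f)
    return set(boxes.values())
```

Applying such a reduction after every iteration keeps \(\left| {\mathcal {X}}_{j}^{\#}\right| \) bounded by the number of boxes, at the cost of the bounded error quantified in Lemma 13.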

Lemma 13

For every state \(\left[ t,f\right] \) in \({\mathcal {X}}_{j}\) there exists a state \(\left[ t^{\#},f^{\#}\right] \) in \({\mathcal {X}}_{j}^{\#}\) such that:

$$\begin{aligned} t^{\#}\le t\le t^{\#}+j\delta _{2} \end{aligned}$$
(4)

and

$$\begin{aligned} f^{\#}\le f+j\max \{\delta _{1},\delta _{2}\} \end{aligned}$$
(5)

Proof

By induction on j. First, for \(j=1\) we have \({\mathcal {X}}_{1}^{\#}={\mathcal {X}}_{1}\), so the statement is trivial. Now, assume that the statement holds up to level \(j-1\), and consider an arbitrary state \(\left[ t,f\right] \) \(\in \) \({\mathcal {X}}_{j}\). Algorithm APS introduces this state into \({\mathcal {X}}_{j}\) when job j is added to some feasible state \(\left[ t^{\prime },f^{\prime } \right] \) for the first \(j-1\) jobs. Two cases can be distinguished: either \(\left[ t,f\right] =\left[ t^{\prime }+p_{j},\max \left\{ f^{\prime },t^{\prime }+p_{j}+q_{j}\right\} \right] \) (job j is scheduled before the non-availability interval) or \(\left[ t,f\right] =\left[ t^{\prime },\max \left\{ f^{\prime },T_{2}+\sum _{i=1}^{j}p_{i}-t^{\prime }+q_{j}\right\} \right] \) (job j is scheduled after it). We prove the statement for level j in both cases.

1st case: \(\left[ t,f\right] =\left[ t^{\prime }+p_{j},\max \left\{ f^{\prime },t^{\prime }+p_{j}+q_{j}\right\} \right] \). Since \(\left[ t^{\prime },f^{\prime }\right] \in {\mathcal {X}}_{j-1}\), there exists \(\left[ t^{\prime \#},f^{\prime \#}\right] \in {\mathcal {X}}_{j-1}^{\#}\) such that \(t^{\prime \#}\le t^{\prime }\le t^{\prime \#}+\left( j-1\right) \delta _{2}\) and \(f^{\prime \#}\le f^{\prime }+\left( j-1\right) \max \{\delta _{1},\delta _{2}\}\). Consequently, the state \(\left[ t^{\prime \#}+p_{j},\max \left\{ f^{\prime \#},t^{\prime \#}+p_{j}+q_{j}\right\} \right] \) is generated by Algorithm \( APS_{\varepsilon }^{\prime }\) at iteration j. However, it may be removed when reducing the state subset. Let \(\left[ \lambda ,\mu \right] \) be the state in \( {\mathcal {X}}_{j}^{\#}\) that lies in the same box as \(\left[ t^{\prime \#}+p_{j}, \max \left\{ f^{\prime \#},t^{\prime \#}+p_{j}+q_{j}\right\} \right] \). Hence, we have:

$$\begin{aligned} \lambda \le t^{\prime \#}+p_{j}\le t^{\prime }+p_{j}=t \end{aligned}$$
(6)

Moreover,

$$\begin{aligned} \lambda +\delta _{2}\ge t^{\prime \#}+p_{j}\ge t^{\prime }-\left( j-1\right) \delta _{2}+p_{j}=t-\left( j-1\right) \delta _{2} \end{aligned}$$

which implies

$$\begin{aligned} t\le \lambda +j\delta _{2} \end{aligned}$$
(7)

Finally,

$$\begin{aligned} \mu&\le \max \left\{ f^{\prime \#},t^{\prime \#}+p_{j}+q_{j}\right\} +\delta _{1} \nonumber \\&\le \max \left\{ f^{\prime }+\left( j-1\right) \max \{\delta _{1},\delta _{2}\},t^{\prime }+p_{j}+q_{j}\right\} +\delta _{1} \nonumber \\&\le \max \left\{ f^{\prime },t^{\prime }+p_{j}+q_{j}\right\} +\left( j-1\right) \max \{\delta _{1},\delta _{2}\}+\delta _{1} \nonumber \\&\le f+j\max \{\delta _{1},\delta _{2}\}. \end{aligned}$$
(8)

Consequently, the statement holds for level j in this case.

2nd case: \(\left[ t,f\right] =\left[ t^{\prime },\max \left\{ f^{\prime },T_{2}+\sum _{i=1}^{j}p_{i}-t^{\prime }+q_{j}\right\} \right] \). Since \(\left[ t^{\prime },f^{\prime }\right] \in {\mathcal {X}}_{j-1}\), there exists \(\left[ t^{\prime \#},f^{\prime \#}\right] \in {\mathcal {X}}_{j-1}^{\#}\) such that \(t^{\prime \#}\le t^{\prime }\le t^{\prime \#}+\left( j-1\right) \delta _{2}\) and \(f^{\prime \#}\le f^{\prime }+\left( j-1\right) \max \{\delta _{1},\delta _{2}\}\). Consequently, the state \(\left[ t^{\prime \#},\max \left\{ f^{\prime \#},T_{2}+\sum _{i=1}^{j}p_{i}-t^{\prime \#}+q_{j}\right\} \right] \) is generated by algorithm \(APS_{\varepsilon }^{\prime }\) at iteration j. However, it may be removed when reducing the state subset. Let \(\left[ \lambda ^{\prime },\mu ^{\prime }\right] \) be the state in \({\mathcal {X}}_{j}^{\#}\) that lies in the same box as \(\left[ t^{\prime \#},\max \left\{ f^{\prime \#},T_{2}+\sum _{i=1}^{j}p_{i}-t^{\prime \#}+q_{j}\right\} \right] \). Hence, we have:

$$\begin{aligned} \lambda ^{\prime }\le t^{\prime \#}\le t^{\prime }=t \end{aligned}$$
(9)

Moreover,

$$\begin{aligned} \lambda ^{\prime }+\delta _{2}\ge t^{\prime \#}\ge t^{\prime }-\left( j-1\right) \delta _{2}=t-\left( j-1\right) \delta _{2} \end{aligned}$$

which implies

$$\begin{aligned} t\le \lambda ^{\prime }+j\delta _{2} \end{aligned}$$
(10)

and

$$\begin{aligned} \mu ^{\prime }&\le \max \left\{ f^{\prime \#},T_{2}+\sum _{i=1}^{j}p_{i}-t^{\prime \#}+q_{j}\right\} +\delta _{1} \end{aligned}$$
(11)
$$\begin{aligned}&\le \max \left\{ f^{\prime }+\left( j-1\right) \max \{\delta _{1},\delta _{2}\},T_{2}+\sum _{i=1}^{j}p_{i}-t^{\prime }+\left( j-1\right) \delta _{2}+q_{j}\right\} +\delta _{1} \end{aligned}$$
(12)
$$\begin{aligned}&\le \max \left\{ f^{\prime },T_{2}+\sum _{i=1}^{j}p_{i}-t^{\prime }+q_{j}\right\} +\left( j-1\right) \max \{\delta _{1},\delta _{2}\}+\delta _{1} \end{aligned}$$
(13)
$$\begin{aligned}&\le f+j\max \{\delta _{1},\delta _{2}\}. \end{aligned}$$
(14)

In conclusion, the statement also holds for level j in the second case, which completes the inductive proof. \(\square \)

Now, we give the proof of Eq. (2) in Theorem 5. By definition, the optimal solution can be associated with a state \(\left[ t^{*},f^{*}\right] \) in \( {\mathcal {X}}_{\overline{n}}\). From Lemma 13, there exists a state \(\left[ t^{\#},f^{\#}\right] \) in \({\mathcal {X}}_{\overline{n}}^{\#}\) such that:

$$\begin{aligned} f^{\#}&\le f^{*}+\overline{n}\max \{\delta _{1},\delta _{2}\} \nonumber \\&=f^{*}+\overline{n}\max \left\{ \frac{\mathcal {\varphi }_{JS}\left( {\mathcal {I}} ^{\prime \prime }\right) }{\omega _{1}},\frac{T_{1}}{\omega _{2}}\right\} \nonumber \\&=f^{*}+\overline{n}\max \left\{ \frac{\mathcal {\varphi }_{JS}\left( {\mathcal {I}} ^{\prime \prime }\right) }{\left\lceil \frac{2\overline{n}}{\varepsilon }\right\rceil }, \frac{T_{1}}{\left\lceil \frac{\overline{n}}{\varepsilon }\right\rceil }\right\} \nonumber \\&\le f^{*}+\max \left\{ \varepsilon \frac{\mathcal {\varphi }_{JS}\left( {\mathcal { I}}^{\prime \prime }\right) }{2},\varepsilon T_{1}\right\} \le \left( 1+\varepsilon \right) \mathcal {\varphi }^{*}\left( {\mathcal {I}} ^{\prime \prime }\right) . \end{aligned}$$
(15)

Since \(\mathcal {\varphi }_{APS_{\varepsilon }^{\prime }}\left( {\mathcal {I}} ^{\prime \prime }\right) \le f^{\#}\), we conclude that Eq. (2) holds.

Appendix 2: Proof of Lemma 6

The first step consists in applying heuristic JS, which can be implemented in \(O\left( \overline{n}\ln \overline{n}\right) \) time. In the second step, algorithm \(APS_{\varepsilon }^{\prime }\) generates the state sets \({\mathcal {X}} _{j}^{\#}\) (\(j\in \left\{ 1,2,\ldots ,\overline{n}\right\} \)). Since \(\left| {\mathcal {X}}_{j}^{\#}\right| \le \left( \omega _{1}+1\right) \left( \omega _{2}+1\right) \), we deduce that

$$\begin{aligned} \sum _{j=1}^{\overline{n}}\left| {\mathcal {X}}_{j}^{\#}\right|&\le \overline{n}\left( \omega _{1}+1\right) \left( \omega _{2}+1\right) =\overline{n}\left( \left\lceil \frac{\overline{n}}{\varepsilon } \right\rceil +1\right) \left( \left\lceil \frac{2\overline{n}}{\varepsilon } \right\rceil +1\right) \nonumber \\&\le \overline{n}\left( \frac{\overline{n}}{\varepsilon }+2\right) \left( \frac{2\overline{n}}{\varepsilon }+2\right) . \end{aligned}$$
(16)

Note that algorithm \(APS_{\varepsilon }^{\prime }\) generates \({\mathcal {X}} _{j}^{\#}\) by associating every newly created state with its corresponding box if and only if this state has a smaller value of t (in this case, the state previously associated with this box is removed). Otherwise, the newly created state is removed immediately. This allows us to generate \( {\mathcal {X}}_{j}^{\#}\) in \(O\left( \omega _{1}\omega _{2}\right) \) time. Hence, our method can be implemented in \(O\left( \overline{n}\ln \overline{n}+ \overline{n}^{3}/\varepsilon ^{2}\right) \) time, which completes the proof.


Cite this article

Kacem, I., Kellerer, H. & Seifaddini, M. Efficient approximation schemes for the maximum lateness minimization on a single machine with a fixed operator or machine non-availability interval. J Comb Optim 32, 970–981 (2016). https://doi.org/10.1007/s10878-015-9924-4
