
Near-linear-time approximation algorithms for scheduling a batch-processing machine with setups and job rejection


Abstract

In this paper we study a single batch-processing machine scheduling model. In our model, a set of jobs with different release dates needs to be scheduled on a single machine that can process a batch of jobs simultaneously. Each batch incurs a fixed setup time and a fixed setup cost. The decision maker may reject some of the jobs, each at a penalty cost, in order to reduce the makespan, but the total rejection penalty is required to be no greater than a given value. Our model extends several existing batch-processing machine scheduling models in the literature. We present efficient approximation algorithms with near-linear time complexities.


References

  • Allahverdi, A., Gupta, J. N. D., & Aldowaisan, T. (1999). A review of scheduling research involving setup considerations. OMEGA, 27, 219–239.


  • Allahverdi, A., Ng, C. T., Cheng, T. C. E., & Kovalyov, M. Y. (2008). A survey of scheduling problems with setup times or costs. European Journal of Operational Research, 187, 985–1032.


  • Brucker, P., Gladky, A., Hoogeveen, H., Kovalyov, M. Y., Potts, C. N., & van de Velde, S. L. (1998). Scheduling a batching machine. Journal of Scheduling, 1(1), 31–54.


  • Cao, Z., & Yang, X. G. (2009). A PTAS for parallel batch scheduling with rejection and dynamic job arrivals. Theoretical Computer Science, 410, 2732–2745.


  • Cheng, T. C. E., Liu, Z. H., & Yu, W. C. (2001). Scheduling jobs with release dates and deadlines on a batching processing machine. IIE Transactions, 33, 685–690.


  • Croce, F. D., Koulamas, C., & Vincent, T. (2017). A constraint generation approach for two-machine shop problems with jobs selection. European Journal of Operational Research, 259(3), 898–905.


  • Havil, J. (2009). Gamma: Exploring Euler's Constant. Princeton: Princeton University Press.


  • He, C., Leung, J. Y.-T., Lee, K., & Pinedo, M. L. (2016a). Scheduling a single machine with parallel batching to minimize makespan and total rejection cost. Discrete Applied Mathematics, 204, 150–163.


  • He, C., Leung, J. Y.-T., Lee, K., & Pinedo, M. L. (2016b). Improved algorithms for single machine scheduling with release dates and rejections. 4OR, 14, 41–55.


  • Kellerer, H., Pferschy, U., & Pisinger, D. (2004). Knapsack problems. Berlin: Springer.


  • Lee, C.-Y., & Uzsoy, R. (1999). Minimizing makespan on a single batch processing machine with dynamic job arrivals. International Journal of Production Research, 37(1), 219–236.


  • Lee, C.-Y., Uzsoy, R., & Martin-Vega, L. A. (1992). Efficient algorithms for scheduling batch processing machines. Operations Research, 40(4), 764–775.


  • Liu, Z. H., & Yu, W. C. (2000). Scheduling one batch processor subject to job release dates. Discrete Applied Mathematics, 105, 129–136.


  • Liu, Z. H., Yuan, J. J., & Cheng, T. C. E. (2003). On scheduling an unbounded parallel batch machine. Operations Research Letters, 31, 42–48.


  • Lu, L. F., Cheng, T. C. E., Yuan, J. J., & Zhang, L. Q. (2009). Bounded single-machine parallel-batch scheduling with release dates and rejection. Computers and Operations Research, 36, 2748–2751.


  • Lu, L. F., Zhang, L. Q., & Yuan, J. J. (2010). The unbounded parallel batch machine scheduling with release dates and rejection to minimize makespan. Theoretical Computer Science, 396, 283–289.


  • Ou, J. W., & Zhong, X. L. (2017). Bicriteria order acceptance and scheduling with consideration of fill rate. European Journal of Operational Research, 263(3), 904–907.


  • Ou, J. W., Li, C.-L., & Zhong, X. L. (2016). Faster algorithms for single machine scheduling with release dates and rejection. Information Processing Letters, 116, 503–507.


  • Potts, C., & Kovalyov, M. (2000). Scheduling with batching: A review. European Journal of Operational Research, 120(2), 228–249.


  • Shabtay, D. (2014). The single machine serial batch scheduling problem with rejection to minimize total completion time and total rejection cost. European Journal of Operational Research, 233(1), 64–74.


  • Shabtay, D., Gaspar, N., & Kaspi, M. (2013). A survey on offline scheduling with rejection. Journal of Scheduling, 16(1), 3–28.


  • Slotnick, S. A. (2011). Order acceptance and scheduling: A taxonomy and review. European Journal of Operational Research, 212(1), 1–11.


  • Webster, S., & Baker, K. R. (1995). Scheduling groups of jobs on a single machine. Operations Research, 43, 692–703.


  • Zhang, L. Q., Lu, L. F., & Yuan, J. J. (2009). Single machine scheduling with release dates and rejection. European Journal of Operational Research, 198(3), 975–978.


  • Zhang, L. Q., Lu, L. F., & Ng, C. T. (2012). The unbounded parallel-batch scheduling with rejection. Journal of the Operational Research Society, 63(3), 293–298.



Acknowledgements

The author thanks two anonymous referees for their excellent suggestions and comments on how to improve the presentation of the paper. This research was supported in part by NSFC 71101064.

Author information

Corresponding author

Correspondence to Jinwen Ou.


Appendix


Proof of Theorem 1

Consider \(\mathcal{A}(\sigma ^*)\), the set of jobs that are accepted in the optimal solution \(\sigma ^*\). If \(\mathcal{A}(\sigma ^*)=\emptyset \), then \(z^*=(1-\alpha )w(\mathcal{J})=z_0\), which indicates that \(\mathbf{H}_1\) generates an optimal solution (note that \(Q_0=R_0-w(\mathcal{J})\ge 0\) must hold in this case). In the remainder of the proof we only consider the case \(\mathcal{A}(\sigma ^*)\ne \emptyset \).

Let \(\beta \in \{1,2,\ldots ,n\}\) be the largest job index among all jobs in \(\mathcal{A}(\sigma ^*)\). Then, all jobs in set \(\mathcal{J}\setminus \{J_{1},\ldots ,J_\beta \}\) are rejected in \(\sigma ^*\), and inequality \(w(\mathcal{J})-\sum _{j=1}^\beta w_j\le R_0\) holds. Note that \(W_i=w(\mathcal{J})-\sum _{j=1}^i w_j\) for \(i=1,2,\ldots ,n\). Hence, we must have \(Q_\beta =R_0- W_\beta =R_0-w(\mathcal{J})+\sum _{j=1}^\beta w_j\ge 0\). Observe that in \(\sigma ^*\) the makespan of the accepted jobs is no less than \(r_\beta +p_\beta \), and the total rejection penalty of the rejected jobs is no less than \(W_\beta \). This indicates that

$$\begin{aligned} z^*\ge \alpha (r_\beta +p_\beta )+(1-\alpha )(c+W_\beta )=z_\beta . \end{aligned}$$
(12)

Let \(\sigma _\eta \) denote the final solution determined in Step 3. By assumption, we have \(1\le \eta \le n\). According to the definition of \(\eta \), on the one hand, we must have \(Q_\eta =R_0-W_\eta \ge 0\), which indicates that it is feasible if all jobs in \(\mathcal{J}\setminus \{J_{1},\ldots ,J_\eta \}\) are rejected. On the other hand, we have

$$\begin{aligned}&\alpha (r_\eta +p_\eta )+(1-\alpha )(c+W_\eta )= z_\eta \nonumber \\&\quad =\min _{i=0,1,\ldots ,n}\{z_i\mid Q_i\ge 0\}\le z_\beta . \end{aligned}$$
(13)

Recall that \(r_1+p_1\le r_2+p_2 \le \cdots \le r_n+p_n\). We thus have

$$\begin{aligned} \max _{i=1,2,\ldots ,\eta }\{p_i\}\le \max _{i=1,2,\ldots ,\eta }\{r_i+p_i\}=r_\eta +p_\eta \end{aligned}$$
(14)

and

$$\begin{aligned} \max _{i=1,2,\ldots ,\eta }\{r_i\}\le \max _{i=1,2,\ldots ,\eta }\{r_i+p_i\}=r_\eta +p_\eta . \end{aligned}$$
(15)

By the construction of \(\sigma _\eta \), the unique job batch starts to be processed at time \(\max _{1\le i \le \eta }\{r_i\}\), and thus, its value

$$\begin{aligned} z(\sigma _\eta )&=\alpha \left( \max _{i=1,2,\ldots ,\eta }\{r_i\}\right. \nonumber \\&\quad \left. +\max _{i=1,2,\ldots ,\eta }\{p_i\} \right) +(1-\alpha )(c+W_\eta ). \end{aligned}$$
(16)

By (16), (15), (14), (13) and (12), we have

$$\begin{aligned} z(\sigma _\eta )\le & {} \alpha [(r_\eta +p_\eta )+(r_\eta +p_\eta )]+(1-\alpha )(c+W_\eta ) \\= & {} z_\eta +\alpha (r_\eta +p_\eta )\le 2z_\eta \le 2z_\beta \le 2z^*. \end{aligned}$$

Hence, \(\mathbf{H}_1\) is a 2-approximation to problem \(\mathcal{P}\).

Consider the time complexity of \(\mathbf{H}_1\). Step 1 takes \(O(n\log n)\) time, and each of Steps 2 and 3 takes only \(O(n)\) time. Hence, the time complexity of \(\mathbf{H}_1\) is \(O(n\log n)\).
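
For illustration, the following Python sketch reconstructs \(\mathbf{H}_1\) from the quantities used in this proof (the ordering \(r_1+p_1\le \cdots \le r_n+p_n\), the suffix weights \(W_i\), the budgets \(Q_i=R_0-W_i\) and the candidate values \(z_i\)). The function name and interface are illustrative; the batch setup time does not appear in the expressions above and is therefore omitted from the sketch.

```python
from math import inf

def h1_sketch(jobs, R0, alpha, c):
    """A sketch of heuristic H_1 as described in the proof of Theorem 1.
    `jobs` is a list of triples (r_j, p_j, w_j); returns (value, eta)."""
    # Step 1: index the jobs so that r_1 + p_1 <= ... <= r_n + p_n.
    jobs = sorted(jobs, key=lambda job: job[0] + job[1])
    w_total = sum(w for _, _, w in jobs)

    # Step 2: evaluate z_i for every i with Q_i = R_0 - W_i >= 0, where
    # W_i is the total weight of the rejected suffix {J_{i+1}, ..., J_n}.
    best_z, eta = inf, None
    if R0 - w_total >= 0:                       # i = 0: reject every job
        best_z, eta = (1 - alpha) * w_total, 0
    accepted_w = 0.0
    for i, (r, p, w) in enumerate(jobs, start=1):
        accepted_w += w
        W_i = w_total - accepted_w
        if R0 - W_i >= 0:                       # Q_i >= 0
            z_i = alpha * (r + p) + (1 - alpha) * (c + W_i)
            if z_i < best_z:
                best_z, eta = z_i, i

    # Step 3: accept J_1, ..., J_eta as a single batch started at
    # max_{i <= eta} r_i; all remaining jobs are rejected.
    if eta is None:
        return None, None                       # no feasible rejection set
    if eta == 0:
        return (1 - alpha) * w_total, 0
    batch = jobs[:eta]
    makespan = max(r for r, _, _ in batch) + max(p for _, p, _ in batch)
    W_eta = w_total - sum(w for _, _, w in batch)
    return alpha * makespan + (1 - alpha) * (c + W_eta), eta
```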

Finally, we show that the bound is tight. Consider the following instance with two jobs: \((r_1, p_1,w_1) = (0, 2, 1)\), \((r_2, p_2,w_2) = (2, 0, 2)\), where \(R_0=1\), \(\alpha =0.5\) and \(c=0\). Then, \(r_1+p_1=r_2+p_2=2\), \(w_1+w_2 = 3\), \(Q_0=-2\), \(Q_1=-1\) and \(Q_2=1\). Hence, \(\eta =2\), and the solution determined by \(\mathbf{H}_1\) is as follows: both \(J_1\) and \(J_2\) are accepted and processed as a single batch starting at time 2. The value of this solution equals \(0.5\times (2+2)=2\). In the optimal schedule, however, both \(J_1\) and \(J_2\) are accepted, \(J_1\) starts its processing at time 0, and \(J_2\) starts its processing at time 2 (i.e., the optimal solution contains two batches, each containing exactly one job). The value of the optimal solution equals \(0.5\times 2=1\). Hence, the bound is tight. This completes the proof. \(\square \)
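
Running the sketch above on this tight instance confirms the factor of 2: \(\mathbf{H}_1\) returns the single-batch solution of value 2, whereas the optimal value is 1.

```python
jobs = [(0, 2, 1), (2, 0, 2)]                  # (r_j, p_j, w_j) as in the instance
print(h1_sketch(jobs, R0=1, alpha=0.5, c=0))   # prints (2.0, 2); the optimum is 1
```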

Proof of Theorem 2

Consider \(\mathcal{A}(\sigma ^*)\), the set of accepted jobs in \(\sigma ^*\). If \(\mathcal{A}(\sigma ^*)=\emptyset \), then the optimal solution value \(z^*=z_0\), which can be obtained by rejecting all the jobs. We only need to consider the case when \(\mathcal{A}(\sigma ^*)\ne \emptyset \). According to Step 1 of \(\mathbf{H}_1\), we have \(r_1\le r_2 \le \cdots \le r_n\) if \(p_1=p_2=\cdots =p_n\), and \(p_1\le p_2 \le \cdots \le p_n\) if \(r_1=r_2=\cdots =r_n\). In either case, let \(\tau \) denote the largest job index among all jobs in \(\mathcal{A}(\sigma ^*)\). Then there is an optimal solution of the following form: all jobs in \(\{J_j\in \mathcal{J}\mid j\le \tau \}\) are accepted and processed as a single batch starting at time \(r_\tau \), while all other jobs are rejected; its value equals \(\alpha (r_\tau +p_\tau )+(1-\alpha )(c+W_{\tau })= z_\tau \), so \(z^*=z_\tau \) and \(Q_\tau \ge 0\). Recall that in \(\mathbf{H}_1\) we have \(z_i=\alpha (r_i+p_i)+(1-\alpha )(c+W_{i})\) for \(i=1,2,\ldots ,n\), and hence \(z_\eta =\min _{i=0,1,\ldots ,n}\{z_i\mid Q_i\ge 0\}\le z_\tau =z^*\). Moreover, if \(\eta \ge 1\), then in either special case \(\max _{i=1,\ldots ,\eta }\{r_i\}=r_\eta \) and \(\max _{i=1,\ldots ,\eta }\{p_i\}=p_\eta \), so the solution \(\sigma _\eta \) generated by \(\mathbf{H}_1\) has value exactly \(z_\eta \le z^*\) (this is trivially true when \(\eta =0\)). Hence, \(\mathbf{H}_1\) generates an optimal solution to \(\mathcal{P}\) when either \(p_1=p_2=\cdots =p_n\) or \(r_1=r_2=\cdots =r_n\). \(\square \)

Proof of Lemma 1

Note that the LPT-rule holds for the classical unbounded parallel-batch single-machine scheduling problem \(B\mid b= +\infty , r_j\mid C_{\max }\) (see Lee and Uzsoy 1999). Consider \(\mathcal{A}(\sigma ^*)\), the set of accepted jobs in the optimal solution \(\sigma ^*\). Clearly, in the schedule for the jobs in \(\mathcal{A}(\sigma ^*)\), the fixed batch setup cost c does not affect the optimality of the LPT-rule. Hence, the result holds. \(\square \)

Proof of Lemma 2

Note that \(s^*_1\le t_1\Delta \) and \(s^*_2-s^*_1\le (t_2-t_1+1)\Delta \). As a result, we have

$$\begin{aligned} B^*_1\subseteq & {} \{J_j\in \mathcal{J}\mid r_j\le s^*_1, p_j\le s^*_2-s^*_1\}\\\subseteq & {} \{J_j\in \mathcal{J}\mid r_j\le t_1\Delta , p_j\le (t_2-t_1+1)\Delta \}=H_1. \end{aligned}$$

It thus remains to show that \(\cup _{\tau =1}^i H_\tau \supseteq \cup _{\tau =1}^i B^*_\tau \) for \(i=2,\ldots ,\rho \) when \(\rho \ge 2\).

Suppose that the inclusion \(\cup _{\tau =1}^{\rho '-1} H_\tau \supseteq \cup _{\tau =1}^{\rho '-1} B^*_\tau \) has already been established for some \(\rho '\in \{2,\ldots ,\rho \}\); we now prove that \(\cup _{\tau =1}^{\rho '} H_\tau \supseteq \cup _{\tau =1}^{\rho '} B^*_\tau \). By the induction hypothesis, it suffices to show that \(\cup _{\tau =1}^{\rho '} H_\tau \supseteq B^*_{\rho '}\). By definition, we have

$$\begin{aligned} H_{\rho '}= \{J_j\in \mathcal{J}\setminus (\cup _{\tau =1}^{\rho '-1} H_\tau ) \mid r_j\le t_{\rho '}\Delta , p_j\le (t_{\rho '+1}-t_{\rho '}+1)\Delta \} \end{aligned}$$

and

$$\begin{aligned} B^*_{\rho '}\subseteq \{J_j\in \mathcal{J}\mid r_j\le s^*_{\rho '}, p_j\le s^*_{\rho '+1}-s^*_{\rho '}\}. \end{aligned}$$

Note that \(s^*_{\rho '}\le t_{\rho '}\Delta \) and \(s^*_{\rho '+1}-s^*_{\rho '}\le (t_{\rho '+1}-t_{\rho '}+1)\Delta \). As a result,

$$\begin{aligned} \cup _{i=1}^{\rho '} H_i\supseteq & {} \{J_j\in \mathcal{J}\mid r_j\le t_{\rho '}\Delta , p_j\le (t_{\rho '+1}-t_{\rho '}+1)\Delta \}\\\supseteq & {} \{J_j\in \mathcal{J}\mid r_j\le s^*_{\rho '}, p_j\le s^*_{\rho '+1}-s^*_{\rho '}\}. \end{aligned}$$

We thus have \(\cup _{\tau =1}^{\rho '} H_\tau \supseteq B^*_{\rho '}\). The result holds. \(\square \)
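
For illustration, the greedy construction of the batch families \(H_1,\ldots ,H_\rho \) used in this lemma can be sketched as follows; it simply transcribes the definition of \(H_{\rho '}\) displayed above. The breakpoints \(t_1,\ldots ,t_{\rho +1}\) and the grid width \(\Delta \) are produced by the main algorithm, which is not reproduced in this appendix, so they appear here as plain inputs.

```python
def build_batches(jobs, t, delta):
    """Greedy families H_1, ..., H_rho: H_tau collects the still-unassigned
    jobs J_j with r_j <= t_tau * delta and p_j <= (t_{tau+1} - t_tau + 1) * delta.
    `jobs` maps a job id to (r_j, p_j); `t` is the list [t_1, ..., t_{rho+1}]."""
    remaining = dict(jobs)
    batches = []
    for tau in range(len(t) - 1):
        H_tau = [j for j, (r, p) in remaining.items()
                 if r <= t[tau] * delta
                 and p <= (t[tau + 1] - t[tau] + 1) * delta]
        for j in H_tau:
            del remaining[j]
        batches.append(H_tau)
    return batches
```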

Proof of Lemma 3

By the construction of \(\sigma _\rho \), the starting time of the first batch \(H_1\) is no later than \(t_1\Delta \), and the setup time plus the processing time of batch \(H_1\) is no greater than \((t_2-t_1+1)\Delta \). As a result, the completion time of \(H_1\) is no greater than \(t_1\Delta +(t_2-t_1+1)\Delta =(t_2+1)\Delta \). Hence, the result holds when \(\rho =1\). We consider the case when \(\rho \ge 2\) in the following.

Suppose that, for some \(i\) with \(2\le i\le \rho \), the completion time of the \((i-1)\)-th batch \(H_{i-1}\) is no greater than \((t_{i}+i-1)\Delta \); we now prove that the completion time of the \(i\)-th batch \(H_i\) is no greater than \((t_{i+1}+i)\Delta \). Note that by (5), we have \(r_j\le t_{i}\Delta \le (t_{i}+i-1)\Delta \) for any \(J_j\in H_i\), which indicates that the starting time of \(H_{i}\) is no later than \((t_{i}+i-1)\Delta \). By (4), the setup time plus the processing time of batch \(H_{i}\) is no greater than \((t_{i+1}-t_{i}+1)\Delta \). As a result, the completion time of \(H_{i}\) is no greater than \((t_{i}+i-1)\Delta +(t_{i+1}-t_{i}+1)\Delta =(t_{i+1}+i)\Delta \). This completes the proof. \(\square \)
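
The early-start schedule analysed in this induction can be simulated with the following small helper (a sketch; \(s\) denotes the fixed batch setup time, and the batches may come, for example, from the build_batches sketch above). The bound just proved says that, for the batches \(H_1,\ldots ,H_\rho \), the \(i\)-th of these completion times never exceeds \((t_{i+1}+i)\Delta \).

```python
def batch_completion_times(batches, jobs, s):
    """Completion times of the batches processed in the given order, each
    starting as early as possible: no earlier than the previous batch's
    completion and no earlier than the largest release date in the batch."""
    completions, finish = [], 0.0
    for batch in batches:
        if not batch:                    # empty batches are skipped here
            completions.append(finish)
            continue
        start = max([finish] + [jobs[j][0] for j in batch])
        finish = start + s + max(jobs[j][1] for j in batch)
        completions.append(finish)
    return completions
```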

Proof of Lemma 5

By definition, we have \(H'_1=H_1\),

$$\begin{aligned} H'_2=\{J_j\in \mathcal{J}\setminus H'_1\mid r_j\le t_2\Delta , p_j\le (t_{\rho +1}-t_2+1)\Delta \}, \end{aligned}$$

and

$$\begin{aligned} H_2= \{J_j\in \mathcal{J}\setminus H_1\mid r_j\le t_2\Delta , p_j\le (t_3-t_2+1)\Delta \}. \end{aligned}$$

As \(t_3\le t_{\rho +1}\), we have \((\cup _{\tau =1}^{2}H'_\tau )\supseteq H_{2}\). By Lemma 2, we have \((\cup _{\tau =1}^{2} H'_\tau )\supseteq (\cup _{\tau =1}^{2} H_\tau ) \supseteq (\cup _{\tau =1}^{2} B^*_\tau ).\) Note that \(\mathcal{A}(\sigma ^*)=\cup _{i=1}^\rho B_i^*\). Hence, to prove the result, we only need to show \((\cup _{\tau =1}^{3}H'_\tau )\supseteq \cup _{i=3}^\rho B_i^*\). By definition again,

$$\begin{aligned} H'_3=\{J_j\in \mathcal{J}\setminus (\cup _{i=1}^{2} H'_i)\mid r_j\le t_{\rho +1}\Delta , p_j\le \frac{1}{3}t_{\rho +1}\Delta \}. \end{aligned}$$

One important observation is that for any \(J_j\in \cup _{i=3}^\rho B_i^*\), we must have

$$\begin{aligned} r_j\le C_{\max }(\sigma ^*)\le t_{\rho +1}\Delta \end{aligned}$$

and

$$\begin{aligned} s+p_j\le \frac{1}{3}C_{\max }(\sigma ^*)\le \frac{1}{3}t_{\rho +1}\Delta \end{aligned}$$

(otherwise, by the LPT-rule, the total setup and processing time of the first three batches in \(\sigma ^*\) would exceed \(C_{\max }(\sigma ^*)\), which is a contradiction). This indicates that

$$\begin{aligned} (\cup _{i=3}^\rho B_i^*)\subseteq & {} \{J_j\in \mathcal{J}\mid r_j\le t_{\rho +1}\Delta , p_j\le \frac{1}{3}t_{\rho +1}\Delta \}\\\subseteq & {} (\cup _{\tau =1}^{3}H'_\tau ). \end{aligned}$$

The result holds. \(\square \)

Proof of Lemma 6

Note that \(H'_1=H_1\). By Lemma 3, the completion time of \(H'_1\) is bounded by \((t_{2}+1)\Delta \). By definition of \(H'_{2}\), on the one hand, \(r_j\le t_{2}\Delta \le (t_{2}+1)\Delta \) for any \(J_j\in H'_{2}\), which indicates that the starting time of \(H'_{2}\) is no later than \((t_{2}+1)\Delta \). On the other hand, the processing time of \(H'_{2}\) is no greater than \((t_{\rho +1}-t_{2}+1)\Delta \). Hence, the completion time of \(H'_{2}\) is no greater than \((t_{2}+1)\Delta +(t_{\rho +1}-t_{2}+1)\Delta =(t_{\rho +1}+2)\Delta \).

By definition of \(H'_{3}\), on the one hand, \(r_j\le t_{\rho +1}\Delta \le (t_{\rho +1}+2)\Delta \) for any \(J_j\in H'_{3}\), which indicates that the starting time of \(H'_{3}\) is no later than \((t_{\rho +1}+2)\Delta \). On the other hand, the processing time of batch \(H'_{3}\) is no greater than \(\frac{1}{3}t_{\rho +1}\Delta \). As a result, \(C_{\max }({\hat{\sigma }}_2)\), the completion time of \(H'_{3}\), is no greater than \((t_{\rho +1}+2)\Delta +\frac{1}{3}t_{\rho +1}\Delta =\frac{4}{3}t_{\rho +1}\Delta +2\Delta \). Note that

$$\begin{aligned} (t_{\rho +1}-1)\Delta \le C_{\max }(\sigma ^*) \end{aligned}$$

and

$$\begin{aligned} \frac{10}{3}\Delta =\frac{10\epsilon \cdot Z_0}{21\alpha } \le \frac{10\epsilon \cdot (2z^*)}{21\alpha } \le \frac{z^*\epsilon }{\alpha }. \end{aligned}$$

Hence,

$$\begin{aligned} C_{\max }({\hat{\sigma }}_2)&\le \frac{4}{3}t_{\rho +1}\Delta +2\Delta \le \frac{4}{3}(C_{\max }(\sigma ^*)+\Delta )+2\Delta \\&= \frac{4}{3} C_{\max }(\sigma ^*)+ \frac{10}{3} \Delta \le \frac{4}{3} C_{\max }(\sigma ^*)+\frac{z^*\epsilon }{\alpha }. \end{aligned}$$

The result holds. \(\square \)

Proof of Lemma 8

After our job-combination method is applied, there are no more than \(T-2\) jobs remaining in \(\mathcal{S}\), and no more than \((T-2)\cdot \zeta \) jobs remaining in \(\mathcal{L}\). Hence, \(|\mathcal{J}'|\le (T-2)\cdot \zeta +(T-2)\). To prove the result, we only need to show that \(\zeta =O(T\log T)\). By definition,

$$\begin{aligned} \zeta \le \sum _{i=1}^{T-2}\left\lceil \frac{T}{i}\right\rceil \le \sum _{i=1}^{T-2}\left( 1+\frac{T}{i} \right) =T-2+T\sum _{i=1}^{T-2} \frac{1}{i}. \end{aligned}$$

By the standard bound on the partial sums of the harmonic series, we have

$$\begin{aligned} \sum _{i=1}^{T-2} \frac{1}{i} \le \ln (T-1) +\gamma , \end{aligned}$$

where \(\gamma \approx 0.5772156649\) is the Euler–Mascheroni constant, which is no greater than 1 (see Havil 2009). As a result,

$$\begin{aligned} \zeta\le & {} T-2+T\cdot [\ln (T-1) +1] = 2T-2+T\ln (T-1) \\= & {} O(T\ln T)=O(T\log T). \end{aligned}$$

The result holds. \(\square \)
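
As a quick numeric sanity check of this bound (the helper name is illustrative), one can compare \(\sum _{i=1}^{T-2}\lceil T/i\rceil \) with \(2T-2+T\ln (T-1)\) for a few values of \(T\):

```python
import math

def zeta_bound_check(T):
    """Left- and right-hand sides of the bound used in the proof of Lemma 8."""
    lhs = sum(math.ceil(T / i) for i in range(1, T - 1))   # i = 1, ..., T-2
    rhs = 2 * T - 2 + T * math.log(T - 1)
    return lhs, rhs

for T in (5, 50, 500, 5000):
    lhs, rhs = zeta_bound_check(T)
    assert lhs <= rhs, (T, lhs, rhs)
```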

Proof of Lemma 10

Let \(J_\eta \) be any job in \(B'_\mu \setminus \{J_{t_\mu }\}\). By definition, \(p_{t_\mu }\ge p_\eta \). If \(J_{t_\mu }\in \mathcal{S}\), then \({\tilde{p}}_{t_\mu }={\tilde{p}}_\eta =0\) and the result holds. Thus we only need to consider the case when \(J_{t_\mu }\in \mathcal{L}\). Assume that \(J_{t_\mu }\in L_{i}^\kappa \) for some integers \(i\in \{1,2,\ldots ,T-2\}\) and \(\kappa \in \{1,2,\ldots ,\left\lceil T/i\right\rceil \}\). If job \(J_\eta \) is also within set \(L_{i}^\kappa \), then \({\tilde{p}}_{t_\mu }={\tilde{p}}_\eta =i\Delta _1+(\kappa -1)\frac{i}{T}\Delta _1\) and the result holds. If job \(J_\eta \) is within set \(L_{i}^{\kappa '}\) with \(1\le \kappa '<\kappa \), then \({\tilde{p}}_\eta =i\Delta _1+(\kappa '-1)\frac{i}{T}\Delta _1< i\Delta _1+(\kappa -1)\frac{i}{T}\Delta _1={\tilde{p}}_{t_\mu }\) and the result holds. If job \(J_\eta \) is within set \(L_{i'}\) with \(1\le i'<i\), then \({\tilde{p}}_\eta \le p_\eta \le (i'+1)\Delta _1 \le i\Delta _1\le {\tilde{p}}_{t_\mu }\), and the result still holds. Finally, if \(J_\eta \in \mathcal{S}\), then \({\tilde{p}}_\eta =0\le {\tilde{p}}_{t_\mu }\). This completes the proof. \(\square \)
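
The rounding scheme behind this case analysis can be sketched as follows. The code assumes, as suggested by the definitions of \(\mathcal{S}\), \(L_i\) and \(L_i^\kappa \) used here and in Lemma 11, that jobs with \(p_j\le \Delta _1\) form the class \(\mathcal{S}\) and are rounded down to 0, while a job of \(L_i\) is rounded down to the left endpoint of its subinterval \(L_i^\kappa \) of width \((i/T)\Delta _1\); the function name is illustrative.

```python
import math

def scaled_processing_time(p, delta1, T):
    """Rounded processing time p~ (a sketch of the scheme used in Lemmas 10-11)."""
    if p <= delta1:
        return 0.0                      # class S: rounded down to zero
    i = math.ceil(p / delta1) - 1       # class L_i: i*delta1 < p <= (i+1)*delta1
    width = i * delta1 / T              # subinterval width inside L_i
    kappa = math.ceil((p - i * delta1) / width)
    return i * delta1 + (kappa - 1) * width
```

Under these assumptions, rounding down gives \({\tilde{p}}_j\le p_j\), the rounding error is at most \((i/T)\Delta _1\) for a job of \(L_i\) and at most \(\Delta _1\) overall (inequalities (20) and (17)), and the rounded value is non-decreasing in \(p_j\), which is the monotonicity property established above.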

Proof of Lemma 11

According to our job partition method, for any \(J_j\in \mathcal{J}\), we always have

$$\begin{aligned} p_j-{\tilde{p}}_j\le \Delta _1=\frac{\epsilon Z_0}{5\alpha } \le \frac{2\epsilon z^*}{5\alpha }. \end{aligned}$$
(17)

Hence, the result holds for the case when \(\ell =1\). We only need to consider the case when \(\ell >1\) in the following analysis.

Remember that \(\{J_{t_1},\ldots ,J_{t_{\ell -1}}\}\subseteq \mathcal{L}\), and that \(L_i=\{J_j\in \mathcal{L}\mid i\Delta _1< p_j\le (i+1)\Delta _1\}\) for \(i=1,2,\ldots ,T-2\). Let \(\Phi =\{\tilde{p}_{t_1},{\tilde{p}}_{t_2},\ldots ,{\tilde{p}}_{t_{\ell -1}}\}\). For \(i=1,2,\ldots ,T-2\), let \(\varphi _i\) be the set of distinct scaled processing times of the jobs in \(L_i\), and let \(v_i=|\varphi _i\cap \Phi |\) be the number of distinct scaled processing times of the jobs in \(L_i\) that are included within \(\Phi \). According to our job partition method, we must have

$$\begin{aligned} \sum _{\mu =1}^{\ell -1} {\tilde{p}}_{t_\mu }\ge \sum _{i=1}^{T-2}v_i\cdot i\Delta _1 \end{aligned}$$
(18)

and

$$\begin{aligned} \sum _{\mu =1}^{\ell -1} p_{t_\mu }\le \sum _{i=1}^{T-2}v_i\cdot (i+1)\Delta _1. \end{aligned}$$
(19)

Furthermore, for any \(J_j\in L_i\), we have

$$\begin{aligned} p_j-{\tilde{p}}_j\le \frac{i}{T}\Delta _1. \end{aligned}$$
(20)

As a result, by (18), (19) and (20),

$$\begin{aligned} \sum _{\mu =1}^{\ell -1} (p_{t_\mu }-{\tilde{p}}_{t_\mu })&\le \sum _{i=1}^{T-2}v_i\cdot (\frac{i}{T}\Delta _1) =\frac{1}{T}\sum _{i=1}^{T-2}v_i\cdot i \Delta _1 \nonumber \\&\le \frac{1}{T}\sum _{\mu =1}^\ell {\tilde{p}}_{t_\mu }. \end{aligned}$$
(21)

By Lemma 9 and equality \(C_{\max }({\tilde{\sigma }}) =C_{\max }(\pi ^*)\), we have

$$\begin{aligned} \sum _{\mu =1}^{\ell } {\tilde{p}}_{t_\mu }\le C_{\max }({\tilde{\sigma }}) =C_{\max }(\pi ^*)\le \frac{1}{\alpha }z(\pi ^*)\le \frac{1}{\alpha }z^*. \end{aligned}$$
(22)

Hence, by (17), (21) and (22), we have

$$\begin{aligned} \sum _{\mu =1}^{\ell } (p_{t_\mu }-{\tilde{p}}_{t_\mu })&= (p_{t_\ell }-\tilde{p}_{t_\ell })+\sum _{\mu =1}^{\ell -1} (p_{t_\mu }-{\tilde{p}}_{t_\mu })\nonumber \\&\le \frac{2\epsilon z^*}{5\alpha }+\frac{z^*}{T\alpha } =\frac{3\epsilon }{5\alpha }z^*. \end{aligned}$$
(23)

This completes the proof. \(\square \)

Proof of Lemma 12

Based on the schedule of \({\tilde{\sigma }}\), we first replace \({\tilde{p}}_j\) by \(p_j\) for \(j=1,2,\ldots ,n\), and let batches \(B'_1,\ldots ,B'_\ell \) start processing as early as possible following this order. After doing so, by Lemma 11, the makespan of the accepted jobs increases by no more than \(\frac{3\epsilon z^*}{5\alpha }\). Then, we further replace \(\tilde{r}_j\) by \(r_j\) for \(j=1,2,\ldots ,n\), and let batches \(B'_1,\ldots ,B'_\ell \) start processing as early as possible following this order. After doing so, the makespan of the accepted jobs further increases by no more than \(\Delta _1=\frac{\epsilon Z_0}{5\alpha }\le \frac{2\epsilon z^*}{5\alpha }\). Hence, the total increment of the makespan of the accepted jobs in \(\sigma _f\) is bounded by \(\frac{3\epsilon z^*}{5\alpha }+ \frac{2\epsilon z^*}{5\alpha } =\frac{\epsilon z^*}{\alpha }\). This completes the proof. \(\square \)


Cite this article

Ou, J. Near-linear-time approximation algorithms for scheduling a batch-processing machine with setups and job rejection. J Sched 23, 525–538 (2020). https://doi.org/10.1007/s10951-020-00657-4
