
A unified view of parallel machine scheduling with interdependent processing rates

Journal of Scheduling

Abstract

In this paper, we consider the problem of scheduling n non-preemptive jobs on m machines whose processing rates are interdependent. Over the last several decades, a number of related models for parallel machine scheduling with interdependent processing rates (PMS-IPR) have appeared in the scheduling literature, some of which have been studied independently of one another. The purpose of this paper is to present two general PMS-IPR models that capture the essence of many of these existing models. Several new complexity results are presented, and improvements on some existing models are discussed. Furthermore, for an extension of the two related PMS-IPR models that encompasses many resource-constrained models with controllable processing times, we propose an efficient dynamic programming procedure that solves the problem to optimality.


Acknowledgements

The authors would like to thank the associate editor and the anonymous referee for their constructive and helpful comments on the initial version of this paper. Their careful reading has helped us produce an improved paper.

Corresponding author

Correspondence to Haibo Wang.

Appendix

Example A.1

Consider a 3-job, 3-machine scheduling problem as follows. Let the basic processing time of each job (the processing time if a single machine is assigned to the job) be \( d_{1} = 2, d_{2} = 3, d_{3} = 5 \). The scheduling objective is to minimize the makespan. The following cases are considered.

Case 1 Consider scheduling Model 1 with processing rate \( g_{i} (Y(t)) = Y(t) \) for \( Y(t) \in \{ 1,2,3\} \), f(1) = 1 and \( i \in \{ 1,2,3\} \). Note that this is exactly the model presented by Adiri and Yehudai (1987), in which the demand rate is 1/Y(t). The optimal makespan is \( \tilde{D}_{3} = 10 \), independent of the schedule of jobs on machines (single- or multiple-machine scheduling). For example, assign job i to machine i, as illustrated in the following scenario.

$$ {\text{Machine 1:}}\;\;\;\tilde{D}_{1} = \mathop \smallint \limits_{0}^{2} 3{\text{d}}u = 6 $$
$$ {\text{Machine 2:}}\;\;\;\tilde{D}_{2} = \mathop \smallint \limits_{0}^{2} 3{\text{d}}u + \mathop \smallint \limits_{2}^{3} 2{\text{d}}u = 8 $$
$$ {\text{Machine 3:}}\;\;\tilde{D}_{3} = \mathop \smallint \limits_{0}^{2} 3{\text{d}}u + \mathop \smallint \limits_{2}^{3} 2{\text{d}}u + \mathop \smallint \limits_{3}^{5} 1{\text{d}}u = 10. $$

Case 2 Consider scheduling Model 1 with processing rate \( g_{i} (Y(t)) = 1/Y(t) \) for \( Y(t) \in \{ 1,2,3\} \), f(1) = 1 and \( i \in \{ 1,2,3\} \). The optimal makespan is \( \tilde{D}_{3} = 19/6 \), achieved by assigning job i to machine i, as illustrated below.

$$ {\text{Machine 1:}}\;\;\;\tilde{D}_{1} = \mathop \smallint \limits_{0}^{2} \left( {\frac{1}{3}} \right){\text{d}}u = 2 / 3 $$
$$ {\text{Machine 2:}}\;\;\;\tilde{D}_{2} = \mathop \smallint \limits_{0}^{2} \left( {\frac{1}{3}} \right){\text{d}}u + \mathop \smallint \limits_{2}^{3} \left( {\frac{1}{2}} \right){\text{d}}u = 7 / 6 $$
$$ {\text{Machine 3:}}\;\;\;\tilde{D}_{3} = \mathop \smallint \limits_{0}^{2} \left( {\frac{1}{3}} \right){\text{d}}u + \mathop \smallint \limits_{2}^{3} \left( {\frac{1}{2}} \right){\text{d}}u + \mathop \smallint \limits_{3}^{5} 1{\text{d}}u = 19/6. $$
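The integrals in Cases 1 and 2 are running sums of a piecewise-constant rate over the intervals [0, 2], [2, 3], and [3, 5]. A minimal arithmetic check of both cases (the helper name below is ours, not the paper's):

```python
from fractions import Fraction as F

def cumulative_integrals(rates, breakpoints):
    """Running integral of a piecewise-constant rate function.

    rates[k] applies on the k-th interval; breakpoints are the interval
    endpoints (the first interval implicitly starts at 0). Returns the
    cumulative integral after each interval, i.e. the successive D-tilde values.
    """
    totals, running, prev = [], F(0), F(0)
    for rate, end in zip(rates, breakpoints):
        running += rate * (end - prev)
        totals.append(running)
        prev = end
    return totals

intervals = [F(2), F(3), F(5)]
# Case 1: g(Y(t)) = Y(t), so the rate is 3, then 2, then 1
print([str(v) for v in cumulative_integrals([F(3), F(2), F(1)], intervals)])
# ['6', '8', '10']
# Case 2: g(Y(t)) = 1/Y(t), so the rate is 1/3, then 1/2, then 1
print([str(v) for v in cumulative_integrals([F(1, 3), F(1, 2), F(1)], intervals)])
# ['2/3', '7/6', '19/6']
```

Exact rational arithmetic (`Fraction`) is used so the values 6, 8, 10 and 2/3, 7/6, 19/6 come out exactly rather than as floating-point approximations.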

Case 3 Consider scheduling Model 2 with \( g_{i} (Y(t)) = 1/Y(t) \) for \( Y(t) \in \{ 1,2,3\} \), f(1) = 1 and \( i \in \{ 1,2,3\} \). The optimal makespan is \( \tilde{D}_{3} = 10 \), independent of the schedule of jobs on machines (single- or multiple-machine scheduling), as in the following scenario:

$$ {\text{Machine 1:}}\;\;\;d_{1} = \mathop \smallint \limits_{0}^{{\tilde{D}_{1} }} \left( {\frac{1}{3}} \right){\text{d}}u $$
$$ {\text{Machine 2:}}\;\;\;d_{2} = \mathop \smallint \limits_{0}^{{\tilde{D}_{1} }} \left( {\frac{1}{3}} \right){\text{d}}u + \mathop \smallint \limits_{{\tilde{D}_{1} }}^{{\tilde{D}_{2} }} \left( {\frac{1}{2}} \right){\text{d}}u $$
$$ {\text{Machine 3:}}\;\;\;d_{3} = \mathop \smallint \limits_{0}^{{\tilde{D}_{1} }} \left( {\frac{1}{3}} \right){\text{d}}u + \mathop \smallint \limits_{{\tilde{D}_{1} }}^{{\tilde{D}_{2} }} \left( {\frac{1}{2}} \right){\text{d}}u + \mathop \smallint \limits_{{\tilde{D}_{2} }}^{{\tilde{D}_{3} }} 1{\text{d}}u. $$

Note that here the sum of the demands on the left-hand sides is a constant equal to 10.

Case 4 Consider scheduling Model 2 with \( g_{i} (Y(t)) = Y(t) \) for \( Y(t) \in \{ 1,2,3\} \), f(1) = 1 and \( i \in \{ 1,2,3\} \). The optimal makespan is \( \tilde{D}_{3} = 19/6 \), calculated as follows:

$$ {\text{Machine 1:}}\;\;\;d_{1} = \mathop \smallint \limits_{0}^{{\tilde{D}_{1} }} 3{\text{d}}u $$
$$ {\text{Machine 2:}}\;\;\;d_{2} = \mathop \smallint \limits_{0}^{{\tilde{D}_{1} }} 3{\text{d}}u + \mathop \smallint \limits_{{\tilde{D}_{1} }}^{{\tilde{D}_{2} }} 2{\text{d}}u $$
$$ {\text{Machine 3:}}\;\;\;d_{3} = \mathop \smallint \limits_{0}^{{\tilde{D}_{1} }} 3{\text{d}}u + \mathop \smallint \limits_{{\tilde{D}_{1} }}^{{\tilde{D}_{2} }} 2{\text{d}}u + \mathop \smallint \limits_{{\tilde{D}_{2} }}^{{\tilde{D}_{3} }} 1{\text{d}}u. $$

Here also, the sum of the demands on the left-hand sides is a constant equal to 10.
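In Cases 3 and 4, the completion times \( \tilde{D}_{j} \) can be recovered sequentially: between consecutive completions every unfinished job accrues demand at the current rate, and job j finishes once its accrued demand reaches \( d_{j} \). A minimal sketch of this computation (the function name is ours):

```python
from fractions import Fraction as F

def completion_times(demands, rates):
    """Sequential solve for the Model 2 scenarios in Example A.1.

    rates[j] is the rate in force between the (j-1)-th and j-th completions.
    All unfinished jobs accrue demand at the same rate in parallel, so once
    job j-1 finishes, job j still needs demands[j] - demands[j-1] more.
    """
    t, accrued, times = F(0), F(0), []
    for d, r in zip(demands, rates):
        t += (d - accrued) / r  # time needed to close the remaining demand gap
        accrued = d
        times.append(t)
    return times

demands = [F(2), F(3), F(5)]
# Case 3: g(Y(t)) = 1/Y(t) -> rates 1/3, 1/2, 1; makespan 10
print([str(t) for t in completion_times(demands, [F(1, 3), F(1, 2), F(1)])])
# ['6', '8', '10']
# Case 4: g(Y(t)) = Y(t) -> rates 3, 2, 1; makespan 19/6
print([str(t) for t in completion_times(demands, [F(3), F(2), F(1)])])
# ['2/3', '7/6', '19/6']
```

Note the symmetry with Cases 1 and 2: the same two rate sequences yield the same pair of values 10 and 19/6, with the roles of times and demands exchanged.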

In Cases 5–7, we consider malleable task scheduling under different assumptions, where the processing time of a job is exactly inversely proportional to the number of machines assigned to it. Thus, in each case the processing time of job \( j \in \{ 1,2,3\} \) is \( \tilde{d}_{j} = d_{j} /Y\left( t \right) \) for \( Y(t) = 1,2,3. \)

Case 5 Assume idle processors cannot join a group of operating processors in the middle of job processing, and processors cannot drop out and move to a different job before the job is completed. An optimal makespan is found by assigning all 3 machines to each job in turn. The optimal makespan is (2 + 3 + 5)/3 = 10/3, and this optimal schedule is unique. The next best schedule assigns 2 machines to job 3 (actual processing time 5/2) and one machine to job 2 (actual processing time 3), and then moves the two machines previously assigned to job 3 to job 1 (actual processing time 1). The makespan is then 5/2 + 1 = 7/2.
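The two schedules in Case 5 can be checked directly. A minimal sketch (variable names ours):

```python
from fractions import Fraction as F

d1, d2, d3 = F(2), F(3), F(5)  # basic processing times

# Optimal schedule: all 3 machines process each job in turn
makespan_opt = d1 / 3 + d2 / 3 + d3 / 3
print(makespan_opt)  # 10/3

# Next best schedule: 2 machines on job 3 and 1 machine on job 2 in parallel;
# the 2 machines move to job 1 once job 3 completes (no mid-processing join,
# so the Case 5 assumption is respected)
finish_3 = d3 / 2             # 5/2
finish_2 = d2 / 1             # 3
finish_1 = finish_3 + d1 / 2  # job 1 starts when job 3's machines are free
makespan_alt = max(finish_1, finish_2, finish_3)
print(makespan_alt)  # 7/2
```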

Case 6 Assume idle processors may join a group of operating processors at any point during job processing; however, dropping out and moving to a different job before the job is completed is not allowed. Here, the optimal schedule is the same as in Case 5.

Case 7 Assume idle processors may join a group of operating processors at any point during job processing, and dropping out and moving to a different job before the job is completed is also allowed. As in Cases 5 and 6, an optimal makespan is found by assigning all 3 machines to each job in turn, with makespan equal to 10/3. However, here the optimal schedule is not unique, as it was in Cases 5 and 6. The following schedule is another optimal solution. Assign 3 machines to job 3 for 3 units of its processing requirement, accounting for 3/3 = 1 unit of actual processing time (2 more units of this job remain to be processed). Assign 3 machines to job 2 for one unit of its processing requirement, accounting for 1/3 unit of actual processing time (again, 2 more units remain). Now each job has 2 units remaining; assign one machine to each job for 2 time units. The total elapsed time to finish all jobs (i.e., the makespan) is 3/3 + 1/3 + 2 = 10/3.
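The alternative optimal schedule in Case 7 can be verified by tracking elapsed time and remaining demand through its three phases. A minimal sketch (the bookkeeping is ours):

```python
from fractions import Fraction as F

remaining = {1: F(2), 2: F(3), 3: F(5)}  # remaining processing requirement
elapsed = F(0)

# Phase 1: all 3 machines on job 3 for 3 of its units -> 3/3 elapsed time
elapsed += F(3, 3); remaining[3] -= 3
# Phase 2: all 3 machines on job 2 for 1 of its units -> 1/3 elapsed time
elapsed += F(1, 3); remaining[2] -= 1
# Phase 3: one machine per job; each job has 2 units left -> 2 elapsed time
elapsed += 2
for j in remaining:
    remaining[j] -= 2

print(elapsed)                                  # 10/3
print(all(v == 0 for v in remaining.values()))  # True
```

All three remaining demands reach exactly zero and the elapsed time matches the optimal makespan 10/3, confirming the alternative schedule is feasible and optimal.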

This example shows that different models can produce different results. In particular, since Models 1 and 2 are inverses of each other when f(1) = 1 and \( t \ne 0 \), they provide analogous results (compare Cases 1–4). Note that in Cases 1 and 3 any schedule is optimal, while this is not so in Cases 2 and 4. Malleable task scheduling (Cases 5–7) gives very different results from Model 1 (Cases 1, 2) and Model 2 (Cases 3, 4). In particular, malleable task scheduling in Cases 5 and 6 (whose assumptions were made in [6]) has a unique optimal schedule in each case, whereas for Model 1 (Case 1) and Model 2 (Case 3) any schedule is optimal. When any of the assumptions of Cases 5, 6, or 7 holds, it is easy to prove that an optimal schedule exists that assigns all machines to all jobs irrespective of the order of job processing; it is important to realize that this is not to say that any schedule of jobs on machines is optimal. Also, note that a comparison of Cases 2 and 4 with Cases 5, 6, and 7 shows no apparent relationship between malleable task scheduling and Models 1 and 2.


Cite this article

Alidaee, B., Wang, H., Kethley, R.B. et al. A unified view of parallel machine scheduling with interdependent processing rates. J Sched 22, 499–515 (2019). https://doi.org/10.1007/s10951-019-00605-x
