
Robust Algorithms for Preemptive Scheduling

Abstract

Preemptive scheduling problems on parallel machines are classic problems. Given the goal of minimizing the makespan, they are polynomially solvable even for the most general model of unrelated machines. In these problems, a set of jobs is to be assigned to run on a set of m machines. A job can be split into parts arbitrarily, and these parts are to be assigned to time slots on the machines without parallelism, that is, for every job, at most one of its parts can be processed at any time.

Motivated by sensitivity analysis and online algorithms, we investigate the problem of designing robust algorithms for constructing preemptive schedules. Robust algorithms receive one piece of input at a time. They may change a small portion of the solution as an additional part of the input is revealed. The allowed amount of change depends on the size of the new piece of input. For scheduling problems, the supremum ratio between the total size of the jobs (or parts of jobs) which may be re-scheduled upon the arrival of a new job j, and the size of j, is called the migration factor.

We design a strongly optimal algorithm with migration factor \(1-\frac{1}{m}\) for identical machines. Strongly optimal algorithms avoid idle time and create solutions where the (non-increasingly) sorted vector of completion times of the machines is lexicographically minimal. In the case of identical machines this results not only in makespan minimization, but the created solution is also optimal with respect to any \(\ell_p\) norm (for p>1). We show that an algorithm with a smaller migration factor cannot be optimal with respect to the makespan or any other \(\ell_p\) norm, thus the result is best possible in this sense as well. We further show that neither uniformly related machines nor identical machines with restricted assignment admit an optimal algorithm with a constant migration factor. This lower bound holds both for makespan minimization and for any \(\ell_p\) norm. Finally, we analyze the case of two machines and show that it is still possible to maintain an optimal schedule with a small migration factor for two uniformly related machines and for two identical machines with restricted assignment.


References

  1. Alon, N., Azar, Y., Woeginger, G.J., Yadid, T.: Approximation schemes for scheduling. In: Proc. 8th Symp. on Discrete Algorithms (SODA), pp. 493–500. ACM/SIAM, New York/Philadelphia (1997)

  2. Aspnes, J., Azar, Y., Fiat, A., Plotkin, S., Waarts, O.: On-line load balancing with applications to machine scheduling and virtual circuit routing. J. ACM 44(3), 486–504 (1997)

  3. Azar, Y., Naor, J., Rom, R.: The competitiveness of on-line assignments. J. Algorithms 18(2), 221–237 (1995)

  4. Berman, P., Charikar, M., Karpinski, M.: On-line load balancing for related machines. J. Algorithms 35(1), 108–121 (2000)

  5. Caprara, A., Kellerer, H., Pferschy, U.: Approximation schemes for ordered vector packing problems. Nav. Res. Logist. 50(1), 58–69 (2003)

  6. Chen, B., van Vliet, A., Woeginger, G.J.: An optimal algorithm for preemptive on-line scheduling. Oper. Res. Lett. 18(3), 127–131 (1995)

  7. Correa, J.R., Skutella, M., Verschae, J.: The power of preemption on unrelated machines and applications to scheduling orders. Math. Oper. Res. 37(2), 379–398 (2012)

  8. Dósa, G., Epstein, L.: Preemptive online scheduling with reordering. SIAM J. Discrete Math. 25(1), 21–49 (2011)

  9. Ebenlendr, T., Jawor, W., Sgall, J.: Preemptive online scheduling: optimal algorithms for all speeds. Algorithmica 53(4), 504–522 (2009)

  10. Ebenlendr, T., Sgall, J.: Optimal and online preemptive scheduling on uniformly related machines. J. Sched. 12(5), 517–527 (2009)

  11. Englert, M., Özmen, D., Westermann, M.: The power of reordering for online minimum makespan scheduling. In: Proc. 48th Symp. Foundations of Computer Science (FOCS), pp. 603–612 (2008)

  12. Epstein, L.: Optimal preemptive on-line scheduling on uniform processors with non-decreasing speed ratios. Oper. Res. Lett. 29(2), 93–98 (2001)

  13. Epstein, L., Levin, A.: A robust APTAS for the classical bin packing problem. Math. Program. 119(1), 33–49 (2009)

  14. Epstein, L., Levin, A.: AFPTAS results for common variants of bin packing: a new method for handling the small items. SIAM J. Optim. 20(6), 3121–3145 (2010)

  15. Epstein, L., Levin, A.: Robust approximation schemes for cube packing. Manuscript (2010, in review)

  16. Epstein, L., Noga, J., Seiden, S.S., Sgall, J., Woeginger, G.J.: Randomized online scheduling on two uniform machines. J. Sched. 4(2), 71–92 (2001)

  17. Epstein, L., Sgall, J.: A lower bound for on-line scheduling on uniformly related machines. Oper. Res. Lett. 26(1), 17–22 (2000)

  18. Epstein, L., Tassa, T.: Optimal preemptive scheduling for general target functions. J. Comput. Syst. Sci. 72(1), 132–162 (2006)

  19. Fleischer, R., Wahl, M.: Online scheduling revisited. J. Sched. 3(5), 343–353 (2000)

  20. Gonzalez, T.F., Sahni, S.: Preemptive scheduling of uniform processor systems. J. ACM 25(1), 92–101 (1978)

  21. Graham, R.L.: Bounds for certain multiprocessing anomalies. Bell Syst. Tech. J. 45, 1563–1581 (1966)

  22. Horvath, E.C., Lam, S., Sethi, R.: A level algorithm for preemptive scheduling. J. ACM 24(1), 32–43 (1977)

  23. Huo, Y., Leung, J.Y.-T., Wang, X.: Preemptive scheduling algorithms with nested processing set restriction. Int. J. Found. Comput. Sci. 20(6), 1147–1160 (2009)

  24. Lawler, E.L., Labetoulle, J.: On preemptive scheduling of unrelated parallel processors by linear programming. J. ACM 25(4), 612–619 (1978)

  25. Lenstra, J.K., Shmoys, D.B., Tardos, É.: Approximation algorithms for scheduling unrelated parallel machines. Math. Program. 46(1–3), 259–271 (1990)

  26. Liu, J.W.S., Liu, C.L.: Bounds on scheduling algorithms for heterogeneous computing systems. In: Rosenfeld, J.L. (ed.) Proceedings of IFIP Congress. Information Processing, vol. 74, pp. 349–353 (1974)

  27. Liu, J.W.S., Yang, A.T.: Optimal scheduling of independent tasks on heterogeneous computing systems. In: Proceedings of the ACM National Conference, vol. 1, pp. 38–45. ACM, New York (1974)

  28. McNaughton, R.: Scheduling with deadlines and loss functions. Manag. Sci. 6(1), 1–12 (1959)

  29. Muntz, R.R., Coffman, E.G. Jr.: Optimal preemptive scheduling on two-processor systems. IEEE Trans. Comput. 18(11), 1014–1020 (1969)

  30. Muntz, R.R., Coffman, E.G. Jr.: Preemptive scheduling of real-time tasks on multiprocessor systems. J. ACM 17(2), 324–338 (1970)

  31. Sanders, P., Sivadasan, N., Skutella, M.: Online scheduling with bounded migration. Math. Oper. Res. 34(2), 481–498 (2009)

  32. Sgall, J.: A lower bound for randomized on-line multiprocessor scheduling. Inf. Process. Lett. 63(1), 51–55 (1997)

  33. Shachnai, H., Tamir, T., Woeginger, G.J.: Minimizing makespan and preemption costs on a system of uniform machines. Algorithmica 42(3–4), 309–334 (2005)

  34. Skutella, M., Verschae, J.: A robust PTAS for machine covering and packing. In: Proc. 18th European Symp. on Algorithms (ESA), pp. 36–47 (2010)

  35. Wen, J., Du, D.: Preemptive on-line scheduling for two uniform processors. Oper. Res. Lett. 23(3–5), 113–116 (1998)

Author information

Correspondence to Leah Epstein.

Additional information

An extended abstract of this paper appears in Proc. of ESA 2011.

Appendices

Appendix A: Fractional Restricted Assignment

In this section we discuss a relaxation where a job can be split arbitrarily among machines, as long as the total time dedicated to it is sufficient. That is, the constraint that a job cannot be processed on different machines in parallel is removed. In the cases of identical machines and uniformly related machines this makes the problem trivial even in the online environment; in order to obtain an optimal solution, each new job is simply split into m parts of proportional sizes, according to the required ratio between machine loads. This algorithm does not migrate any jobs or parts of jobs (so its migration factor is zero).
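
A minimal Python sketch of this splitting rule for makespan minimization, assuming the machine speeds are given as a list (identical machines correspond to all speeds being equal):

    def split_proportionally(p, speeds):
        # Fractional relaxation on uniformly related machines: machine i gets a
        # p * speeds[i] / sum(speeds) share of each arriving job, so that all
        # completion times remain equal and no migration is ever needed.
        total_speed = float(sum(speeds))
        return [p * s / total_speed for s in speeds]

    # Example: speeds 1, 1, 2 and an arriving job of size 8 give parts
    # [2.0, 2.0, 4.0]; every machine's completion time grows by exactly 2.
    print(split_proportionally(8.0, [1, 1, 2]))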

As for restricted assignment, simply splitting a job equally among its allowed machines leads to solutions whose makespan can be Ω(m) times the optimal makespan (for example, this happens for an instance consisting of m−1 unit-sized jobs, such that job j can be assigned to the machines {j,m}; splitting equally, machine m receives half of each job, for a load of \(\frac{m-1}{2}\), while an optimal solution has makespan \(\frac{m-1}{m}<1\)). In fact, the lower bound of Ω(log m) on the competitive ratio of any online algorithm for makespan minimization with restricted assignment [3] also holds for fractional assignment. This lower bound does not hold if bounded migration is allowed, so it is possible that a robust algorithm would have a better performance. The lower bounds on the migration factor proved in Sect. 3.3 are valid for fractional assignment. Thus, we could still hope to find an optimal robust algorithm with a migration factor of \(\frac{m-1}{2}\). As an optimal algorithm for restricted assignment cannot be online, it must contain a process in which jobs (or parts of jobs) are moved from machines that become too loaded as a result of the arrival of a new job to less loaded machines. We show a simple optimal algorithm of migration factor m−1 for makespan minimization for this case, which is the best possible result up to a constant multiplicative factor in the migration factor. This algorithm selects the jobs that should be re-assigned using network flow.

We now present the algorithm and show how to deal with the arrival of a new job j whose size is \(p_{j}\) and which can be processed on the machine set \(M_{j}\). First, we compute the new value of the optimal makespan via a solution of the following standard linear program, which minimizes the makespan \({\mathcal{M}}\) subject to the constraints that all jobs are processed and each machine finishes its allocated work by the makespan. The variable \(x_{ij}\) corresponds to the part of job j which is assigned to machine i: the linear program minimizes \({\mathcal{M}}\) subject to \(\sum_{i \in M_{j}} x_{ij} = p_{j}\) for every job j, \(\sum_{j : i \in M_{j}} x_{ij} \leq {\mathcal{M}}\) for every machine i, and \(x_{ij} \geq 0\).
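
A minimal sketch of this linear program using scipy.optimize.linprog, assuming the job sizes, the processing sets (machines numbered 0,…,m−1) and the number of machines are given as plain Python data; the function name and data layout are illustrative assumptions:

    import numpy as np
    from scipy.optimize import linprog

    def fractional_opt_makespan(sizes, allowed, m):
        # Minimize the makespan M subject to: every job is fully split among the
        # machines of its processing set, and every machine finishes by time M.
        n = len(sizes)
        index = {}                                  # (job, machine) -> column
        for j in range(n):
            for i in allowed[j]:
                index[(j, i)] = len(index)
        nvar = len(index) + 1                       # the last column is M
        c = np.zeros(nvar)
        c[-1] = 1.0
        A_eq = np.zeros((n, nvar))
        b_eq = np.array(sizes, dtype=float)
        A_ub = np.zeros((m, nvar))
        for (j, i), k in index.items():
            A_eq[j, k] = 1.0                        # sum_i x_{ij} = p_j
            A_ub[i, k] = 1.0                        # load of machine i ...
        A_ub[:, -1] = -1.0                          # ... minus M is at most 0
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=b_eq)
        return res.fun                              # the new value of opt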

Denote the value of this optimal solution by opt. Note that we cannot simply replace the current solution by the new solution of the linear program, because this may have a large migration factor. Next, we construct a flow network as follows. Our network has m+2 nodes: a source node s, a sink node t and one node for each machine. In what follows, for each machine ψ, we use the machine and its associated node in the network interchangeably. For every job τ for which the current solution (prior to the arrival of j) processes β time units on machine ψ, and every machine \(\phi\neq\psi\) in the processing set of τ, we have an arc from node ψ to node ϕ of capacity β. For every machine \(\psi\in M_{j}\), we have an arc (s,ψ) of infinite capacity, and for every machine ψ for which the current solution finishes at time \(C_{\psi}\) such that \(C_{\psi}<\textsc{opt}\), we have an arc (ψ,t) of capacity \(\textsc{opt}-C_{\psi}\). In this network we find a maximum flow from s to t whose value is constrained to be at most \(p_{j}\). Note that the number of parallel arcs between a pair of machines is at most n, and hence the computation of the network and of the maximum flow takes polynomial time.
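
A sketch of this construction using networkx, assuming a dictionary representation of the current schedule; parallel arcs between a pair of machines are merged into a single arc, which suffices for computing the value of the flow, and the bound of \(p_{j}\) on the flow value is enforced through an auxiliary super-source:

    import networkx as nx

    def max_rerouting_flow(current, allowed, loads, opt, p_j, M_j):
        # current: (job, machine) -> time units the job occupies there (before j)
        # allowed: job -> its processing set; loads: machine -> completion time
        G = nx.DiGraph()
        for (job, psi), beta in current.items():
            for phi in allowed[job]:
                if phi == psi:
                    continue
                old = G[psi][phi]["capacity"] if G.has_edge(psi, phi) else 0.0
                G.add_edge(psi, phi, capacity=old + beta)  # merged parallel arcs
        for psi in M_j:
            G.add_edge("s", psi)              # no capacity attribute = infinite
        for psi, c in loads.items():
            if c < opt:
                G.add_edge(psi, "t", capacity=opt - c)
        G.add_edge("super", "s", capacity=p_j)     # caps the total flow at p_j
        value, flow = nx.maximum_flow(G, "super", "t")
        return value, flow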

We will show in what follows that the value of the maximum flow is indeed the size of the new job, \(p_{j}\). We decompose this flow into paths (removing the flow along cycles if such cycles exist), and we can assume that each of the flow paths in the network is a simple path from s to t. Consider such a flow path π which is used to route α units of flow. We delete s and t from π, and denote by π′ the resulting path from a machine in \(M_{j}\) to a machine whose completion time in the current solution (without job j) is strictly smaller than opt. The last machine in the path must satisfy this property since it has an arc to t. If π′ consists of a single node, then the corresponding machine can receive a part of j of size α without violating the required completion time. Otherwise, each arc (ψ,ϕ) of π′ corresponds to a job which is currently processed on ψ, and we move α units of its processing time to ϕ. Note that applying this transformation for all the arcs along the path keeps the amount of processing time on the machines which are inner nodes of π′ unchanged, increases the completion time of the last machine along π′ by exactly α, and decreases the completion time of the first machine along π′ by α (to make room for the new job j). Thus, job j is assigned to the machines of \(M_{j}\) such that each machine \(\psi\in M_{j}\) processes j for a period of time equal to the flow along the arc (s,ψ), while every arc between two machines (ψ′,ϕ′) with a positive flow implies that a part of a given size (equal to the value of the flow on that arc) of a specific job (the job for which this arc was constructed) is moved from the first machine ψ′ to the second one ϕ′.
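
The decomposition into simple paths can be sketched directly on the flow dictionary returned by networkx; leftover flow on cycles is simply discarded, as described above:

    from collections import deque

    def peel_paths(flow, source="super", sink="t", eps=1e-12):
        # Repeatedly find a source-sink path of positive-flow arcs (BFS), peel
        # off its bottleneck value, and return the list of (path, amount) pairs.
        # The input flow dictionary is modified in place.
        paths = []
        while True:
            parent = {source: None}
            queue = deque([source])
            while queue and sink not in parent:
                u = queue.popleft()
                for v, f in flow.get(u, {}).items():
                    if f > eps and v not in parent:
                        parent[v] = u
                        queue.append(v)
            if sink not in parent:
                return paths                  # only cycle flow (if any) remains
            path, v = [sink], sink
            while parent[v] is not None:
                path.append(parent[v])
                v = parent[v]
            path.reverse()
            amount = min(flow[u][w] for u, w in zip(path, path[1:]))
            for u, w in zip(path, path[1:]):
                flow[u][w] -= amount
            paths.append((path, amount))

Each returned path, with the source and the sink removed, prescribes which parts are moved between consecutive machines and how much of the new job the first machine on the path receives.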

Therefore, the new solution is a schedule of all the jobs including j before time opt, and thus this is an optimal solution.

Lemma 13

The migration factor of the algorithm is at most m−1.

Proof

Consider a flow path π. Since π is simple, it has at most m+1 arcs, and thus π′ has at most m−1 arcs. Denote by \(\alpha_{\pi}\) the amount of flow along π; then the total flow along all the arcs of π′ is at most \(\alpha_{\pi}\cdot(m-1)\). Summing up over all flow paths, we conclude that the total flow of the arcs in the network (excluding the arcs which are adjacent to either s or t) is at most \(p_{j}\cdot(m-1)\). The total migration of jobs (or parts of jobs) in the current iteration of the algorithm is exactly the total flow of the arcs (excluding the arcs which are adjacent to either s or t), which is at most \(p_{j}\cdot(m-1)\), and therefore the migration factor is at most m−1. □

It remains to show that our algorithm will not fail to find a maximum flow of value \(p_{j}\) in the network.

Lemma 14

The value of the maximum flow in the network is at least \(p_{j}\).

Proof

Assume by contradiction that the value of the maximum flow is smaller than \(p_{j}\), and consider the residual flow network at the end of the maximum flow computation. Since this is a maximum flow, there is no path from s to t in this network. We denote by S the set of machines which are accessible from s in the residual network. We apply the changes to the solution as implied by our algorithm (though it does not give an allocation of \(p_{j}\) processing time units to job j). In this solution, each job i which is processed by some machine of S cannot be processed by any machine of \(M\setminus S\) (since otherwise, there would be an arc from S to \(M\setminus S\) in the residual network). Moreover, as there is no path from any node of S to t in the residual network, there is no arc from a machine in S to t in this network, and each machine of S processes a total size of jobs of exactly opt. Therefore, since j was not completely assigned, the total processing time of all jobs which must be processed by machines in S (that is, their processing set is contained in S) is strictly larger than |S|⋅opt. This contradicts the assumption that opt is the optimal cost of a solution to the problem. □

Therefore, we established the following theorem.

Theorem 15

There exists an optimal algorithm for makespan minimization in the fractional restricted assignment model whose migration factor is at most m−1.

Appendix B: Two Machines

B.1 An Algorithm of Migration Factor 1 for Two Uniformly Related Machines

In this section we design a class of algorithms. Without loss of generality we consider two uniformly related machines of speed ratio s>1, where \(s_{1}=1\) and \(s_{2}=s\). Each algorithm maintains an optimal schedule using a migration factor of 1 for a given norm, which is the \(\ell_{p}\) norm for some p>1 or the \(\ell_{\infty}\) norm (which corresponds to makespan minimization). Note that by the lower bound of m−1 on the migration factor of an optimal algorithm, in all cases this is the best possible migration factor.

Given a prefix of t jobs of the input for the makespan minimization problem, we have \(\textsc{opt}_{t}=\max\{\frac{p_{t}^{\max}}{s},\frac{P_{t}}{s+1}\}\) [22, 26, 27] (here and in the next section \(\textsc{opt}_{t}\) denotes an optimal schedule for the first t jobs as well as its cost). If \(\textsc{opt}_{t}=\frac{P_{t}}{s+1}\) then both machines have equal loads, and clearly there is no idle time. If \(\textsc{opt}_{t}=\frac{p_{t}^{\max}}{s}\) then the job of maximum size is assigned to the second machine, and it is possible to avoid idle time on the first machine by assigning the other jobs to run there during the time interval \([0,P_{t}-p_{t}^{\max}]\). This results in a strongly optimal solution (but for uniformly related machines, this does not imply the minimization of any \(\ell_{p}\) norm). In the first case there are inputs where two preemptions are required to achieve these loads, and in the second case it is always possible to avoid preemptions altogether. However, since we construct a robust algorithm which modifies the schedule multiple times, our algorithm will use a linear number of preemptions (in the number of jobs).

For the \(\ell_{p}\) norm, using the results of [18], an optimal solution cannot have any idle time, and the ratio between the machine loads should ideally be \(s^{\frac{1}{p-1}}\), so that the second (faster) machine is more loaded. Let \(\sigma_{p}=s^{\frac{1}{p-1}}\). Thus the loads of the first and second machine should be \(\frac{1}{1+s\cdot\sigma_{p}}P_{t}\) and \(\frac{\sigma_{p}}{1+s\cdot\sigma_{p}}P_{t}\), respectively, if \(\frac{s\cdot\sigma_{p}}{1+s\cdot\sigma_{p}}P_{t} \geq p_{t}^{\max}\), and otherwise they are \(P_{t}-p_{t}^{\max}\) and \(\frac{p_{t}^{\max}}{s}\). We use \(\sigma_{\infty}=1\) in the case that we are dealing with the makespan. Using this definition of \(\sigma_{\infty}\), the last properties hold for the case p=∞ as well. We use \(L_{i}^{t}\) to denote the load of machine i in an optimal schedule according to some specific \(\ell_{p}\) norm (where 1<p<∞), or in a strongly optimal schedule with respect to makespan (that is, for p=∞), for the prefix of t≥0 jobs.
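
A minimal sketch of this load computation, treating p=∞ as the makespan case with \(\sigma_{\infty}=1\) (names are illustrative):

    def target_loads(P, pmax, s, p=float("inf")):
        # Target completion times (L_1, L_2) on two related machines of speeds
        # 1 and s > 1, for the l_p norm; p = inf corresponds to the makespan.
        sigma = 1.0 if p == float("inf") else s ** (1.0 / (p - 1.0))
        if s * sigma * P / (1.0 + s * sigma) >= pmax:
            return P / (1.0 + s * sigma), sigma * P / (1.0 + s * sigma)
        return P - pmax, pmax / s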

Lemma 16

The function \(L_{2}^{t}\) is a non-decreasing function of t, the function \(L_{1}^{t}\) is a strictly increasing function of t, and \(L_{1}^{t}\leq L_{2}^{t}\) for all t.

Proof

The property \(L_{1}^{t}\leq L_{2}^{t}\) holds by the discussion regarding optimal schedules above.

We consider four cases. If \(L_{2}^{t-1}=\frac{\sigma_{p}}{1+s\cdot\sigma_{p}}P_{t-1}\) and \(L_{2}^{t}=\frac{\sigma_{p}}{1+s\cdot\sigma_{p}}P_{t}\), then \(L_{1}^{t-1}=\frac{1}{1+s\cdot\sigma_{p}}P_{t-1}\) and \(L_{1}^{t}=\frac{1}{1+s\cdot\sigma_{p}}P_{t}\). Since \(P_{t}=P_{t-1}+p_{t}>P_{t-1}\), we are done.

Otherwise, if \(L_{2}^{t-1}=\frac{\sigma_{p}}{1+s\cdot\sigma_{p}}P_{t-1}\) and \(L_{2}^{t}=\frac{p_{t}^{\max}}{s}\), we have \(\frac{s\cdot \sigma_{p}}{1+s\cdot\sigma_{p}}P_{t} < p_{t}^{\max}\) but \(\frac{s\cdot \sigma_{p}}{1+s\cdot\sigma_{p}}P_{t-1} \geq p_{t-1}^{\max}\). Using \(P_{t}>P_{t-1}\) we get \(p_{t-1}^{\max}<p_{t}^{\max}\), and we conclude that \(p_{t}^{\max}=p_{t}\). In this case, \(L_{1}^{t}=P_{t}-p_{t}=P_{t-1}\), and clearly \(L_{1}^{t-1} < P_{t-1}\). In addition, \(L_{2}^{t-1}=\frac{\sigma_{p}}{1+s\cdot\sigma_{p}}P_{t-1}<\frac{\sigma_{p}}{1+s\cdot\sigma_{p}}P_{t}<\frac{p_{t}^{\max}}{s}=L_{2}^{t}\).

Otherwise, if \(L_{2}^{t-1}=\frac{p_{t-1}^{\max}}{s}\) and \(L_{2}^{t}=\frac{ \sigma_{p}}{1+s\cdot\sigma_{p}}P_{t}\), we have \(\frac{s\cdot\sigma_{p}}{1+s\cdot\sigma_{p}}P_{t} \geq p_{t}^{\max}\) but \(\frac{s\cdot\sigma_{p}}{1+s\cdot\sigma_{p}}P_{t-1} < p_{t-1}^{\max}\). We get \(L_{2}^{t}\geq\frac{p_{t}^{\max}}{s} \geq \frac{p_{t-1}^{\max}}{s} =L_{2}^{t-1}\) and \(L_{1}^{t}=\frac{1}{1+s\cdot\sigma_{p}}P_{t}\) while \(L_{1}^{t-1}<\frac{1}{1+s\cdot\sigma_{p}}P_{t-1}\), so \(L_{1}^{t-1}<L_{1}^{t}\).

Otherwise, we must have \(L_{2}^{t-1}=\frac{p_{t-1}^{\max}}{s}\) and \(L_{2}^{t}=\frac{p_{t}^{\max}}{s}\), and there are two cases. If \(p_{t}^{\max}=p_{t-1}^{\max}\) then \(L_{2}^{t}=L_{2}^{t-1}\), and \(L_{1}^{t}=P_{t}-p_{t}^{\max}=P_{t}-p_{t-1}^{\max}>P_{t-1}-p_{t-1}^{\max}=L_{1}^{t-1}\). Otherwise, \(p_{t}^{\max}=p_{t}\) and \(p_{t}^{\max}>p_{t-1}^{\max}\). Thus \(L_{2}^{t-1}<L_{2}^{t}\) and \(L_{1}^{t-1}=P_{t-1}-p_{t-1}^{\max}\) while \(L_{1}^{t}=P_{t-1}\), so \(L_{1}^{t}>L_{1}^{t-1}\). □

We show that given an optimal schedule for t−1 jobs, it can be modified using a migration factor of 1 into an optimal schedule for t jobs. The new machine loads \(L_{1}^{t}\) and \(L_{2}^{t}\) are computed first, and the assignment of the new job and the modification of the schedule are based on their values.

Case 1. The load of the second machine does not change, that is, \({L_{2}^{t}=L_{2}^{t-1}}\)

In this case, in both \(\textsc{opt}_{t-1}\) and \(\textsc{opt}_{t}\), the second machine runs a single job, which is not the new job. The new job is assigned non-preemptively to the first machine, to run during the time slot \([L_{1}^{t-1},L_{1}^{t}]\). No new preemptions are introduced, and no further modifications are applied.

Case 2. The new job needs to run alone on the second machine, that is, \({L_{2}^{t}=\frac{p_{t}}{s}}\)

In this case, all parts of jobs previously assigned to run on the second machine are moved to the first machine to run during the time slot \([L_{1}^{t-1},L_{1}^{t}]\), and the new job is assigned to the second machine during the time slot \([0,L_{2}^{t}]\). The total size of the migrating jobs is at most \(sL_{2}^{t-1} \leq sL_{2}^{t}=p_{t}\). No new preemptions are introduced.

In the remaining cases we have \(L_{2}^{t}=\frac{ \sigma_{p}}{1+s\cdot\sigma_{p}}P_{t}\) and \(L_{1}^{t}=\frac{1}{1+s\cdot\sigma_{p}}P_{t}\).

Case 3. The new job is sufficiently small to run alone on the slow machine, that is, \({p_{t} \leq L_{1}^{t}}\)

The new job is assigned during the time slot \([L_{1}^{t}-p_{t},L_{1}^{t}]\) on the slow machine. By the condition of this case, \(L_{1}^{t}-p_{t}\geq0\). Clearly, \(P_{t}=L_{1}^{t}+sL_{2}^{t}\) and \(P_{t}-p_{t}=L_{1}^{t-1}+sL_{2}^{t-1}\), which implies \(s(L_{2}^{t}-L_{2}^{t-1})=-L_{1}^{t}+L_{1}^{t-1}+p_{t}\). Since \(L_{2}^{t} \geq L_{2}^{t-1}\), we have \(L_{1}^{t}-p_{t} \leq L_{1}^{t-1}<L_{1}^{t}\). If \(L_{2}^{t} > L_{2}^{t-1}\) then the parts of jobs which are assigned during the time slot \([L_{1}^{t}-p_{t},L_{1}^{t-1}]\) on the slow machine are moved to the time slot \([L_{2}^{t-1},L_{2}^{t}]\) on the fast machine. The migration factor cannot exceed 1 since \(L_{1}^{t-1}-(L_{1}^{t}-p_{t})=L_{1}^{t-1}-L_{1}^{t}+p_{t} \leq p_{t}\). At most one new preemption is introduced (if \(L_{1}^{t}-p_{t}>0\) and it was not a preemption time on the first machine, then the part of the job which was running at time \(L_{1}^{t}-p_{t}\) on the first machine is cut further into two parts). This would result in a schedule with the required completion times and no idle time, so all jobs will be assigned. It is left to show that no overlaps between the times that a job is processed on the two machines are created. We show that \(L_{2}^{t-1} \geq L_{1}^{t}-p_{t}\), which implies that the moved jobs do not run in parallel to any job except for the new job. Indeed, we have \(L_{2}^{t-1} \geq L_{1}^{t-1} \geq L_{1}^{t}-p_{t}\).

Case 4. The new job must have a part assigned to the fast machine, that is, \({p_{t} > L_{1}^{t}}\)

We split this case into two sub-cases. If \(L_{1}^{t}+s(L_{2}^{t}-L_{1}^{t}) \leq p_{t}\), then find a point \(0 \leq\theta\leq L_{1}^{t}\) so that \(\theta+ s(L_{2}^{t}-\theta) = p_{t}\). This point must exist due to the condition of this sub-case and since \(sL_{2}^{t} \geq p_{t}\). Assign the new job during the time slot [0,θ] on the slow machine and during \([\theta,L_{2}^{t}]\) on the fast machine. The jobs previously assigned during the time slots which will be used by job t are moved to other free time slots. Thus, the total size of the moved jobs does not exceed p t , and since the new job will be assigned during the entire time that some machine will be active, no overlaps are created for any job. At most three new preemptions are created; the new job is preempted once, and two additional preemptions are due to the following. If jobs are only moved to the first machine, then for each machine, one part previously assigned to it may be cut. If there is free space on the second machine, then parts of jobs are moved only from the first machine, a part of a job may be cut in order to remove the required amount of total processing time, and another part may be cut to split the processing time between the two machines.

Otherwise, we assign the new job into the time slot \([0,L_{1}^{t}]\) on the slow machine and the time slot \([L_{2}^{t}-\gamma,L_{2}^{t}]\) on the fast machine, where \(\gamma=\frac{p_{t}-L_{1}^{t}}{s}\). In this case we have \(L_{2}^{t}-\gamma> L_{1}^{t}\) (and thus the new job does not run in parallel on the two machines), and in fact \(L_{2}^{t}-\gamma \geq L_{2}^{t-1}\), since the time slot \([L_{2}^{t-1},L_{2}^{t}-\gamma]\) is the only possible time slot which can receive the jobs previously assigned to the slow machine. The only job which runs on both machines is the new job, so no overlap is created. The migration factor is at most 1 since the only moved parts of jobs are those that the new job is assigned instead of them, in particular, the total size of moved parts is at most \(L_{1}^{t-1} < L_{1}^{t}\) since they are removed from the first machine, while \(p_{t}>L_{1}^{t}\). One new preemption is created (since the new job is preempted).
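
The four cases can be summarized schematically as follows; this sketch only tracks the target loads and an upper bound on the migrated size, not the actual time slots or preemptions, and its names are illustrative:

    def update_two_related(L1_old, L2_old, P_old, pmax_old, p_new, s, p=float("inf")):
        # One arrival on two related machines (speeds 1 and s > 1): return the
        # new target loads and an upper bound on the total size that migrates.
        P, pmax = P_old + p_new, max(pmax_old, p_new)
        sigma = 1.0 if p == float("inf") else s ** (1.0 / (p - 1.0))
        if s * sigma * P / (1.0 + s * sigma) >= pmax:
            L1, L2 = P / (1.0 + s * sigma), sigma * P / (1.0 + s * sigma)
        else:
            L1, L2 = P - pmax, pmax / s
        if abs(L2 - L2_old) < 1e-12:         # Case 1: second machine unchanged
            migrated = 0.0
        elif abs(L2 - p_new / s) < 1e-12:    # Case 2: new job alone on machine 2
            migrated = s * L2_old
        elif p_new <= L1:                    # Case 3: new job fits on machine 1
            migrated = max(0.0, L1_old - (L1 - p_new))
        else:                                # Case 4: new job uses both machines
            migrated = min(p_new, L1_old)
        assert migrated <= p_new + 1e-9      # the migration factor is at most 1
        return L1, L2, migrated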

We have proved the following theorem.

Theorem 17

For every \(\ell_{p}\) norm (1<p≤∞), there exists a polynomial time algorithm of migration factor 1 which maintains an optimal schedule on two uniformly related machines, where the number of preemptions is linear in the number of jobs.

B.2 An Algorithm of Migration Factor 1 for Two Machines in the Restricted Assignment Model

The case of two machines is a special case of nested processing sets, which was studied for preemptive scheduling in [23]. A strongly optimal algorithm of running time \(O(mn+n\log n)\) is given in that paper, in addition to an optimal algorithm for makespan minimization of running time \(O(n\log n)\).

In this section we design an optimal algorithm with respect to makespan with migration factor 1. The algorithm does not use idle time and therefore it is strongly optimal (since there are just two machines) and minimizes any \(\ell_{p}\) norm of the machine loads. By Theorem 12, this is the best possible migration factor of any strongly optimal algorithm. We use the following notation. For a prefix of t jobs of the input and for i=1,2, we let \(P^{i}_{t}=\sum_{1 \leq j \leq t, M_{j}=\{i\}} p_{j}\), and \(P_{t}^{b}=P_{t}-P_{t}^{1}-P_{t}^{2}\). We also let \(p_{t}^{b}=\max_{1 \leq j \leq t, M_{j}=\{1,2\}} p_{j}\). Clearly,

\(\textsc{opt}_{t}=\max \bigl\{ \tfrac{P_{t}}{2},\, p_{t}^{b},\, P_{t}^{1},\, P_{t}^{2} \bigr\}.\)  (1)

Note that this bound is non-decreasing as a function of t.
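
In code, the bound (1) is just a maximum of four quantities (argument names are illustrative):

    def opt_two_restricted(P1, P2, Pb, pb):
        # Bound (1): P1 and P2 are the total sizes of the jobs restricted to
        # machine 1 and machine 2, Pb is the total size of the jobs allowed on
        # both machines, and pb is the largest such job.
        return max((P1 + P2 + Pb) / 2.0, pb, P1, P2)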

First, we describe a simple offline algorithm which achieves the bound (1). This algorithm is not robust but it allows us to compute the machine loads at each step. Consider an input of t jobs. The following algorithm can be used if \(\max\{P_{t}/2,p_{t}^{b},P_{t}^{1}\} \geq P_{t}^{2}\). Otherwise, exchange the roles of the two machines. Let \(L_{1}^{t}=\max\{P_{t}/2,p_{t}^{b},P_{t}^{1}\}\) and \(L_{2}^{t}=P_{t}-L_{1}^{t}\). Assign the jobs with the processing set {1} consecutively during the time slot \([0,P_{t}^{1}]\) on the first machine. Assign the jobs with the processing set {1,2} during the time slot \([P_{t}^{1},L_{1}^{t}]\) on the first machine and continue in the time slot \([0,P_{t}^{b}+P_{t}^{1}-L_{1}^{t}]\) on the second machine. Note that \(L_{1}^{t} \leq P_{t}^{1}+P_{t}^{b}\), thus this part of the assignment is well-defined. The jobs with the processing set {1,2} can be assigned in an arbitrary order, but for technical reasons we add the requirement that among these jobs, the job which is assigned first should be a job of processing time \(p_{t}^{b}\). The jobs with the processing set {2} are assigned during the time slot \([P_{t}^{b}+P_{t}^{1}-L_{1}^{t},L_{2}^{t}]\) on the second machine. We have \(L_{2}^{t}-(P_{t}^{b}+P_{t}^{1}-L_{1}^{t})=L_{1}^{t}+L_{2}^{t}-P_{t}^{b}-P_{t}^{1}=P_{t}-P_{t}^{b}-P_{t}^{1}=P_{t}^{2}\), so all jobs are assigned. There may be at most one preemption. Since \(L_{1}^{t}\geq p_{t}^{b}\), there is no overlap between the time slots that the preempted job is assigned to, since an overlap would imply that the size of some job that can be assigned to both machines exceeds \(L_{1}^{t}\). The migration factor of this algorithm is unbounded, and therefore we will design a new robust algorithm.
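
A minimal sketch of this offline load computation (names are illustrative); the last component reports whether the roles of the machines were exchanged:

    def offline_loads(P1, P2, Pb, pb):
        # Loads of the strongly optimal schedule built by the offline algorithm.
        P = P1 + P2 + Pb
        swapped = False
        if max(P / 2.0, pb, P1) < P2:        # machine 2 dominates: exchange roles
            P1, P2, swapped = P2, P1, True
        L1 = max(P / 2.0, pb, P1)            # equals opt_t by (1)
        L2 = P - L1
        # report the loads of the actual machines 1 and 2
        return (L2, L1, swapped) if swapped else (L1, L2, swapped)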

Note that if \(\textsc{opt}_{t}=p_{t}^{b}\), then \(L_{1}^{t}=p_{t}^{b}\), and a job of this size is assigned to the time slot \([P_{t}^{1},L^{t}_{1}]\) on the first machine and to the time slot \([0,P_{t}^{1}]\) on the second machine. In this case \(p_{t}^{b} \leq P_{t}^{b}\), and \(L^{t}_{2}=P_{t}-p_{t}^{b}\geq P_{t}^{1}\). The algorithm creates an arbitrary strongly optimal schedule with the invariant that if \(\textsc{opt}_{t}=p_{t}^{b}\), then the time slot \([L^{t}_{2},p_{t}^{b}]\) on the first machine does not contain anything except for a part of a job of size \(p_{t}^{b}\).

Lemma 18

The loads \(L_{1}^{t}\) and \(L_{2}^{t}\) are non-decreasing as functions of t.

Proof

We prove the claim by induction. For the base case we have \(L_{i}^{1}\geq L_{i}^{0}=0\) for i=1,2.

Consider the first machine. If \(\textsc{opt}_{t}=P_{t}^{2}\), then \(L_{1}^{t}=P_{t}-P_{t}^{2}\) and \(L_{1}^{t} \geq P_{t-1}+p_{t}-P_{t-1}^{2}-p_{t} =P_{t-1}-P_{t-1}^{2} \geq L_{1}^{t-1}\) using \(P_{t}^{2} \leq P_{t-1}^{2}+p_{t}\) (and since the jobs with the processing set {2} cannot be assigned to the first machine). We are left with the case \(L_{1}^{t}=\textsc{opt}_{t}\) and \(\textsc{opt}_{t} > P_{t}^{2}\). In this case, if \(L_{1}^{t-1}=\textsc{opt}_{t-1}\), then we are done using \(\textsc{opt}_{t}\geq \textsc{opt}_{t-1}\). Otherwise \(\textsc{opt}_{t-1}=P_{t-1}^{2} \geq \frac{P_{t-1}}{2}\), so \(P_{t-1}-P_{t-1}^{2} \leq P_{t-1}^{2}\). Hence, \(L_{1}^{t} =\textsc{opt}_{t} \geq P_{t}^{2} \geq P_{t-1}^{2} \geq P_{t-1}-P_{t-1}^{2}=L_{1}^{t-1}\).

Consider the second machine. First assume \(\textsc{opt}_{t-1}=P_{t-1}^{2}\), and thus \(L_{2}^{t-1}=P_{t-1}^{2}\). We have \(L_{2}^{t} \geq P_{t}^{2}\), and we are done using \(P_{t}^{2} \geq P_{t-1}^{2}\). Next, assume \(\textsc{opt}_{t-1}>P_{t-1}^{2}\), and thus \(L_{2}^{t-1}=P_{t-1}-\textsc{opt}_{t-1}\). If \(\textsc{opt}_{t}>P_{t}^{2}\), then \(L_{2}^{t}=P_{t}-\textsc{opt}_{t}\), and we have \(\textsc{opt}_{t} \leq \textsc{opt}_{t-1}+p_{t}\), thus \(L_{2}^{t}=P_{t}-\textsc{opt}_{t} \geq P_{t-1}+p_{t}-\textsc{opt}_{t-1}-p_{t}=L_{2}^{t-1}\). Finally, if \(\textsc{opt}_{t}=P_{t}^{2}\) and \(L_{2}^{t}=P_{t}^{2}\), then, using \(\textsc{opt}_{t-1} \geq\frac{P_{t-1}}{2}\), we find that \(L_{2}^{t-1} \leq \frac{P_{t-1}}{2}\), while \(L_{2}^{t} = P_{t}^{2} \geq\frac{P_{t}}{2} > \frac{P_{t-1}}{2}\). □

The action of the algorithm depends both on the structure of the optimal solutions before the assignment of the new job and after the assignment. The structure of the optimal solution is defined by the maximum term in the right hand side of (1), breaking ties in favor of \(\textsc{opt}_{t}\) being expressed by the second term, and otherwise arbitrarily, but the role of machines is switched only if \(\textsc{opt}_{t}=P^{2}_{t}\). In most of the cases, the only moved parts of jobs are moved to make room for the new job. In such cases clearly the migration factor is no larger than 1. The number of preemptions is at most quadratic in the number of jobs. In all cases except for cases 1 and 2, \(\textsc{opt}_{t}>p_{t}^{b}\).
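
A rough sketch of this structure computation; the order of the tests reflects one possible reading of the tie-breaking rule above:

    def structure(P1, P2, Pb, pb):
        # Which term of (1) determines opt_t: ties go to the job term p_t^b, and
        # P_t^2 is reported only when it is the unique maximum, so that the
        # roles of the machines are switched only in that case.
        P = P1 + P2 + Pb
        opt = max(P / 2.0, pb, P1, P2)
        if opt == pb:
            return "p^b"        # handled by Cases 1 and 2
        if opt == P1:
            return "P^1"        # Cases 3, 4 and 7 with i = 1
        if opt == P / 2.0:
            return "average"    # Cases 5, 6 and 8
        return "P^2"            # the roles of the machines are exchanged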

Case 1. We have \(M_{t}=\{1,2\}\) and \(p_{t} \geq P_{t-1}\)

In this case \(p_{t} \geq P^{i}_{t-1}=P^{i}_{t}\), for i=1,2, and the new job determines the new optimal makespan, i.e., \(\textsc{opt}_{t}=p_{t}\). We apply the offline algorithm above, i.e., all jobs are rescheduled to obtain a strongly optimal schedule. The migration factor is 1 since the total size of moved jobs is at most \(P_{t-1} \leq p_{t}\). The resulting schedule has at most one preemption. The resulting loads are \(L_{1}^{t}=p_{t}\) and \(L_{2}^{t}=P_{t-1}\).

Case 2. The new optimal makespan is determined by a previous job of maximum size and processing set {1,2}, that is, \(\textsc{opt}_{t}=p_{k}\) for k<t and \(M_{k}=\{1,2\}\)

In this case we also have \(\textsc{opt}_{t-1}=p_{k}\) since \(p_{k} \geq P_{t}^{i} \geq P_{t-1}^{i}\) for i=1,2, and \(p_{k} \geq\frac{P_{t}}{2} >\frac{P_{t-1}}{2}\). Therefore, \(L_{1}^{t}=L_{1}^{t-1}\), \(L_{2}^{t-1}=P_{t-1}-p_{k}\) and \(L_{2}^{t}=P_{t}-p_{k} \leq p_{k}=L_{1}^{t}=L_{1}^{t-1}\). By the invariant, there is a single job of size \(p_{k}=\textsc{opt}_{t}\) running in the time slot \([L_{2}^{t-1},L_{1}^{t-1}]\) on the first machine, and without loss of generality this is job k. If \(2\in M_{t}\), then the new job is assigned in the time slot \([P_{t-1}-p_{k},P_{t}-p_{k}]\) on the second machine. Otherwise, the part of job k assigned during this time slot on the first machine is moved to the same time slot on the second machine, and the new job takes its place. In this last case at most two new preemptions are created (both of job k), and the invariant of the algorithm is maintained since job k is assigned to the first machine during the time slot \([L_{2}^{t},L_{1}^{t}]\).

Case 3. There exists i such that the new optimal makespan is determined by \({P_{t}^{i}}\) and the old optimal makespan was determined by \({P_{t-1}^{i}}\), that is, \({\textsc{opt}_{t}=P_{t}^{i}}\) and \({\textsc{opt}_{t-1}=P_{t-1}^{i}}\)

Since \(\textsc{opt}_{t}=P_{t}^{i}\), every strongly optimal solution has the property that machine i must run all jobs of the processing set {i}, and the other machine must run all other jobs. We distinguish two sub-cases. We have \(L^{t}_{i}>L^{t-1}_{i}\) if and only if \(M_{t}=\{i\}\). In this case \(L^{t-1}_{i}=P_{t-1}^{i}\) and \(L^{t}_{i}=P_{t}^{i}=P_{t-1}^{i}+p_{t}\). The new job can be assigned completely to machine i during the time slot \([P_{t-1}^{i},P_{t}^{i}]\). Otherwise (if \(L^{t}_{i}=L^{t-1}_{i}\) and \(3-i\in M_{t}\)), \(L^{t}_{3-i}=L^{t-1}_{3-i}+p_{t}\), and the job is assigned completely to the other machine during the time slot \([L^{t-1}_{3-i},L^{t-1}_{3-i}+p_{t}]\). We have \(L^{t-1}_{3-i}+p_{t}=P_{t}-P_{t}^{i} \leq P_{t}^{i}=L_{i}^{t}\), since \(\textsc{opt}_{t}=P_{t}^{i} \geq\frac{P_{t}}{2}\). No jobs are migrated and no new preemptions are created.

Case 4. There exists i such that the new optimal makespan is determined by \({P_{t}^{i}}\) and the old optimal makespan was determined by the jobs whose processing set consists only of the other machine 3−i, that is, \({\textsc{opt}_{t}=P_{t}^{i}}\) and \({\textsc{opt}_{t-1}=P_{t-1}^{3-i}}\)

In this case, the schedule prior to the arrival of the new job is such that the jobs with the processing set {1,2} are assigned to machine i, and after the arrival of the new job, all such jobs are assigned to the other machine. The new job clearly has the processing set {i}. Thus, the algorithm moves all such parts of jobs of processing set {1,2} to the other machine, and the new job is assigned into the created gaps, and into \([L^{t-1}_{i},L^{t}_{i}]\). The total size which is allocated for the assignment of the new job is \(P_{t}^{b}+L^{t}_{i}-L^{t-1}_{i}=P_{t}^{b}+P_{t}^{i}-(P_{t-1}^{b}+P_{t-1}^{i})=P_{t-1}^{b}+(P_{t-1}^{i}+p_{t})-P_{t-1}^{b}-P_{t-1}^{i}=p_{t}\). Thus, the total size of the moved jobs does not exceed \(p_{t}\).

Note that the moved jobs are assigned non-preemptively, which holds even for a job which was assigned preemptively in \(\textsc{opt}_{t-1}\). We show that the number of preemptions increases additively by at most t−1. Let r be the number of parts into which the new job is split. There are at least r−1 parts of jobs with the processing set {1,2} which are moved. The number of preemptions for the new job is r−1. If \(r \leq t\) then we are done. Otherwise, the moved jobs were previously split into at least r−1 parts in total (counting complete jobs as well) which were combined into at most t−1 parts as a result of moving them to the other machine. Thus, at least \(r-t\) preemptions no longer exist, while r−1 new preemptions are created (for the new job), increasing the number of preemptions by at most t−1.

Case 5. There exists i such that the new optimal makespan is determined by the average load and the old optimal makespan was determined by the jobs whose processing set consists only of machine i, that is, \({\textsc{opt}_{t}=\frac{P_{t}}{2}}\) and \({\textsc{opt}_{t-1}=P_{t-1}^{i}}\)

In this case, the other machine 3−i must be in the processing set of the new job. We distinguish two cases. If the processing set is \(M_{t}=\{1,2\}\), assign the new job into the time slot \([L^{t-1}_{i},P_{t}/2]\) on machine i, and the time slot \([P_{t}/2-p_{t},L^{t-1}_{i}]\) on the other machine. Note that \(\frac{P_{t}}{2} \geq p_{t}\) since \(\textsc{opt}_{t} \geq p_{t}^{b} \geq p_{t}\) and \(L^{t-1}_{3-i} \leq L^{t-1}_{i}=\textsc{opt}_{t-1}\). The parts of jobs which were assigned to run on machine 3−i during the time slot \([P_{t}/2-p_{t},L^{t-1}_{3-i}]\) are moved to run during the time slot \([L^{t-1}_{i},L^{t}_{3-i}]\) on the same machine (in parallel to the new job). Note that \(L^{t}_{3-i}=\frac{P_{t}}{2} =L^{t}_{i} \geq L^{t-1}_{i}\). In this case at most two new preemptions are introduced, one for the new job and one in the case that the part of a job running at time \(P_{t}/2-p_{t}\) on machine 3−i was cut into two parts.

If the processing set of t is {3−i} then parts of jobs with the processing set {1,2} are moved from machine 3−i to machine i into the time slot \([L^{t-1}_{i}=P_{t}^{i},P_{t}/2]\). Note that previously machine i only contained jobs of the processing set {i} and that \(P_{t}^{i} \leq\frac{P_{t}}{2}\) and \(P_{t}^{3-i} \leq \frac{P_{t}}{2}\), so it is necessary to move a total size of \(\frac{P_{t}}{2}-P_{t}^{i}\) where (using \(P_{t}^{3-i} \leq\frac{P_{t}}{2}\), which gives \(P_{t}-P_{t}^{b}-P_{t}^{i} \leq\frac{P_{t}}{2}\), or alternatively, \(P_{t}-2P_{t}^{i} \leq2P_{t}^{b}\)) \(0 \leq\frac{P_{t}}{2}-P_{t}^{i} \leq P_{t}^{b}\). Jobs are moved one by one, until the correct total size is moved. Each such job is assigned non-preemptively, except for possibly the last moved job which may be moved partially. The new job is assigned into the created gaps and into the time slot \([L^{t-1}_{3-i},L^{t}_{3-i}]\). Since the jobs moved to machine i are assigned into the time slot \([L^{t-1}_{i},L^{t}_{i}]\) and \(L^{t-1}_{3-i} \leq L^{t-1}_{i}\), no overlaps are created. Once again the only moved jobs are those that make room for the new job, so the migration factor remains 1. For every job which was moved completely, all its preemptions are removed, so if it previously consisted of k parts then k−1 preemptions are removed but at most k new preemptions are created in the new job. As for a job which was moved partially, if k of its parts were moved (the last one may have been moved only partially), k−2 preemptions were removed (if k=1 then the number of preemptions for this job may be increased by 1), but k new preemptions were created for the new job. The number of preemptions increased by at most (t−1)+1=t.

Case 6. Both the new optimal makespan and the old optimal makespan are determined by the average load, that is, \({\textsc{opt}_{t}=\frac{P_{t}}{2}}\) and \({\textsc{opt}_{t-1}=\frac{P_{t-1}}{2}}\)

If the new job has the processing set {1,2}, then the new job is assigned during the time slot \([P_{t-1}/2,P_{t}/2]\) on machine 1 and during \([P_{t-1}/2-p_{t}/2,P_{t-1}/2]\) on machine 2. Note that \(\frac{P_{t}}{2} \geq p_{t}\), or equivalently, \(P_{t-1}+p_{t} \geq 2p_{t}\), so \(P_{t-1}/2-p_{t}/2 \geq 0\). The jobs previously assigned during \([P_{t-1}/2-p_{t}/2,P_{t-1}/2]\) on machine 2 are moved to the time slot \([P_{t-1}/2,P_{t-1}/2+p_{t}/2=P_{t}/2]\). The new job runs during this time slot on the other machine so no overlaps are created. At most two new preemptions are introduced.

If the new job has a processing set {i}, then it is assigned during \([P_{t-1}/2,P_{t}/2]\) on machine i. The total size of jobs of the processing set {1,2} previously assigned to machine i is \(\frac{P_{t-1}}{2}-P_{t-1}^{i}\). We have \(P_{t-1}/2+p_{t}/2=P_{t}/2 \geq P_{t}^{i}=p_{t}+P_{t-1}^{i}\), so \(\frac{P_{t-1}}{2}-P_{t-1}^{i} \geq p_{t}/2\). Parts of these jobs of total size \(p_{t}/2\) are moved to the other machine as in Case 5, and job t is assigned instead of these parts. As in that case, the number of preemptions may increase by at most t. The moved jobs are scheduled in parallel to the new job so no overlaps are created.

Case 7. The old optimal makespan was either determined by one job or by the average load, and the new makespan is determined by the jobs of the processing set {i}, that is, \({\textsc {opt}_{t}={P_{t}^{i}}}\) and \({\textsc{opt}_{t-1}=\max\{ p_{t-1}^{b},\frac{P_{t-1}}{2} \} }\)

In this case \(M_{t}=\{i\}\). All jobs with the processing set {1,2} are moved to machine 3−i to the time slot \([L^{t-1}_{3-i},L^{t}_{3-i}]\), to make room for the new job. These jobs are assigned non-preemptively. The new job is assigned to machine i into the gaps and during the time slot \([L^{t-1}_{i},L^{t}_{i}]\). There will be no overlaps since every job will run only on one of the machines. Similarly to other cases, the number of preemptions may increase by at most t and the migration factor is 1.

Case 8. The old optimal makespan was determined by one job, and the new makespan is determined by the average load, that is, \({\textsc{opt}_{t}=\frac{P_{t}}{2}}\) and \({\textsc{opt}_{t-1}=p_{t-1}^{b}}\)

Recall that by the invariant, the time slot \([L^{t-1}_{2},L^{t-1}_{1}]\) on machine 1 is fully occupied by a part of a job of size \(p_{t-1}^{b}\). If the new job has machine 2 in its processing set, a part of size \(L^{t}_{2}-L^{t-1}_{2}\) is assigned to the time slot \([L^{t-1}_{2},L^{t}_{2}=P_{t}/2]\) on machine 2. Note that the length of this slot satisfies \(L^{t}_{2}-L^{t-1}_{2}=P_{t}/2-(P_{t-1}-p_{t-1}^{b})=P_{t}/2+p_{t-1}^{b}-P_{t-1}\) and \((p_{t}+P_{t-1})/2=P_{t}/2\geq p_{t}^{b} \geq p_{t-1}^{b}\), so \(\frac {P_{t}}{2}-L^{t-1}_{2}=P_{t}/2+p_{t-1}^{b}- P_{t-1} \leq p_{t}\).

To assign the remainder of the new job, consider the time slot \([0,p_{t}-(P_{t}/2-L^{t-1}_{2})]\). First, we note that \(0 \leq p_{t}-(P_{t}/2-L^{t-1}_{2}) \leq L^{t-1}_{2}\), which holds since \(\textsc{opt}_{t}=\frac{P_{t}}{2} \geq p_{t}^{b} \geq p_{t}\) if \(M_{t}=\{1,2\}\), and \(\textsc{opt}_{t}=\frac{P_{t}}{2} \geq P_{t}^{2} \geq p_{t}\) if \(M_{t}=\{2\}\). If the processing set of the new job is {1,2}, since before the arrival of the new job a job of size \(p_{t-1}^{b}\) was running at all times, we remove such parts running on some machine during this time slot and move them to the time slot \([L^{t-1}_{1},L^{t}_{1}]\) on the first machine, and assign parts of the new job instead of them. No overlap is created since only the new job and a job of size \(p_{t-1}^{b}\) were moved, and neither of these two jobs can create an overlap (they are assigned to run in parallel during \([L^{t-1}_{1},L^{t}_{1}]\)). If the moved job had k parts during the time slot \([0,p_{t}-(P_{t}/2-L^{t-1}_{2})]\) (alternating between the two machines), then at least k−2 preemptions are removed and at most k new preemptions are created for the new job, which results in at most two new preemptions.

If the processing set of the new job is {2}, then since \(P_{t}/2 \geq P_{t}^{2}\), the total size of jobs of processing set {1,2} which were assigned to machine 2 is at least \(L^{t-1}_{2}-P_{t-1}^{2}=L^{t-1}_{2}-P_{t}^{2}+p_{t} \geq p_{t}-(P_{t}/2-L^{t-1}_{2})\), since \(P_{t}^{2} \leq\frac{P_{t}}{2}\). Parts of such jobs are moved to the time slot \([L^{t-1}_{1},L^{t}_{1}]\) on the first machine and we assign job t into these time slots on machine 2. As in other cases, this may create at most t new preemptions.

Finally, we consider the case \(M_{t}=\{1\}\). We have \(L^{t-1}_{2}+p_{t}=P_{t-1}-p_{t-1}^{b}+p_{t} = P_{t}/2+(P_{t}/2-p_{t}^{b}) \geq P_{t}/2\) (since \(p_{t}^{b}=p_{t-1}^{b}\)). We assign a part of the new job into the time slot \([L^{t-1}_{2},L^{t}_{1}=P_{t}/2]\) on the first machine. Given the invariant, the part of the job of size \(p_{t-1}^{b}\) which was assigned to the time slot \([L^{t-1}_{2},L^{t-1}_{1}]\) on machine 1 is moved to machine 2 (to the same time slot). Note that \(L^{t}_{2}=P_{t}/2 \geq p_{t}^{b} = p_{t-1}^{b}=L^{t-1}_{1}\). Since \(P_{t}/2 \geq P_{t}^{1}\), the first machine must still contain jobs with the processing set {1,2} of a total size which is at least the remainder of the new job. Parts of such jobs are moved into the time slot \([L^{t-1}_{1},L^{t}_{2}]\) on the second machine and are replaced with parts of the new job. No overlap is created since moved jobs are only assigned in parallel to the new job. The number of additional preemptions is at most t (we give preference to moving parts of the job of size \(p_{t-1}^{b}\) which was already moved, and we assign such parts just after the previously moved part, so either the entire part of this job is moved to the second machine without introducing new preemptions for this job, or no other jobs are moved).

We have proved the following theorem.

Theorem 19

There exists a polynomial time algorithm of migration factor 1 which maintains a strongly optimal schedule on two machines in the restricted assignment model where the number of preemptions is polynomial in the number of jobs.

Cite this article

Epstein, L., Levin, A. Robust Algorithms for Preemptive Scheduling. Algorithmica 69, 26–57 (2014). https://doi.org/10.1007/s00453-012-9718-3
