
A cutting and scheduling problem in float glass manufacturing

Journal of Scheduling

Abstract

This paper considers a cutting and scheduling problem of minimizing scrap motivated by float glass manufacturing and introduces the float glass scheduling problem. We relate it to classical problems in the scheduling literature such as no-wait hybrid flow shops and cyclic scheduling. We show that the problem is NP-hard, and identify when each of the problem’s components are polynomially solvable and when they induce hardness. In addition, we propose a simple heuristic algorithm, provide its worst-case performance bounds, and demonstrate that the bounds are tight. When the number of machines is two, the worst-case performance is 5/3.





Author information

Correspondence to Byungsoo Na.

Appendix

1.1 Relation between a mixed covey and minimal coveys

In this section of the appendix, we provide a rule for dividing a mixed covey into minimal coveys, and prove that the process does not increase the amount of scrap in the solution.

Recall that a covey in which each job appears at most once is referred to as a minimal covey, and a covey in which at least one job appears more than once is called a mixed covey. For example, the covey \(\{a, x, y, b, z, y, z, x, c\}\) is a mixed covey because jobs \(x, y,\) and \(z\) appear more than once in the covey. We refer to such jobs that appear more than once in a covey as the covey’s duplicating jobs.

By the machine dedication restriction, all instances of a duplicating job in a mixed covey must be assigned to the same offloading machine. Therefore, within a mixed covey, the time between any two consecutive cuts of the same job (including wrapping around from the end of the covey to the start) must be at least as long as the cycle time of the offloading machine.

It is more convenient to analyze FGSP with minimal coveys than with mixed coveys, but it is important to ensure that restricting the solution space to minimal coveys does not increase the amount of scrap required. The following algorithm for creating minimal coveys from a mixed covey guarantees that scrap will not increase.

Algorithm: Mixed-to-Minimal Coveys

Step 0: Concatenate the \(K\) runs of the mixed covey into a single job sequence.

Step 1: Scan the sequence from the start, collecting jobs into a covey until the next job would duplicate a job already collected; output the collected jobs as a minimal covey and restart the scan from the duplicate job. Repeat until the sequence is exhausted.

Step 2: Group identical minimal coveys, recording the number of runs of each.

For example, consider the covey \(\{a, x, y, x, b, y\}\) that is to be run \(K\) times. Step 0 creates the sequence \(\{a, x, y, x, b, y, a, x, y, x, b, y, a, x, y, x, b, y, \ldots , a, x, y, x, b, y\}\). The first iteration of Step 1 finds minimal covey \(\{a, x, y\}\) before a duplicate job (\(x\)) is found. The remaining sequence is \(\{x, b, y, a, x, y, x, b, y,\) \(a, x, y, x, b, y, \ldots , a, x, y, x, b, y\}\). The second iteration of Step 1 finds minimal covey \(\{x, b, y, a\}\) before duplicate job \(x\) is found, and the remaining sequence is \(\{ x, y, x, b, y, a, x, y, x,\) \( b, y, \ldots , a, x, y, x, b, y\}\). Iterations of Step 1 continue to alternate between finding minimal covey \(\{x, y\}\) and minimal covey \(\{x, b, y, a\}\), until the last iteration when only \(\{x, b, y\}\) remains in the sequence.

So, at the end of Step 1, the set of minimal coveys, each to be run once, is: \(\{a, x, y\}, \{x, b, y, a\}, \{x, y\}, \{x, b, y, a\}, \{x, y\}, \ldots , \{x, b, y, a\}, \{x, y\}, \{x, b, y\}\).

In Step 2, we group similar coveys to get the final minimal-covey solution:

  • \(\{a, x, y\}_{(1\ \mathrm{run})}\)

  • \(\{x, b, y, a\}_{(K-1\ \mathrm{runs})}\)

  • \(\{x, y\}_{(K-1\ \mathrm{runs})}\)

  • \(\{x, b, y\}_{(1\ \mathrm{run})}\).
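
Treating a covey simply as a job sequence, the scan in Step 1 can be sketched in Python. This is a minimal illustration of the algorithm as described above, not the authors' implementation; the function name `mixed_to_minimal` is ours.

```python
from collections import Counter

def mixed_to_minimal(covey, K):
    """Split K runs of a mixed covey into minimal coveys.

    A minimal covey contains each job at most once; the scan closes the
    current covey as soon as the next job would duplicate one already in it.
    """
    sequence = covey * K          # Step 0: concatenate the K runs
    coveys, current = [], []
    for job in sequence:          # Step 1: left-to-right scan
        if job in current:        # duplicate found: close the current covey
            coveys.append(current)
            current = [job]
        else:
            current.append(job)
    if current:
        coveys.append(current)
    # Step 2: group identical minimal coveys and count their runs
    runs = Counter(tuple(c) for c in coveys)
    return coveys, runs

coveys, runs = mixed_to_minimal(["a", "x", "y", "x", "b", "y"], K=4)
# Produces the first singleton covey {a, x, y}, then {x, b, y, a} and
# {x, y} repeated K-1 times each, ending with the singleton {x, b, y}.
```

Running it on the worked example above reproduces the covey counts listed in the bullets.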

A covey whose number of rotations (runs) is one is referred to as a singleton covey. In the above example, \(\{a, x, y\}\) is the first singleton covey and \(\{x, b, y\}\) is the last singleton covey.

Proposition 10

Excluding transient scrap between coveys, and excluding the first and last singleton coveys if applicable, the amount of scrap incurred by the minimal-covey solution created by Algorithm Mixed-to-Minimal Coveys is no greater than the scrap incurred by the corresponding mixed-covey solution.

Proof

Let \(T\) be the offloading machine cycle time. Consider any covey \(C\) that is not a first or last singleton covey in the solution created by the Algorithm Mixed-to-Minimal Coveys. Denote the jobs of a run of this covey as \(j_1,\ldots ,j_{|C|}\). Let \(j^{\prime }\) be the job immediately following these jobs of the run of \(C\) in the original mixed covey.

By Step 1 of the Algorithm Mixed-to-Minimal Coveys, \(j^{\prime }\) must be a duplicate of a job in \(C\); otherwise, the algorithm would have included \(j^{\prime }\) in \(C\). Without loss of generality, suppose \(j^{\prime }\) is a duplicate of the \(k\)th job in \(C\) (i.e., job \(j_k\)). In the original mixed covey solution, \(j^{\prime }\) and the jobs of \(C\) must have appeared in the order \(j_1,\ldots , j_{|C|}, j^{\prime }\). Therefore, between duplicate jobs \(j_k\) and \(j^{\prime }\), the mixed covey must have incurred at least \(\max \{0, \ T - (t_k + \cdots + t_{|C|} ) \} = S_\mathrm{{mixed}}\) scrap.

On the other hand, the minimal covey \(C\) has total cutting time \(t_1 + \cdots + t_{|C|}\), so \(C\) incurs scrap equal to \(S_\mathrm{{minimal}} = \max \{0, \ T - (t_1 + \cdots + t_{|C|}) \}\). Since \(k \ge 1\), it must be that \(S_\mathrm{{minimal}} \le S_\mathrm{{mixed}}\).

So, the scrap incurred by a run of minimal covey \(C\) is no more than the scrap incurred between the same set of jobs in the original mixed covey. Since the minimal coveys exactly partition the jobs of the original mixed covey (other than the first and last transient singletons), and the jobs considered when calculating each \(S_\mathrm{{mixed}}\) are a subset of the jobs in the corresponding minimal covey, it must therefore be true that the total scrap incurred by the repeating minimal coveys is no more than the total scrap incurred between those same jobs in the original mixed covey. \(\square \)
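
Numerically, the comparison in the proof reduces to the two scrap quantities. The following sketch (with a hypothetical helper `scrap` and made-up cutting times) checks that dropping the prefix jobs \(j_1,\ldots,j_{k-1}\) from the sum can only increase the scrap bound:

```python
def scrap(T, cut_times):
    """Scrap forced by an offloading cycle of length T over the given cuts."""
    return max(0.0, T - sum(cut_times))

# Cutting times t_1, ..., t_|C| of a covey C, and cycle time T (made-up values)
t = [3.0, 2.0, 4.0, 1.0]
T = 12.0
for k in range(1, len(t) + 1):       # k = position of the duplicated job
    s_mixed = scrap(T, t[k - 1:])    # only jobs j_k, ..., j_|C| count
    s_minimal = scrap(T, t)          # the whole minimal covey counts
    assert s_minimal <= s_mixed      # Proposition 10's inequality
```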

1.2 Match largest and smallest jobs

Proposition 11

The algorithm match largest and smallest jobs produces an optimal solution in the time model when the number of machines is two.

Proof

We prove the proposition for an even number \(2k\) of jobs; for an odd number of jobs, we can reduce to the even case by adding a dummy job with \(t=0\). We first prove that an optimal solution exists where every covey contains two jobs, and then show that the algorithm produces a solution that is at least as good as any other solution with two jobs in each covey, and is thus optimal overall.

To prove that an optimal solution exists where each covey has exactly two jobs, consider an optimal solution \(S_1\) with a covey \(C_p\) that consists of one job \(p\) with processing time \(t_p\). Since the number of jobs is even, there exists another covey \(C_q \in S_1\) that also consists of one job \(q\) with processing time \(t_q\). We can easily construct another solution \(S_2\) that is identical to \(S_1\) except that coveys \(C_p\) and \(C_q\) are replaced by a single covey \(C_r\) containing jobs \(p\) and \(q\). The completion time of \(S_2\) is no greater than that of \(S_1\):

$$\begin{aligned} C_{\max }(S_1) - C_{\max }(S_2)&= \big \{\max \{T,\ t_p\} + \max \{T, \ t_q \} \big \}\\&- \max \{T, \ t_p + t_q\} \ge 0. \end{aligned}$$

Without loss of generality, assume that the jobs \(j_1, \ldots , j_{2k}\) have stage 1 processing times \(t_1 \le t_2 \le \cdots \le t_{2k}\). Then, applying the algorithm yields a solution \(S^*\) with the coveys \( \{j_1, j_{2k}\}, \{j_2, j_{2k-1} \}, \ldots , \{j_k, j_{k+1} \}\): each covey consists of two jobs, and jobs \(j_i\) and \(j_{2k-i+1}\) are matched.
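
As an illustration (ours, not from the paper), the matching and its completion time \(\sum_C \max\{T, \sum_{j \in C} t_j\}\) can be written out and compared against brute-force enumeration of all pairings on a small instance:

```python
def cmax(pairs, T):
    # Each two-job covey occupies max(T, sum of its cutting times)
    return sum(max(T, a + b) for a, b in pairs)

def match_largest_smallest(times, T):
    ts = sorted(times)
    if len(ts) % 2:                # odd number of jobs: add a dummy with t = 0
        ts.insert(0, 0)
    pairs = [(ts[i], ts[-1 - i]) for i in range(len(ts) // 2)]
    return pairs, cmax(pairs, T)

def all_pairings(items):
    # Enumerate every way to split items into unordered pairs
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for tail in all_pairings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

times, T = [1, 2, 3, 5, 8, 9], 7   # made-up instance
pairs, value = match_largest_smallest(times, T)
best = min(cmax(p, T) for p in all_pairings(times))
assert value == best               # the matching is optimal on this instance
```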

Consider a solution \(S_2 \ne S^*\) with two jobs in each covey, such that the completion time of \(S_2\) is at least as small as that of any other solution with two jobs per covey.

We can transform \(S_2\) into \(S^*\) using the following \(k\)-step exchange procedure, starting from \(S_2\). At each step \(i\), for \(1 \le i \le k\), if jobs \(j_i\) and \(j_{2k-i+1}\) are in the same covey, we leave the solution unchanged. Otherwise, there must be jobs \(j_{u_i}\) and \(j_{v_i}\) such that the solution contains coveys \(\{j_i, j_{u_i}\}\) and \(\{j_{v_i}, j_{2k-i+1}\}\); in this case, we replace these two coveys with coveys \(\{j_i, j_{2k-i+1}\}\) and \(\{j_{u_i}, j_{v_i}\}\). After \(k\) steps, we are left with solution \(S^*\). Note that for each step \(i\) where a modification is made, \(i < u_i, v_i < 2k-i+1\), since the procedure guarantees that all jobs with index less than \(i\) or greater than \(2k-i+1\) are already matched as in \(S^*\). Thus, \(t_i < t_{u_i}, t_{v_i} < t_{2k-i+1}\).

Below, we prove that no step of the exchange procedure will increase the completion time. Therefore, \(C_{\max }(S^*) \le C_{\max }(S_2)\), so \(S^*\) must be an optimal solution.

Claim

No step of the exchange procedure increases completion time.

Proof of Claim

If no modification is made, the completion time does not change. So, consider a step where coveys \(\{j_i, j_{u_i}\}\) and \(\{j_{v_i}, j_{2k-i+1}\}\) are replaced by coveys \(\{j_i, j_{2k-i+1}\}\) and \(\{j_{u_i}, j_{v_i}\}\). We give the proof for the case \(t_i+t_{2k-i+1} \le t_{u_i}+t_{v_i}\); the case \(t_i+t_{2k-i+1} > t_{u_i}+t_{v_i}\) is similar.

  • Case 1: \(t_{u_i} \le t_{v_i}\). In the following subcases, we prove that the completion time of the new coveys \(\{j_i, j_{2k-i+1}\}\) and \(\{j_{u_i}, j_{v_i}\}\) is no greater than that of the original coveys \(\{j_i, j_{u_i}\}\) and \(\{j_{v_i}, j_{2k-i+1}\}\); i.e., \(\max \{T, \ t_i+t_{u_i}\} + \max \{T, \ t_{v_i} +t_{2k-i+1}\} \ge \max \{T, \ t_i+t_{2k-i+1}\} + \max \{T, \ t_{u_i} +t_{v_i}\}\).

  • Subcase 1.1: \( t_{v_i}+t_{2k-i+1} \le T \)

    $$\begin{aligned}&\max \{T, t_i+t_{u_i}\} + \max \{T, t_{v_i}+ t_{2k-i+1}\} = T + T \\&\quad = \max \{T, t_i+t_{2k-i+1}\}+ \max \{T, t_{u_i}+t_{v_i}\} \end{aligned}$$
  • Subcase 1.2: \( t_i + t_{2k-i+1} < T \le t_{v_i} + t_{2k-i+1}\)

    $$\begin{aligned}&\max \{T, t_i+t_{u_i}\} + \max \{T, t_{v_i}+t_{2k-i+1}\}\\&\quad = T + \max \{T, t_{v_i}+t_{2k-i+1}\} \\&\quad \ge T + \max \{T, t_{u_i}+t_{v_i}\} \\&\qquad (\because \max \{T, t_{v_i}+t_{2k-i+1}\}\\&\quad \ge \max \{T, t_{v_i}+t_{u_i}\}) \\&\quad = \max \{T, t_i+t_{2k-i+1}\}+ \max \{T, t_{u_i}+t_{v_i}\} \end{aligned}$$
  • Subcase 1.3: \( T \le t_i + t_{2k-i+1}\)

    $$\begin{aligned}&\max \{T, t_i+t_{u_i}\} + \max \{T, t_{v_i}+t_{2k-i+1}\}\\&\quad = \max \{T, t_i+t_{u_i}\}+ t_{v_i}+t_{2k-i+1} \\&\quad \ge (t_i+t_{u_i})+ (t_{v_i}+ t_{2k-i+1}) \\&\quad = \max \{T, t_i+t_{2k-i+1}\}+ \max \{T, t_{u_i}+t_{v_i}\} \end{aligned}$$
  • Case 2: \(t_{u_i} > t_{v_i}\)

Case 2 can be proved similarly to Case 1. \(\square \)

1.3 Trivial solution for FGSP

When the number of machines is two, FGSP has a trivial optimal solution in the following case.

Proposition 12

When the number of machines is two, if the number of units of one job (the “long job”) is no less than the sum of the number of units of all the other jobs, an optimal schedule of FGSP is for every covey to include the long job. That is, all other jobs are processed on one machine while the long job is simultaneously processed on the other machine. This characterizes all optimal solutions if no job’s unit processing time is \(T\) or greater.

Proof

(Proof by contradiction.) Assume that in an optimal schedule there exists a covey that does not include the long job, which we denote \(J_0\). There are two cases: (i) there exists a one-job covey containing a job \(J_1 \ne J_0\), and (ii) there exists a two-job covey \(\{J_1,J_2\}\) with \(J_1 \ne J_0\) and \(J_2 \ne J_0\). (A solution might satisfy both cases.)

Figure 7a-1 shows case (i), and Fig. 7a-2 shows the intuition for creating an improved solution \(S^*_{1}\) by shifting all the jobs on the second machine to eliminate the one-job covey that does not include \(J_0\). As shown below, \(C_{\max }(S_1) \ge C_{\max }(S^*_{1})\), with strict improvement unless jobs \(J_0\) and \(J_1\) each require at least \(T\) processing time by themselves.

$$\begin{aligned} C_{\max }(S_1)&= N_1 \max (t_0,T) + N_3 \max (t_1,T) + C_{\max }(N_2\ \mathrm{area}) \\&= (N_1-N_3) \max (t_0,T)+N_3 \max (t_0,T)\\&\quad + N_3 \max (t_1, T) + C_{\max }(N_2\ \mathrm{area}) \\&\ge (N_1-N_3) \max (t_0,T) + N_3 \max (t_0+t_1,\ T)\\&\quad + C_{\max }(N_2\ \mathrm{area}) \quad \big (\because \max (A,T) + \max (B,T)\\&\qquad \ge \max (A+B,\ T) \big )\\&= C_{\max } (S_1^*) \end{aligned}$$

Figures 7b-1 and 7b-2 show the analogous construction for case (ii).

$$\begin{aligned} C_{\max }(S_2)&= N_1 \max (t_0,T) +N_3 \max (t_1+t_2,\ T)\\&\quad +C_{\max }(N_2\ \mathrm{area}) \\&= (N_1-2N_3) \max (t_0,T)+2N_3 \max (t_0,T)\\&\quad + N_3 \max (t_1+t_2, \ T) + C_{\max }(N_2\ \mathrm{area}) \\&\ge (N_1-2N_3) \max (t_0,T)\\&\quad + N_3 \max (t_0+t_1,\ T) + N_3 \max (t_0+t_2,\ T)\\&\quad + C_{\max }(N_2\ \mathrm{area}) \\&\qquad \big (\text{by Lemma 13: } 2 \max (t_0,T) + \max (t_1+t_2,\ T)\\&\qquad \ge \max (t_1+t_0,\ T)+ \max (t_2+t_0,\ T) \big )\\&= C_{\max } (S_2^*) \end{aligned}$$

Therefore, in an optimal schedule, all coveys should have the long job, \(J_0\). \(\square \)

Fig. 7 Trivial optimal schedule of FGSP when the number of machines is two. (a-1) Schedule \(S_1\). (a-2) Schedule \(S_{1}^{*}\). (b-1) Schedule \(S_2\). (b-2) Schedule \(S_{2}^{*}\)

Lemma 13

When \(t_0, t_1,t_2,\) and \(T \ge 0\), we have

$$\begin{aligned}&\max (t_1+t_0,\ T)+ \max (t_2+t_0,\ T) \le 2\max (t_0,T) \\&\quad + \max (t_1+t_2,\ T). \end{aligned}$$
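
Before the case analysis, the inequality can be spot-checked exhaustively on a small nonnegative grid (a sanity check we added, not a substitute for the proof below):

```python
import itertools

def lhs(t0, t1, t2, T):
    # Left-hand side of Lemma 13
    return max(t1 + t0, T) + max(t2 + t0, T)

def rhs(t0, t1, t2, T):
    # Right-hand side of Lemma 13
    return 2 * max(t0, T) + max(t1 + t2, T)

# Check every combination of small nonnegative integer values
for t0, t1, t2, T in itertools.product(range(8), repeat=4):
    assert lhs(t0, t1, t2, T) <= rhs(t0, t1, t2, T)
```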

Proof

Without loss of generality, assume that \(t_1 \ge t_2\). Then, we can consider three cases.

Case 1: \(t_0 \ge t_1 \ge t_2\)

$$\begin{aligned}&\max (t_1+t_0,\ T)+ \max (t_2+t_0,\ T) \\&\quad = \max (t_1+t_0-t_1,\ T-t_1)+t_1\\&\qquad + \max \big (t_2+t_0 - (t_0-t_1), \ T-(t_0-t_1) \big ) + (t_0-t_1) \\&\qquad \big (\because t_1\ge 0, \ (t_0-t_1) \ge 0 \ \big ) \\&\quad \le \max (t_0,\ T)+t_0 + \max (t_1+t_2, \ T ) \\&\qquad \big (\because \max (t_0,\ T-t_1) \le \max (t_0,\ T),\\&\qquad \max \big (t_1+t_2,\ T-(t_0-t_1) \big ) \le \max (t_1+t_2,\ T) \ \big )\\&\quad \le 2 \max (t_0,\ T)+ \max (t_1+t_2, \ T ) \quad \big (\because t_0 \!\le \! \max (t_0,\ T) \ \big ) \end{aligned}$$

Case 2: \( t_1 \ge t_0 \ge t_2\)

$$\begin{aligned}&\max (t_1+t_0,\ T)+ \max (t_2+t_0,\ T) \\&\quad = \max \big (t_1+t_0 - (t_0-t_2), \ T-(t_0-t_2) \big ) + (t_0-t_2)\\&\qquad +\max (t_2+t_0-t_2,\ T-t_2)+t_2 \\&\qquad \big (\because (t_0-t_2) \ge 0, \ t_2\ge 0 \ \big ) \\&\quad \le \max (t_1+t_2, \ T ) +t_0+ \max (t_0,\ T) \\&\qquad \big (\because \max \big (t_1+t_2,\ T-(t_0-t_2) \big ) \!\le \! \max (t_1+t_2,\ T),\\&\qquad \max (t_0,\ T-t_2) \le \max (t_0,\ T) \ \big )\\&\quad \le \max (t_1\!+\!t_2, \ T )\!+\! 2 \max (t_0,\ T) \quad \big (\because t_0 \!\le \! \max (t_0,\ T) \ \big ) \end{aligned}$$

Case 3: \( t_1 \ge t_2 \ge t_0 \)

$$\begin{aligned}&\max (t_1+t_0,\ T)+ \max (t_2+t_0,\ T) \\&\quad = \max (t_1+t_0 - t_1, \ T-t_1 \big ) + t_1\\&\qquad + \max (t_2+t_0-t_2,\ T-t_2)+t_2 \quad (\because t_1 \ge 0, \ t_2\ge 0 \ ) \\&\quad \le \max (t_0 , \ T ) + \max (t_0,\ T)+ t_1+t_2 \\&\qquad \big (\because \max (t_0,\ T-t_1) \le \max (t_0,\ T),\\&\qquad \max (t_0,\ T-t_2) \le \max (t_0,\ T) \ \big ) \\&\quad \le 2 \max (t_0 , \ T ) + \max (t_1+t_2,\ T) \\&\qquad \big ( \because t_1+t_2 \le \max (t_1+t_2, \ T ) \ \big ) \end{aligned}$$

Therefore, we have \( \max (t_1+t_0,\ T)+ \max (t_2+t_0,\ T) \le 2\max (t_0,T) + \max (t_1+t_2,\ T) \). \(\square \)


About this article

Cite this article

Na, B., Ahmed, S., Nemhauser, G. et al. A cutting and scheduling problem in float glass manufacturing. J Sched 17, 95–107 (2014). https://doi.org/10.1007/s10951-013-0335-z

