
A linear programming-based method for job shop scheduling

Published in: Journal of Scheduling

Abstract

We present a decomposition heuristic for a large class of job shop scheduling problems. This heuristic utilizes information from the linear programming formulation of the associated optimal timing problem to solve subproblems, can be used for any objective function whose associated optimal timing problem can be expressed as a linear program (LP), and is particularly effective for objectives that include a component that is a function of individual operation completion times. Using the proposed heuristic framework, we address job shop scheduling problems with a variety of objectives where intermediate holding costs need to be explicitly considered. In computational testing, we demonstrate the performance of our proposed solution approach.



Author information

Correspondence to Kerem Bülbül.

Appendices

Appendix A: Development of \(\epsilon_{ij}\) and \(\pi_{ij}\)

Recall that our goal is to demonstrate that the increase in the optimal objective value of \((\mathbf {TTJm})(\mathcal {M}^{S})\) due to an additional constraint (17) or (18) can be bounded from below by applying sensitivity analysis to the optimal solution of \((\mathbf {TTJm})(\mathcal {M}^{S})\). This analysis provides the unit earliness and tardiness costs of job j in the subproblem for machine i, denoted by \(\epsilon_{ij}\) and \(\pi_{ij}\), respectively. We assume that the linear program \((\mathbf {TTJm})(\mathcal {M}^{S})\) is solved by the simplex algorithm and that an optimal basic sequence \(\mathcal {B}\) is available at the beginning of the current iteration.

Before we proceed with the analysis, we note that \((\mathbf {TTJm})(\mathcal {M}^{S})\) is an LP in standard form:

$$\min\ \mathbf {c}\mathbf {x}$$
(20)
$$\text{s.t.}\quad \mathbf {A}\mathbf {x}=\mathbf {b}$$
(21)
$$\phantom{\text{s.t.}\quad} \mathbf {x}\geq \mathbf {0}$$
(22)

where A is an (m×n) matrix of the coefficients of the structural constraints, c is a (1×n) row vector of the objective coefficients, and b is an (m×1) column vector of the right hand side coefficients. Given a basic sequence \(\mathcal {B}\) corresponding to the indices of the m basic variables and the associated nonbasic sequence \(\mathcal {N}\), we also define \(\mathbf {A}_{\mathcal {B}}\) as the basis matrix, \(\bar {\mathbf {b}}=\mathbf {A}_{\mathcal {B}}^{-1}\mathbf {b}\) as the basic solution, \(\bar {\mathbf {y}}=\mathbf {c}_{\mathcal {B}}\mathbf {A}_{\mathcal {B}}^{-1}\) as the vector of dual variables, \(\bar {\mathbf {c}}=\mathbf {c}-\bar {\mathbf {y}}\mathbf {A}\) as the vector of reduced costs, and \(\bar {\mathbf {A}}=(\begin{array}{c@{\ }c}\bar {\mathbf {A}}_{\mathcal {B}}&\bar {\mathbf {A}}_{\mathcal {N}} \end{array})=(\begin{array}{c@{\ }c}\mathbf {A}_{\mathcal {B}}^{-1}\mathbf {A}_{\mathcal {B}}&\mathbf {A}_{\mathcal {B}}^{-1}\mathbf {A}_{\mathcal {N}}\end{array})=(\begin{array}{c@{\ }c}\mathbf{I}& \bar {\mathbf {A}}_{\mathcal {N}}\end{array})\). An optimal basic sequence \(\mathcal {B}\) satisfies the conditions:

$$\bar {\mathbf {b}}=\mathbf {A}_{\mathcal {B}}^{-1}\mathbf {b}\geq \mathbf {0}$$
(23)
$$\bar {\mathbf {c}}_{\mathcal {N}}=\mathbf {c}_{\mathcal {N}}-\bar {\mathbf {y}}\mathbf {A}_{\mathcal {N}}\geq \mathbf {0}$$
(24)
$$\bar {\mathbf {c}}_{\mathcal {B}}=\mathbf {c}_{\mathcal {B}}-\bar {\mathbf {y}}\mathbf {A}_{\mathcal {B}}=\mathbf {0}$$
(25)

In our notation, \(\mathbf {Z}_{i.}\) and \(\mathbf {Z}_{.j}\) denote the ith row and the jth column of a matrix Z, respectively.
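The quantities \(\bar {\mathbf {b}}\), \(\bar {\mathbf {y}}\), and \(\bar {\mathbf {c}}\) and the optimality conditions (23)–(24) are easy to check numerically. A minimal sketch for a small illustrative standard-form LP (the data are ours, not an actual timing problem):

```python
import numpy as np

# Small standard-form LP: min c x  s.t.  A x = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])

B = [0, 1]                      # basic sequence (indices of basic variables)
N = [2, 3]                      # nonbasic sequence

A_B_inv = np.linalg.inv(A[:, B])
b_bar = A_B_inv @ b             # basic solution; (23) requires b_bar >= 0
y_bar = c[B] @ A_B_inv          # dual variables
c_bar = c - y_bar @ A           # reduced costs; (24) requires c_bar >= 0

primal_feasible = bool(np.all(b_bar >= -1e-9))
dual_feasible = bool(np.all(c_bar >= -1e-9))
```

For this data the basis {0, 1} satisfies both conditions, and the reduced costs of the basic variables vanish as in (25).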

Now, assume that \((\mathbf {TTJm})(\mathcal {M}^{S})\) is solved by the simplex method, and an optimal basic sequence \(\mathcal {B}\) is available along with the optimal operation completion times \(C_{ij}^{*}\). Then, we either add

$$\begin{array}{r}C_{ij}+s_{ij}= d_{ij} - \delta\\ \noalign {\vspace {3pt}}s_{ij}\geq 0\end{array}\quad\mathrm{or}\quad \begin{array}{r}C_{ij} -s_{ij}= d_{ij} +\delta\\ \noalign {\vspace {3pt}}s_{ij}\geq 0\end{array} $$
(26)

to this model, where \(d_{ij}=C_{ij}^{*}\). In either case, the model is expanded by one more constraint and one more variable, and the current optimal solution does not satisfy (26) because δ is strictly positive and \(s_{ij}\geq 0\) is required. However, we can easily construct a new basic solution to restart the optimization. This basic solution violates primal feasibility (23) while preserving dual feasibility (24), as we show below. Thus, we can continue with the dual simplex method in order to find the next optimal solution. In the remainder of this section, we demonstrate that the first iteration of the dual simplex method can be performed implicitly based on data readily available in the optimal basic solution of \((\mathbf {TTJm})(\mathcal {M}^{S})\). In addition, the dual variable associated with (26) obtained from this iteration provides us with the unit earliness or tardiness cost for job j in the single-machine subproblem of machine i.

The updated problem data after adding (26) and \(s_{ij}\) are denoted by a prime:

$$\mathbf {A}'=\left(\begin{array}{c@{\quad }c}\mathbf {A}&\mathbf {0}\\ \mathbf {A}_{(m+1).}&\pm 1\end{array}\right),\qquad \mathbf {b}'=\left(\begin{array}{c}\mathbf {b}\\ b_{m+1}\end{array}\right),\qquad \mathbf {c}'=(\begin{array}{c@{\ }c}\mathbf {c}& c_{n+1}\end{array}),$$

where the last column in A′ corresponds to the (n+1)st variable \(s_{ij}\) with coefficient +1 or −1 depending on which constraint in (26) is added, \(\mathbf {A}_{(m+1).}\) represents the coefficients of the original variables in the (m+1)st constraint (26), \(b_{m+1}\) is the right hand side of (26), and \(c_{n+1}=0\) is the objective coefficient of \(s_{ij}\). In order to compute a basic solution, we need to associate a basic variable with (26). A natural candidate is the slack/surplus variable \(s_{ij}\), which leads to the new basic sequence \(\mathcal {B}'=\mathcal {B}\cup \{n+1\}\). Then, the basis matrix is constructed as

$$\mathbf {A}'_{\mathcal {B}'}=\left(\begin{array}{c@{\quad }c}\mathbf {A}_{\mathcal {B}}&\mathbf {0}\\ \mathbf {A}_{m+1,\mathcal {B}}&1\end{array}\right)\quad\text{or}\quad \mathbf {A}'_{\mathcal {B}'}=\left(\begin{array}{c@{\quad }c}\mathbf {A}_{\mathcal {B}}&\mathbf {0}\\ \mathbf {A}_{m+1,\mathcal {B}}&-1\end{array}\right)$$
(27)

where the last column corresponds to \(s_{ij}\) and \(\mathbf {A}_{m+1,\mathcal {B}}=(0\ 0\ \ldots\ 1\ \ldots\ 0\ 0)\) is a (1×m) row vector of the coefficients of the basic variables \(\mathcal {B}\) in (26). Clearly, the coefficient 1 in \(\mathbf {A}_{m+1,\mathcal {B}}\) corresponds to \(C_{ij}\) because the completion time variables are always positive and basic. Assuming that \(C_{ij}\) is the jth basic variable, the inverse of the basis is obtained as

$$\begin{array}{@{}rcl}(\mathbf {A}'_{\mathcal {B}'})^{-1} &=&\left( \begin{array}{c@{\quad }c}\mathbf {A}_{\mathcal {B}}^{-1}&\mathbf {0}\\ \noalign {\vspace {3pt}} -\mathbf {A}_{m+1,\mathcal {B}}\mathbf {A}_{\mathcal {B}}^{-1}&1\\\end{array}\right) \\\noalign {\vspace {6pt}}&=&\left(\begin{array}{c@{\quad }c}\mathbf {A}_{\mathcal {B}}^{-1}&\mathbf {0}\\ \noalign {\vspace {3pt}} -(\mathbf {A}_{\mathcal {B}}^{-1})_{j.}&1\\\end{array}\right)\quad\text{or}\quad \\\noalign {\vspace {6pt}}(\mathbf {A}'_{\mathcal {B}'})^{-1}&=&\left(\begin{array}{c@{\quad }c}\mathbf {A}_{\mathcal {B}}^{-1}&\mathbf {0}\\ \noalign {\vspace {3pt}} \mathbf {A}_{m+1,\mathcal {B}}\mathbf {A}_{\mathcal {B}}^{-1}&-1\\\end{array}\right) \\\noalign {\vspace {6pt}}&=&\left(\begin{array}{c@{\quad }c}\mathbf {A}_{\mathcal {B}}^{-1}&\mathbf {0}\\ \noalign {\vspace {3pt}} (\mathbf {A}_{\mathcal {B}}^{-1})_{j.}&-1\\\end{array}\right)\end{array} $$
(28)

where \((\mathbf {A}_{\mathcal {B}}^{-1})_{j.}\) is the row of \(\mathbf {A}_{\mathcal {B}}^{-1}\) associated with the basic variable \(C_{ij}\). The resulting basic solution is

$$\bar {\mathbf {b}}'=(\mathbf {A}'_{\mathcal {B}'})^{-1}\mathbf {b}'=\left(\begin{array}{c}\bar {\mathbf {b}}\\ -\delta\end{array}\right)$$
(29)

in both cases, and it violates primal feasibility because the new basic variable \(s_{ij}=-\delta\) is strictly negative. Similarly, we compute

$$\bar {\mathbf {y}}'=\mathbf {c}'_{\mathcal {B}'}(\mathbf {A}'_{\mathcal {B}'})^{-1}=(\begin{array}{c@{\ }c}\bar {\mathbf {y}}&0\end{array})$$
(30)

and

$$\bar {\mathbf {c}}'=\mathbf {c}'-\bar {\mathbf {y}}'\mathbf {A}'=(\begin{array}{c@{\ }c}\bar {\mathbf {c}}&0\end{array})$$
(31)

in both cases. Note that \(\bar {\mathbf {c}}'\geq \mathbf {0}\) since \(\bar {\mathbf {c}}\geq \mathbf {0}\) because \(\mathcal {B}\) is an optimal basic sequence for \((\mathbf {TTJm})(\mathcal {M}^{S})\). Thus, \(\mathcal {B}'\) is primal infeasible and dual feasible, which implies that we can continue with the dual simplex method. The basic variable \(s_{ij}<0\) is the only candidate for the leaving variable. The entering variable t is determined by the following ratio test in the dual simplex method:

$$\frac{\bar {\mathbf {c}}'_t}{\bar {\mathbf {A}}'_{(m+1)t}}=\max_{k|\bar {\mathbf {A}}'_{(m+1)k}<0}\frac{\bar {\mathbf {c}}'_k}{\bar {\mathbf {A}}'_{(m+1)k}}$$
(32)

where \(\bar {\mathbf {A}}'_{(m+1).}=(\mathbf {A}'_{\mathcal {B}'})^{-1}_{(m+1).}\mathbf {A}'\). Plugging in the appropriate expressions for \((\mathbf {A}'_{\mathcal {B}'})^{-1}_{(m+1).}\) from (28) and A′ leads to the following explicit set of formulas for \(\bar {\mathbf {A}}'_{(m+1).}\):

$$\begin{array}{@{}l}\bar {\mathbf {A}}'_{(m+1)k}=\left\{ \begin{array}{l@{\quad }l}-\bar {\mathbf {A}}_{jj}+1=0& \mathrm{for}\ C_{ij}\\ \noalign {\vspace {2pt}}1&\mathrm{for} \ s_{ij}\\ \noalign {\vspace {2pt}}-\bar {\mathbf {A}}_{jk}&\text{o/w}\end{array}\right\}\quad\text{or}\\\noalign {\vspace {8pt}}\bar {\mathbf {A}}'_{(m+1)k}=\left\{\begin{array}{l@{\quad }l}\bar {\mathbf {A}}_{jj}-1=0& \mathrm{for}\ C_{ij}\\ \noalign {\vspace {2pt}}1&\mathrm{for} \ s_{ij}\\ \noalign {\vspace {2pt}}\bar {\mathbf {A}}_{jk}&\text{o/w}\end{array}\right\}.\end{array} $$
(33)

Recall that \(s_{ij}\) is the basic variable associated with row m+1, and hence \(\bar {\mathbf {A}}'_{(m+1)(n+1)}=1\) must hold as above. In addition, all components of \(\bar {\mathbf {A}}'_{(m+1).}\) corresponding to the remaining basic variables must be zero, including those corresponding to the completion time variables. Inserting (31) and (33) into (32), the ratio test takes the form:

$$\everymath{\displaystyle }\begin{array}{@{}l}\frac{\bar {\mathbf {c}}'_t}{\bar {\mathbf {A}}'_{(m+1)t}}=\max_{k\neq j|\bar {\mathbf {A}}_{jk}>0}\frac{\bar {\mathbf {c}}_k}{-\bar {\mathbf {A}}_{jk}}=\frac{\bar {\mathbf {c}}_t}{-\bar {\mathbf {A}}_{jt}}\leq 0\quad\text{or}\\\noalign {\vspace {6pt}}\frac{\bar {\mathbf {c}}'_t}{\bar {\mathbf {A}}'_{(m+1)t}}=\max_{k|\bar {\mathbf {A}}_{jk}<0}\frac{\bar {\mathbf {c}}_k}{\bar {\mathbf {A}}_{jk}}=\frac{\bar {\mathbf {c}}_t}{\bar {\mathbf {A}}_{jt}}\leq 0.\end{array} $$
(34)

The crucial observation is that all quantities required to perform the ratio test (34) are computed based on the current optimal basic sequence \(\mathcal {B}\) of \((\mathbf {TTJm})(\mathcal {M}^{S})\).
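The first implicit step can also be checked numerically. The sketch below builds the augmented basis inverse as in (28) and verifies that the new basic variable takes the value −δ, as in (29); the basis inverse and right hand side are hypothetical toy data:

```python
import numpy as np

# Restart after adding the constraint C_ij + s_ij = d_ij - delta.
A_B_inv = np.array([[1.5, -0.5],
                    [-0.5, 0.5]])        # hypothetical optimal basis inverse
b = np.array([4.0, 6.0])
j = 0                                    # C_ij is the j-th basic variable
delta = 0.5

b_bar = A_B_inv @ b                      # current basic solution
d_ij = b_bar[j]                          # C*_ij = d_ij

# First case of (28): append the row (-(A_B^{-1})_{j.}, 1)
A_Bp_inv = np.block([
    [A_B_inv,              np.zeros((2, 1))],
    [-A_B_inv[j:j + 1, :], np.array([[1.0]])],
])
b_prime = np.append(b, d_ij - delta)
b_bar_prime = A_Bp_inv @ b_prime         # new basic solution (29): (b_bar, -delta)
```

As (29) predicts, the original basic values are unchanged and the appended component equals −δ.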

Replacing s ij by variable t in the basic sequence \(\mathcal {B}'\) leads to the new basic sequence \(\mathcal {B}''=\mathcal {B}\cup \{t\}\). Next, we can carry out the pivoting operations in the simplex method. Here, we only need the updated dual variables \(\bar {\mathbf {y}}''\) and the new objective function value. The dual variables are determined using the formula

$$\bar {\mathbf {y}}''=\bar {\mathbf {y}}'+\frac{\bar {\mathbf {c}}'_t}{\bar {\mathbf {A}}'_{(m+1)t}}(\mathbf {A}'_{\mathcal {B}'})^{-1}_{(m+1).}$$
(35)

Plugging in the relevant expressions, we obtain

$$\bar {\mathbf {y}}''=\biggl(\begin{array}{c@{\ }c}\bar {\mathbf {y}}-\frac{\bar {\mathbf {c}}_t}{-\bar {\mathbf {A}}_{jt}}(\mathbf {A}_{\mathcal {B}}^{-1})_{j.}&\frac{\bar {\mathbf {c}}_t}{-\bar {\mathbf {A}}_{jt}}\end{array}\biggr)$$
(36)
$$\text{or}\quad \bar {\mathbf {y}}''=\biggl(\begin{array}{c@{\ }c}\bar {\mathbf {y}}+\frac{\bar {\mathbf {c}}_t}{\bar {\mathbf {A}}_{jt}}(\mathbf {A}_{\mathcal {B}}^{-1})_{j.}&-\frac{\bar {\mathbf {c}}_t}{\bar {\mathbf {A}}_{jt}}\end{array}\biggr)$$
(37)

Based on the conditions of the ratio test (34), we observe that the dual variable \(\bar {\mathbf {y}}''_{m+1}\) associated with the new constraint (26) is nonpositive for \(C_{ij}+s_{ij}=d_{ij}-\delta\) (i.e., \(C_{ij}\leq d_{ij}-\delta\)) and nonnegative for \(C_{ij}-s_{ij}=d_{ij}+\delta\) (i.e., \(C_{ij}\geq d_{ij}+\delta\)), as expected. We also note that information already present in the optimal basic solution of \((\mathbf {TTJm})(\mathcal {M}^{S})\) is sufficient to compute \(\bar {\mathbf {y}}''\). Then, the objective value associated with \(\mathcal {B}''\) is calculated by

$$z_{\mathcal {B}''}=\bar {\mathbf {y}}''\mathbf {b}'=z_{(\mathbf {TTJm})}(\mathcal {M}^{S})-\bar {\mathbf {y}}''_{m+1}\delta$$
(38)

or

$$z_{\mathcal {B}''}=\bar {\mathbf {y}}''\mathbf {b}'=z_{(\mathbf {TTJm})}(\mathcal {M}^{S})+\bar {\mathbf {y}}''_{m+1}\delta$$
(39)

where \((\mathbf {A}_{\mathcal {B}}^{-1})_{j.}\mathbf {b}=C_{ij}^{*}=d_{ij}\) since \((\mathbf {A}_{\mathcal {B}}^{-1})_{j.}\mathbf {b}\) provides the optimal value of the jth basic variable in \((\mathbf {TTJm})(\mathcal {M}^{S})\).

Finally, we prove Proposition 3.1, our main result in this section, which allows us to specify the appropriate E/T cost parameters in the single-machine subproblems. Consider two successive iterations k and k+1 of SB-TPD. In iteration k, the optimal timing problem \((\mathbf {TTJm})(\mathcal {M}^{S})\) yields an optimal objective value \(z_{(\mathbf {TTJm})}(\mathcal {M}^{S})\) and the optimal completion times \(C_{ij}^{*}\) for all j and for all \(i=1,\ldots,m_{j}\). Next, machine \(i^{b}\) is selected to be scheduled, and the optimal timing problem \((\mathbf {TTJm})(\mathcal {M}^{S}\cup \{i^{b}\})\) is solved, providing a new set of optimal completion times \(C'_{ij}\) for all j and for all \(i=1,\ldots,m_{j}\). Then, Proposition 3.1, which we repeat below for completeness, establishes a lower bound on the increase in the objective value of the optimal timing problem from iteration k to k+1.

Proposition 3.1

Consider the optimal timing problems \((\mathbf {TTJm})(\mathcal {M}^{S})\) and \((\mathbf {TTJm})(\mathcal {M}^{S}\cup \{i^{b}\})\) solved in iterations k and k+1 of SB-TPD, where \(i^{b}\) is the bottleneck machine in iteration k. For any operation \(o_{i^{b}j}\), if \(C'_{i^{b}j}=C_{i^{b}j}-\delta\) or \(C'_{i^{b}j}=C_{i^{b}j}+\delta\) for some δ>0, then \(z_{(\mathbf {TTJm})}(\mathcal {M}^{S}\cup \{i^{b}\}) - z_{(\mathbf {TTJm})}(\mathcal {M}^{S})\geq |\bar {\mathbf {y}}''_{m+1}| \delta\geq 0\), where \(\bar {\mathbf {y}}''\) is as defined in (36)–(37).

Proof

We refer to the optimal timing problem \((\mathbf {TTJm})(\mathcal {M}^{S})\) with the appropriate additional constraint \(C_{i^{b}j}\leq d_{i^{b}j}-\delta\) or \(C_{i^{b}j}\geq d_{i^{b}j}+\delta\) as \((\mathbf {TTJm})'(\mathcal {M}^{S})\). The optimal objective value of \((\mathbf {TTJm})'(\mathcal {M}^{S})\) is denoted by \(z_{(\mathbf {TTJm})^{\prime}}(\mathcal {M}^{S})\).

The optimal solution of \((\mathbf {TTJm})(\mathcal {M}^{S}\cup \{i^{b}\})\) satisfies all constraints present in \((\mathbf {TTJm})'(\mathcal {M}^{S})\), in addition to the machine capacity constraints for machine i b. Therefore, \(z_{(\mathbf {TTJm})}(\mathcal {M}^{S}\cup \{i^{b}\})\) can be no less than \(z_{(\mathbf {TTJm})^{\prime}}(\mathcal {M}^{S})\), and we can prove the desired result by showing that \(z_{(\mathbf {TTJm})^{\prime}}(\mathcal {M}^{S})\geq z_{(\mathbf {TTJm})}(\mathcal {M}^{S}) -\bar {\mathbf {y}}''_{m+1}\delta\) or \(z_{(\mathbf {TTJm})^{\prime}}(\mathcal {M}^{S})\geq z_{(\mathbf {TTJm})}(\mathcal {M}^{S})+\bar {\mathbf {y}}''_{m+1}\delta\) as appropriate. Clearly, we can solve \((\mathbf {TTJm})'(\mathcal {M}^{S})\) by starting from the optimal solution of \((\mathbf {TTJm})(\mathcal {M}^{S})\) and applying the dual simplex method as discussed above. From (38)–(39), we already know that the increase in the objective function in the first iteration of the dual simplex method is at least \(-\bar {\mathbf {y}}''_{m+1}\delta\geq 0\) or \(+\bar {\mathbf {y}}''_{m+1}\delta\geq 0\) if \(C_{i^{b}j}\leq d_{i^{b}j}-\delta\) or \(C_{i^{b}j}\geq d_{i^{b}j}+\delta\) is added to \((\mathbf {TTJm})(\mathcal {M}^{S})\), respectively. The proof is completed by noting that the dual simplex method produces non-decreasing objective values over the iterations. So, we have \(z_{(\mathbf {TTJm})}(\mathcal {M}^{S}\cup \{i^{b}\})\geq z_{(\mathbf {TTJm})^{\prime}}(\mathcal {M}^{S})\geq z_{(\mathbf {TTJm})}(\mathcal {M}^{S})- \bar {\mathbf {y}}''_{m+1}\delta\) or \(z_{(\mathbf {TTJm})}(\mathcal {M}^{S}\cup \{i^{b}\})\geq z_{(\mathbf {TTJm})^{\prime}}(\mathcal {M}^{S})\geq z_{(\mathbf {TTJm})}(\mathcal {M}^{S})+\bar {\mathbf {y}}''_{m+1}\delta\) as desired. □

Based on Proposition 3.1, if the completion time of \(o_{i^{b}j}\) decreases by δ time units in the optimal timing problem after inserting some set of disjunctive arcs on machine \(i^{b}\), then the increase in the optimal objective function is no less than \(-\bar {\mathbf {y}}''_{m+1}\delta\). This allows us to interpret the quantity \(-\bar {\mathbf {y}}''_{m+1}\) as the unit earliness cost of operation \(o_{i^{b}j}\) in the subproblem of machine \(i^{b}\), i.e., we set \(\epsilon_{ij}=-\bar {\mathbf {y}}''_{m+1}=\frac{\bar {\mathbf {c}}_{t}}{\bar {\mathbf {A}}_{jt}} = -\max_{k\neq j|\bar {\mathbf {A}}_{jk}>0}\frac{\bar {\mathbf {c}}_{k}}{-\bar {\mathbf {A}}_{jk}}\) as in (19). A similar argument leads to \(\pi_{ij}=\bar {\mathbf {y}}''_{m+1}=-\frac{\bar {\mathbf {c}}_{t}}{\bar {\mathbf {A}}_{jt}}=-\max_{k|\bar {\mathbf {A}}_{jk}<0}\frac{\bar {\mathbf {c}}_{k}}{\bar {\mathbf {A}}_{jk}}\) for the corresponding unit tardiness cost.
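In an implementation, both coefficients can thus be read off the reduced cost vector and a single row of \(\bar {\mathbf {A}}\). A small sketch of the maxima in (19); the helper name and the convention for an empty ratio set are ours, and the data are illustrative only:

```python
def et_costs(c_bar, A_j, j):
    """Unit earliness/tardiness costs (eps_ij, pi_ij) as in (19), where C_ij
    is the j-th basic variable, c_bar holds the reduced costs, and A_j is the
    row of A_bar associated with C_ij.  When a ratio set is empty, moving
    C_ij in that direction is infeasible; we then return an infinite cost
    (our convention, not prescribed by the derivation above)."""
    eps_ratios = [c_bar[k] / -A_j[k]
                  for k in range(len(c_bar)) if k != j and A_j[k] > 0]
    pi_ratios = [c_bar[k] / A_j[k]
                 for k in range(len(c_bar)) if A_j[k] < 0]
    eps = -max(eps_ratios) if eps_ratios else float("inf")
    pi = -max(pi_ratios) if pi_ratios else float("inf")
    return eps, pi

# Hypothetical reduced costs and row of A_bar (j = 0, so A_j[0] = 1 for C_ij):
eps_ij, pi_ij = et_costs([0.0, 2.0, 1.0, 4.0], [1.0, 2.0, -1.0, 0.5], 0)
```

Both maxima are taken over the nonbasic columns only in practice, since the reduced costs of the basic variables are zero and \(\bar {\mathbf {A}}_{jj}=1\) is excluded explicitly.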

Next, we investigate whether the approach presented here can be extended to multiple implicit dual simplex iterations that would allow us to construct a better approximation of the actual cost function for \(C_{ij}\) depicted in Fig. 2. To this end, using the formula

$$\bar {\mathbf {c}}''=\bar {\mathbf {c}}'-\frac{\bar {\mathbf {c}}'_t}{\bar {\mathbf {A}}'_{(m+1)t}}\bar {\mathbf {A}}'_{(m+1).}$$
(40)

and inserting the expressions for \(\bar {\mathbf {c}}'\) and \(\bar {\mathbf {A}}'_{(m+1).}\) from (31) and (33) respectively, we first calculate the set of reduced costs \(\bar {\mathbf {c}}''\) resulting from the first pivoting operation:

$$\begin{array}{@{}l}\bar {\mathbf {c}}''_k=\left\{ \begin{array}{l@{\quad }l}\bar {\mathbf {c}}_k-\frac{\bar {\mathbf {c}}_t}{-\bar {\mathbf {A}}_{jt}}\cdot 0=0& \mathrm{for} \ C_{ij}\\ \noalign {\vspace {4pt}}\bar {\mathbf {c}}_k-\frac{\bar {\mathbf {c}}_t}{-\bar {\mathbf {A}}_{jt}}\cdot 1=\frac{\bar {\mathbf {c}}_t}{\bar {\mathbf {A}}_{jt}}&\mathrm{for} \ s_{ij}\\ \noalign {\vspace {4pt}}\bar {\mathbf {c}}_k-\frac{\bar {\mathbf {c}}_t}{\bar {\mathbf {A}}_{jt}}\bar {\mathbf {A}}_{jk}&\text{o/w}\end{array}\right\}\quad\mathrm{or}\\ \noalign {\vspace {8pt}}\bar {\mathbf {c}}''_k=\left\{ \begin{array}{l@{\quad }l}\bar {\mathbf {c}}_k-\frac{\bar {\mathbf {c}}_t}{\bar {\mathbf {A}}_{jt}}\cdot 0=0& \mathrm{for} \ C_{ij}\\ \noalign {\vspace {4pt}}\bar {\mathbf {c}}_k-\frac{\bar {\mathbf {c}}_t}{\bar {\mathbf {A}}_{jt}}\cdot 1=-\frac{\bar {\mathbf {c}}_t}{\bar {\mathbf {A}}_{jt}}& \mathrm{for} \ s_{ij}\\ \noalign {\vspace {4pt}}\bar {\mathbf {c}}_k-\frac{\bar {\mathbf {c}}_t}{\bar {\mathbf {A}}_{jt}}\bar {\mathbf {A}}_{jk}&\text{o/w}\end{array}\right\},\end{array} $$
(41)

where all required quantities are readily available in the optimal basic solution of \((\mathbf {TTJm})(\mathcal {M}^{S})\) as before. We also observe that the updated reduced cost of \(s_{ij}\) is nonnegative in both cases, as required in the dual simplex method. Then, \(\bar {\mathbf {b}}''\) is computed based on:

$$\bar {\mathbf {b}}''_i=\bar {\mathbf {b}}'_i-\frac{\bar {\mathbf {b}}'_{m+1}}{\bar {\mathbf {A}}'_{(m+1)t}}\bar {\mathbf {A}}'_{it}\ \ (i\neq m+1),\qquad \bar {\mathbf {b}}''_{m+1}=\frac{\bar {\mathbf {b}}'_{m+1}}{\bar {\mathbf {A}}'_{(m+1)t}}$$
(42)

Substituting \(\bar {\mathbf {b}}'\) and \(\bar {\mathbf {A}}'_{(m+1)t}\) from (29) and (33) respectively, we obtain the new values of the basic variables:

$$\begin{array}{@{}l}\bar {\mathbf {b}}''_i=\left\{\begin{array}{l@{\quad }l}\bar {\mathbf {b}}_i + \frac{\bar {\mathbf {A}}'_{it}}{-\bar {\mathbf {A}}_{jt}}\delta &i\neq m+1\\\frac{\delta}{ \bar {\mathbf {A}}_{jt}}&i=m+1\end{array}\right\}\quad\mathrm{or}\\ \noalign {\vspace {5pt}}\bar {\mathbf {b}}''_i=\left\{\begin{array}{l@{\quad }l}\bar {\mathbf {b}}_i + \frac{\bar {\mathbf {A}}'_{it}}{\bar {\mathbf {A}}_{jt}}\delta &i\neq m+1\\\frac{-\delta}{ \bar {\mathbf {A}}_{jt}}&i=m+1\end{array}\right\},\end{array} $$
(43)

where \(\bar {\mathbf {b}}''_{m+1}>0\) in both cases for the new basic variable t. Until now, we have only used information that is readily available in the current basic optimal solution of \((\mathbf {TTJm})(\mathcal {M}^{S})\) and made no assumptions other than linearity regarding the specific objective or constraints of the job shop scheduling problem (Jm). However, computing (43) cannot be accomplished without compromising the generality of the analysis. Observe that the entering variable t is determined by the ratio test in (34) and may be different for each operation \(o_{ij}\). Furthermore, in (43) we have \(\bar {\mathbf {A}}'_{.t}=(\mathbf {A}'_{\mathcal {B}'})^{-1}\mathbf {A}'_{.t}\), where the entries of \(\mathbf {A}'_{.t}\) are given by the specific formulation under consideration. Consequently, the next basic sequence \(\mathcal {B}''\), the associated values of the basic variables, and the resulting leaving variable all depend on \(o_{ij}\) and the specific job shop problem of interest. We conclude that we no longer have a single basis that allows us to compute all required quantities as efficiently as in (19). Estimating the actual cost function in Fig. 2 more accurately boils down to solving an LP with a parametric right hand side per operation \(o_{ij}\), \(i \in \mathcal {M}\setminus \mathcal {M}^{S}\), \(j\in J_{i}\). We refrain from this in order to retain the generality of our proposed approach and to avoid the extra computational burden. Furthermore, our numerical results in Sect. 4 clearly demonstrate that the information retrieved from the subproblems is sufficiently accurate in a great majority of cases.

Appendix B: Computing \(\epsilon_{ij}\) and \(\pi_{ij}\) efficiently

A fundamental implementation issue is computing the cost coefficients \(\epsilon_{ij}\) and \(\pi_{ij}\) in the single-machine subproblems efficiently. The analysis in Appendix A reveals that the only information required to this end is the reduced costs of the nonbasic variables and the rows of \(\bar {\mathbf {A}}\) associated with the completion time variables in a basic optimal solution of the current optimal timing problem. This information may be directly available from the linear programming solver employed to solve the optimal timing problem. Otherwise, the computational burden of computing \(\mathbf {A}_{\mathcal {B}}^{-1}\) renders this step time consuming. In this section, we show how \(\mathbf {A}_{\mathcal {B}}^{-1}\) and \(\bar {\mathbf {A}}\) can be computed efficiently for \((\widehat {\mathbf {TTJm}})\) by exploiting the structure and sparsity of A. In this analysis, we only assume that the basic sequence \(\mathcal {B}\) and the corresponding nonbasic sequence \(\mathcal {N}\) associated with the optimal basis of the current timing problem are available. Denoting the number of operations by \(t=\sum_{j=1}^{n} m_{j}\) and the number of constraints by q, we note that the optimal basis is a q×q matrix with the following block form:

$$\mathbf {A}_{\mathcal {B}}= \left(\begin{array}{c@{\quad }c}\mathbf {T}&\mathbf {U}\\ \mathbf {V}&\mathbf {W}\\\end{array}\right),$$
(44)

where T is a t×t square matrix composed of the coefficients of the completion time variables (all completion time variables are always basic) in the ready time and operation precedence constraints (2)–(3), U is a t×(q−t) matrix of the coefficients of the remaining basic variables in the same constraints, V is a (q−t)×t matrix of the coefficients of the completion time variables in the last q−t constraints (7), (8), (12), and W is a (q−t)×(q−t) matrix of the coefficients of the remaining basic variables in the same constraints. Then, \(\mathbf {A}_{\mathcal {B}}^{-1}\) is obtained as

$$\mathbf {A}_{\mathcal {B}}^{-1}=\left(\begin{array}{c@{\quad }c}\mathbf {T}^{-1} + \mathbf {T}^{-1}\mathbf {U}\mathbf {Z}^{-1}\mathbf {V}\mathbf {T}^{-1}&-\mathbf {T}^{-1}\mathbf {U}\mathbf {Z}^{-1}\\ \noalign {\vspace {3pt}}-\mathbf {Z}^{-1}\mathbf {V}\mathbf {T}^{-1}&\mathbf {Z}^{-1}\\\end{array}\right), $$
(45)

where Z=WVT −1 U is the Schur complement of T. It is a simple exercise to show that T is invertible, and Z must be invertible because \(\mathbf {A}_{\mathcal {B}}\) is invertible. An important property of \((\widehat {\mathbf {TTJm}})\) is that all variables in \((\widehat {\mathbf {TTJm}})\) except for the completion time variables and the makespan variable are either slack or surplus variables with a single nonzero coefficient in the corresponding column of A.

For computing the values of \(\epsilon_{ij}\) and \(\pi_{ij}\) for an operation \(o_{ij}\) as defined in (19), we only need the entries of the row \(\bar {\mathbf {A}}_{j.}=(\mathbf {A}_{\mathcal {B}}^{-1})_{j.}\mathbf {A}\) corresponding to the nonbasic variables, where \(C_{ij}\) is the jth basic variable as in Appendix A. We denote these entries by \(\bar {\mathbf {A}}_{j\mathcal {N}}\). Thus, computing the cost coefficients in all single-machine subproblems requires no more than computing the first t rows of \(\bar {\mathbf {A}}\), given by the first row of (45). We detail the steps of this task below, where representing a matrix M in sparse form refers to storing the nonzero entries of M column by column in a single vector while the corresponding row indices are saved in a separate vector. A third vector holds a pointer to the last entry of each column in the other two vectors.

\(\mathbf {T}^{-1}\) is computed only once during the initialization of the algorithm and saved in sparse form. Then, we perform the following steps each time we need to set up the single-machine subproblems after optimizing \((\widehat {\mathbf {TTJm}})\):

  1. Store U in sparse form by noting that all nonzero coefficients in U correspond to the waiting time variables.

  2. Compute \(\mathbf {V}\mathbf {T}^{-1}\) row by row in dense form by observing the following:

     1. (a) In a row of V corresponding to a due date constraint (7) or a \(C_{\max}\) constraint (8) for job j, there is a single entry +1 in the column corresponding to \(C_{m_{j}j}\).

     2. (b) A row of V corresponding to a machine capacity constraint (12) includes exactly two nonzero coefficients, +1 and −1, corresponding to two consecutive operations performed on the same machine.

  3. Compute \(-\mathbf {V}\mathbf {T}^{-1}\mathbf {U}\) in dense form by re-arranging the columns of \(\mathbf {V}\mathbf {T}^{-1}\) and observing the following:

     1. (a) There is exactly one nonzero entry, +1 or −1, in each column of U corresponding to a waiting time variable.

     2. (b) All other columns of U are identically zero.

  4. Compute \(\mathbf {Z}=\mathbf {W}-\mathbf {V}\mathbf {T}^{-1}\mathbf {U}\) in dense form. W is retrieved column by column in sparse form, and the nonzero entries are added to the appropriate entries of \(-\mathbf {V}\mathbf {T}^{-1}\mathbf {U}\) calculated in the previous step. We can completely disregard the waiting time variables in this step because the associated coefficients in W are all zero.

  5. Compute \(\mathbf {Z}^{-1}\).

  6. Compute \(\mathbf {T}^{-1}\mathbf {U}\mathbf {Z}^{-1}\) and \(-\mathbf {T}^{-1}\mathbf {U}\mathbf {Z}^{-1}\) in dense form:

     1. (a) The nonzero entries in a given row of \(\mathbf {T}^{-1}\mathbf {U}\) may be determined by using \(\mathbf {T}^{-1}\) and U already stored in sparse form. Since there is at most a single nonzero entry in each column of U, we can traverse the columns of \(\mathbf {T}^{-1}\) in the order specified by the associated row indices of the nonzero entries in U and easily calculate the nonzero entries of \(\mathbf {T}^{-1}\mathbf {U}\) in the specified row.

     2. (b) \(\mathbf {T}^{-1}\mathbf {U}\mathbf {Z}^{-1}\) and \(-\mathbf {T}^{-1}\mathbf {U}\mathbf {Z}^{-1}\) are then computed row by row in dense form by taking linear combinations of the rows of \(\mathbf {Z}^{-1}\) as specified by the nonzero entries in the rows of \(\mathbf {T}^{-1}\mathbf {U}\) calculated above.

  7. Compute \(\mathbf {T}^{-1}+\mathbf {T}^{-1}\mathbf {U}\mathbf {Z}^{-1}\mathbf {V}\mathbf {T}^{-1}\) in dense form by multiplying \(\mathbf {T}^{-1}\mathbf {U}\mathbf {Z}^{-1}\) and \(\mathbf {V}\mathbf {T}^{-1}\), and then adding \(\mathbf {T}^{-1}\), available in sparse form, to the result.

  8. For the objective function coefficients of the operations in the single-machine subproblems, we need to compute the first t rows of \(\bar {\mathbf {A}}_{.\mathcal {N}} = \mathbf {A}_{\mathcal {B}}^{-1}\mathbf {A}_{.\mathcal {N}}\). This is performed column by column for each \(k\in \mathcal {N}\). We have \(\bar {\mathbf {A}}_{.k}=\mathbf {A}_{\mathcal {B}}^{-1}\mathbf {A}_{.k}\), where \(\mathbf {A}_{.k}\) includes a single nonzero entry, +1 or −1, because all nonbasic variables in \((\widehat {\mathbf {TTJm}})\) are either slack or surplus variables. Thus, we simply need to retrieve the first t entries in the proper column of \(\mathbf {A}_{\mathcal {B}}^{-1}\) computed previously and multiply them by −1 if necessary.
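The three-vector sparse storage used throughout these steps can be sketched as follows; a minimal illustration with a tiny matrix (the helper function is ours):

```python
# Sparse form as described above: nonzero entries of M stored column by
# column in `vals`, their row indices in `rows`, and a third vector pointing
# to the last entry of each column.  The 2x3 matrix M is illustrative:
#     M = [[1, 0, 2],
#          [0, 3, 0]]
vals = [1.0, 3.0, 2.0]
rows = [0, 1, 0]
last = [0, 1, 2]            # index in vals/rows of the last entry of each column

def column(k, n_rows=2):
    """Recover column k of M as a dense list."""
    start = last[k - 1] + 1 if k > 0 else 0
    dense = [0.0] * n_rows
    for p in range(start, last[k] + 1):
        dense[rows[p]] = vals[p]
    return dense
```

This is essentially the compressed sparse column layout, with the pointer vector indexing the last rather than the one-past-last entry of each column.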

Appendix C: Solution quality vs. time

C.1 Job shop total weighted E/T problem with intermediate inventory holding costs

To investigate the behavior of SB-TPD over the course of the algorithm, we present in Fig. 3 a detailed analysis of solution quality as a function of both solution time and the number of feasible schedules constructed by SB-TPD.
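The empirical cumulative gap distributions shown in Fig. 3 can be computed from the raw optimality gaps in a few lines; a minimal sketch with hypothetical gap values, capping gaps above 100% as in the figures:

```python
import numpy as np

def ecdf(gaps_percent):
    """Empirical CDF of optimality gaps (%); gaps above 100% are
    displayed as 100%, as in Fig. 3. Returns sorted gap values and
    the cumulative fraction of instances at or below each value."""
    g = np.minimum(np.sort(np.asarray(gaps_percent, dtype=float)), 100.0)
    frac = np.arange(1, len(g) + 1) / len(g)
    return g, frac

# hypothetical snapshot of gaps (%) after a fixed solution time
gaps = [0.0, 2.5, 4.0, 8.0, 12.0, 150.0]
x, y = ecdf(gaps)
# fraction of instances within a 10% gap of the best solution
share_within_10 = y[np.searchsorted(x, 10.0, side="right") - 1]
```

Plotting `y` against `x` for each time snapshot yields curves of the kind shown in Figs. 3(a), 3(c), and 3(e).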

Fig. 3 The progress of the solution gaps with respect to the best available solution for the job shop total weighted E/T problem with intermediate inventory holding costs

To this end, we take a snapshot of the optimality gaps after 60, 120, 180, 300, 450, and 600 seconds of solution time and depict the empirical cumulative distributions of these gaps for f=1.0,1.3, f=1.5, and f=1.7,2.0 in Figs. 3(a), 3(c), and 3(e), respectively. In these figures, gaps larger than 100% appear as 100%. After 120 seconds of solution time, 4.5% (4 out of 88), 27.3% (12 out of 44), and 73.9% (65 out of 88) of the instances are within 10% of the best solution for f=1.0,1.3, f=1.5, and f=1.7,2.0, respectively. The corresponding figures after 300 seconds of solution time are 14.8% (13 out of 88), 54.5% (24 out of 44), and 89.8% (79 out of 88). For f=1.0,1.3, the gaps are no more than 5% and 10% for 13.6% (12 out of 88) and 27.3% (24 out of 88) of the instances, respectively, after 600 seconds of solution time. For f=1.5, 40.9% (18 out of 44) and 65.9% (29 out of 44) of the instances are within 3% and 10% of the best solution after 600 seconds of solution time, respectively. For f=1.7,2.0, the best solution is obtained for 44.3% (39 out of 88) of the instances within 600 seconds of solution time; the gap is within 5% for 85.2% (75 out of 88) of the instances, and this number increases to 95.5% (84 out of 88) for a gap of at most 10%.

Similar figures are produced for the number of feasible schedules constructed in SB-TPD and appear on the right-hand side of Fig. 3. Note that each leaf node in the search tree of SB-TPD corresponds to a feasible schedule for \((\widehat {\mathbf {Jm}})\). Intuitively, if the shifting bottleneck framework performs well, then good solutions should be identified early in the search tree. These figures indicate that high-quality solutions are obtained with as few as 100 feasible schedules constructed over the entire course of the algorithm. Furthermore, they attest to the enhanced performance of our heuristic as f grows: with increasing f, we encounter excellent solutions very early in the algorithm. For instance, for f=1.5 the best available solutions are identified in more than 20% of the instances with at most 20 feasible schedules constructed; this figure rises above 25% for f=1.7,2.0.

C.2 Job shop total weighted completion time problem with intermediate inventory holding costs

For this case, we see in Fig. 4(b) that the very first schedule constructed achieves a gap of no more than 10% with respect to the best available solution in more than 1/3 of the instances. With at most 10 feasible schedules constructed, the gap is reduced to less than 5% in more than 50% of the instances.

Fig. 4 The progress of the solution gaps with respect to the best available solution for the job shop total weighted completion time problem with intermediate inventory holding costs

C.3 Job shop makespan problem with intermediate inventory holding costs

The effectiveness of SB-TPD for this problem is readily apparent from Fig. 5(b). The very first schedule constructed is no more than 15% away from the best known solution for 30% (13 out of 44) of the instances. With at most 10 feasible schedules constructed, this gap falls below 10% in 50% of the instances.

Fig. 5 The progress of the solution gaps with respect to the best available solution for the job shop makespan problem with intermediate inventory holding costs


Bülbül, K., Kaminsky, P. A linear programming-based method for job shop scheduling. J Sched 16, 161–183 (2013). https://doi.org/10.1007/s10951-012-0270-4
