Scheduling parallel-machine batch operations to maximize on-time delivery performance

Journal of Scheduling

Abstract

In this paper we study the problem of minimizing total weighted tardiness, a proxy for maximizing on-time delivery performance, on parallel nonidentical batch processing machines. We first formulate the (primal) problem as a nonlinear integer programming model. We then show that the primal problem can be solved exactly by solving a corresponding dual problem with a nonlinear relaxation. Since both the primal and the dual problems are NP-hard, we use genetic algorithms, based on random keys and multiple choice encodings, to heuristically solve them. We find that the genetic algorithms consistently outperform a standard mathematical programming package in terms of solution quality and computation time. For the smaller problem instances, we also compare against a breadth-first tree search algorithm, which provides evidence of the quality of the solutions.


Notes

  1. In (15), \(\alpha (x) >0\), hence x is infeasible to (OP). So the TWT appearing in (15) could be smaller than \({\mathrm {TWT}}^*\).

  2. Since \(x^* \notin \left\{ \,x \in {\mathscr {X}} \mid \alpha (x) >0 \,\right\} \), we cannot simply conclude that \(\min _{x\in {\mathscr {X}},\,\alpha (x) > 0} \left\{ f(x)+\lambda \alpha (x)\right\} = f(x^*)\).

References

  • Akturk, M. S., & Ozdemir, D. (2001). A new dominance rule to minimize total weighted tardiness with unequal release dates. European Journal of Operational Research, 135(2), 394–412.


  • Bayen, A., Tomlin, C., Ye, Y., & Zhang, J. (2003). MILP formulation and polynomial time algorithm for an aircraft scheduling problem. In Proceedings of the 42nd IEEE conference on decision and control (Vol. 5, pp. 5003–5010).

  • Bazaraa, M. S., Sherali, H. D., & Shetty, C. M. (2006). Nonlinear programming: Theory and algorithms (3rd ed.). New York: Wiley.


  • Bean, J. C. (1994). Genetic algorithms and random keys for sequencing and optimization. ORSA Journal on Computing, 6(2), 154–160.


  • Brucker, P., Gladky, A., Hoogeveen, H., Kovalyov, M. Y., Potts, C., Tautenhahn, T., et al. (1998). Scheduling a batching machine. Journal of Scheduling, 1(1), 31–54.


  • Cakici, E., Mason, S. J., Fowler, J. W., & Geismar, H. N. (2013). Batch scheduling on parallel machines with dynamic job arrivals and incompatible job families. International Journal of Production Research, 51(8), 2462–2477.


  • Chandru, V., Lee, C. Y., & Uzsoy, R. (1993a). Minimizing total completion time on a batch processing machine with job families. Operations Research Letters, 13, 61–65.


  • Chandru, V., Lee, C. Y., & Uzsoy, R. (1993b). Minimizing total completion time on batch processing machines. International Journal of Production Research, 31, 2097–2121.


  • Dobson, G., & Nambimadom, R. S. (2001). The batch loading and scheduling problem. Operations Research, 49(1), 52–65.


  • Dorigo, M. (1992). Optimization, learning and natural algorithms. PhD thesis, Politecnico di Milano, Italy.

  • Dupont, L., & Dhaenens-Flipo, C. (2002). Minimizing the makespan on a batch machine with non-identical job sizes: An exact procedure. Computers & Operations Research, 29(7), 807–819.


  • Fairley, A. (1991). Comparison of methods of choosing the crossover point in the genetic crossover operation. Technical report, Department of Computer Science, University of Liverpool, Liverpool, UK.

  • Fowler, J. W., Hogg, G. L., & Phillips, D. T. (1992). Control of multiproduct bulk service diffusion/oxidation processes. IIE Transactions, 24(4), 84–96.


  • Fowler, J. W., Hogg, G. L., & Phillips, D. T. (2000). Control of multiproduct bulk server diffusion/oxidation processes. Part 2: Multiple servers. IIE Transactions, 32(2), 167–176.


  • Glassey, C. R., & Weng, W. W. (1991). Dynamic batching heuristic for simultaneous processing. IEEE Transactions on Semiconductor Manufacturing, 4(2), 77–82.


  • Glover, F., Kelly, J. P., & Laguna, M. (1995). Genetic algorithms and tabu search: Hybrids for optimization. Computers & Operations Research, 22(1), 111–134.


  • Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Reading, MA: Addison-Wesley.


  • Gonçalves, J. F., Mendes, J. J. M., & Resende, M. G. C. (2005). A hybrid genetic algorithm for the job shop scheduling problem. European Journal of Operational Research, 167, 77–95.


  • Gonçalves, J. F., Resende, M. G. C., & Mendes, J. J. M. (2011). A biased random-key genetic algorithm with forward–backward improvement for the resource constrained project scheduling problem. Journal of Heuristics, 17(5), 467–486.


  • Graham, R. L., Lawler, E. L., Lenstra, J. K., & Rinnooy Kan, A. H. G. (1979). Optimization and approximation in deterministic sequencing and scheduling: A survey. Annals of Discrete Mathematics, 5, 287–326.


  • Hadj-Alouane, A. B., & Bean, J. C. (1997). A genetic algorithm for the multiple-choice integer program. Operations Research, 45(1), 92–101.


  • Hochbaum, D. S., & Landy, D. (1997). Scheduling semiconductor burn-in operations to minimize total flowtime. Operations Research, 45(6), 874–885.


  • Holland, J. H. (1975). Adaptation in natural and artificial systems. Ann Arbor, MI: University of Michigan Press.


  • Jula, P., & Leachman, R. C. (2010). Coordinated multistage scheduling of parallel batch-processing machines under multiresource constraints. Operations Research, 58(4, Part 1), 933–947.


  • Kirkpatrick, S., Gelatt, C. D., Jr., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680.


  • Koh, S. G., Koo, P. H., Ha, J. W., & Lee, W. S. (2004). Scheduling parallel batch processing machines with arbitrary job sizes and incompatible job families. International Journal of Production Research, 42, 4091–4107.


  • Kurz, M. E., & Askin, R. G. (2004). Scheduling flexible flow lines with sequence-dependent setup times. European Journal of Operational Research, 159(1), 66–82.


  • Kurz, M. E., & Mason, S. J. (2008). Minimizing total weighted tardiness on a batch-processing machine with incompatible job families and job ready times. International Journal of Production Research, 46(1), 131–151.


  • Lee, C. Y., & Uzsoy, R. (1999). Minimizing makespan on a single batch processing machine with dynamic job arrivals. International Journal of Production Research, 37, 219–236.


  • Lee, C. Y., Uzsoy, R., & Martin-Vega, L. A. (1992). Efficient algorithms for scheduling semiconductor burn-in operations. Operations Research, 40, 764–775.


  • Li, C. L., & Lee, C. Y. (1997). Scheduling with agreeable release times and due dates on a batch processing machine. European Journal of Operational Research, 96(3), 564–569.


  • Malve, S., & Uzsoy, R. (2007). A genetic algorithm for minimizing maximum lateness on parallel identical batch processing machines with dynamic job arrivals and incompatible job families. Computers & Operations Research, 34(10), 3016–3028.


  • Mehta, S. V., & Uzsoy, R. (1998). Minimizing total tardiness on a batch processing machine with incompatible job families. IIE Transactions, 30, 165–178.


  • Mendes, J. J. M., Gonçalves, J. F., & Resende, M. G. C. (2009). A random key based genetic algorithm for the resource constrained project scheduling problem. Computers & Operations Research, 36(1), 92–109.


  • Norman, B. A., & Bean, J. C. (1999). A genetic algorithm methodology for complex scheduling problems. Naval Research Logistics, 46, 199–211.


  • Norman, B. A., & Bean, J. C. (2000). Scheduling operations on parallel machine tools. IIE Transactions, 32, 449–459.


  • Potts, C. N., & Kovalyov, M. Y. (2000). Scheduling with batching: A review. European Journal of Operational Research, 120, 228–249.


  • Reeves, C. R. (1997). Genetic algorithms for the operations researcher. INFORMS Journal on Computing, 9, 231–250.


  • Samanlioglu, F., Kurz, M. B., Ferrell, W. G., & Tangudu, S. (2007). A hybrid random-key genetic algorithm for a symmetric travelling salesman problem. International Journal of Operational Research, 2(1), 47–63.


  • Smith, A. E., & Tate, D. M. (1993). Genetic optimization using a penalty function. In S. Forrest (Ed.), Proceedings of the fifth international conference on genetic algorithms (pp. 499–505). San Mateo, CA: Morgan Kaufmann.

  • Snyder, L., & Daskin, M. (2006). A random-key genetic algorithm for the generalized traveling salesman problem. European Journal of Operational Research, 174, 38–53.


  • Spears, W. M., & De Jong, K. A. (1991). On the virtues of parameterized uniform crossover. In Proceedings of the fourth international conference on genetic algorithms, San Diego, CA (pp. 230–236).

  • Sung, C. S., & Choung, Y. I. (2000). Minimizing makespan on a single burn-in oven in semiconductor manufacturing. European Journal of Operational Research, 120, 559–574.


  • Uzsoy, R. (1994). Scheduling a single batch processing machine with nonidentical job sizes. International Journal of Production Research, 32, 1615–1635.


  • Uzsoy, R., & Yang, Y. (1997). Minimizing total weighted completion time on a single batch processing machine. Production and Operations Management, 6(1), 57–73.


  • Uzsoy, R., Lee, C. Y., & Martin-Vega, L. A. (1992). A review of production planning and scheduling models in the semiconductor industry Part I: System characteristics, performance evaluation and production planning. IIE Transactions, 24, 47–60.


  • Wang, C. S., & Uzsoy, R. (2002). A genetic algorithm to minimize maximum lateness on a batch processing machine. Computers & Operations Research, 29, 1621–1640.


  • Weng, W. W., & Leachman, R. C. (1993). An improved methodology for real-time production decisions at batch-process work stations. IEEE Transactions on Semiconductor Manufacturing, 6(3), 219–225.


  • Xu, S., & Bean, J. C. (2007). A genetic algorithm for scheduling parallel non-identical batch processing machines. In Proceedings of the IEEE symposium on computational intelligence in scheduling (pp. 143–150).

  • Yilmaz Eroglu, D., Ozmutlu, H. C., & Ozmutlu, S. (2014). Genetic algorithm with local search for the unrelated parallel machine scheduling problem with sequence-dependent set-up times. International Journal of Production Research, 52(19), 5841–5856.



Acknowledgments

The authors would like to thank two anonymous referees and the Associate Editor for their insightful comments and constructive suggestions which significantly improved the presentation of this paper.

Author information


Correspondence to James C. Bean.

Appendix: Nonlinear penalty function method


In this section we discuss the nonlinear penalty function method. We study the more general discrete optimization problem and show that the (primal) optimization problem can be solved exactly by solving a dual problem with a nonlinear relaxation, which establishes the theoretical foundation for using the Multiple Choice Genetic Algorithm to solve the original scheduling problem (OP) defined in Sect. 3.

We first fix some notation. Let \({\mathbb {N}} = \{1,2,3,\ldots \}\) be the set of natural numbers and \({\mathbb {R}}\) the set of real numbers. Let \({\mathbb {R}}_{+}\) and \({\mathbb {R}}_{++}\) denote the sets of nonnegative and positive real numbers, respectively. Denote by \({\mathbb {R}}^n\) the set of n-dimensional real vectors. If \(u \in {\mathbb {R}}^n\), then \(u \le 0\) and \(u=0\) mean that all components of u are nonpositive and zero, respectively. We use the symbol min to mean either “minimize (an optimization problem)” or “minimum (value),” and similarly for max; the exact meaning will be clear from context. If \((\cdot )\) stands for an optimization problem, then \(v(\cdot )\) denotes its optimal value.

Consider the following discrete optimization problem, which we call the primal problem (P):

$$\begin{aligned} \begin{array}{lll} (P)~~~~ &{} \min \limits _{x}~ &{} f(x) \\ &{} \mathrm{s.t.} &{} g(x) \le 0 \\ &{} &{} h(x) = 0 \\ &{} &{} x \in {\mathscr {X}} \end{array} \end{aligned}$$

where \(x=(x_{1},\dots ,x_{n}) \in {\mathbb {R}}^n\) is a vector of decision variables, each component of which takes discrete values. Both \(g(x)=\left( g_1(x),\dots ,g_q(x)\right) \) and \(h(x)=\left( h_1(x),\dots ,h_l(x)\right) \) are vector functions with q and l components, where \(q, l \in {\mathbb {N}}\). The functions f, \(g_{1}\), ..., \(g_{q}\), and \(h_{1}\), ..., \(h_{l}\) are bounded and real-valued, and may be arbitrarily nonlinear or nonconvex. The feasible set of the problem is \({\mathscr {S}} = \{\,x \in {\mathbb {R}}^n \mid g(x) \le 0, h(x) = 0\,\} \cap {\mathscr {X}} \); it combines the explicit constraints \(g(x) \le 0\) and \(h(x) = 0\) with the constraints represented by the set \({\mathscr {X}}\), a finite and nonempty subset of \({\mathbb {R}}^n\). The set \({\mathscr {X}}\) might represent simple constraints that are easily handled, such as lower and upper bounds on the variables. We assume that the constraints \(g(x) \le 0\) and \(h(x) = 0\) are “complicating” in terms of solving the problem, while \({\mathscr {X}}\) is “easy.” Any vector (point) in \({\mathscr {S}}\) is called feasible, while any point not in \({\mathscr {S}}\) is called infeasible. A point \(x^*\) is called optimal, or a solution, if it solves the problem; \(f(x^*)=v(P)\) is then the optimal value.

The penalty function method drops the complicating constraints \(g(x) \le 0\) and \(h(x) = 0\) and instead adds a weighted penalty for constraint violations to the objective. In general, for a minimization problem, a penalty function should incur a positive penalty for infeasible solutions and no penalty for feasible solutions (Bazaraa et al. 2006).

Definition 1

A function \(\alpha (x) :{\mathbb {R}}^n \mapsto {\mathbb {R}}\) is called a penalty function for problem (P) if it satisfies: (i) \(\alpha (x) > 0\) if \(g(x) > 0\) or \(h(x) \ne 0\); (ii) \(\alpha (x) = 0\) if \(g(x) \le 0\) and \(h(x) = 0\).

Various forms of penalty functions satisfying the above definition exist. A suitable nonlinear penalty function is defined by \( \alpha (x) {:=}\sum _{i=1}^{q}\left[ \max \left( 0,g_i(x)\right) \right] ^2 + \sum _{i=1}^{l}\left[ h_i(x)\right] ^2. \nonumber \)

The transformed objective function \(F(x,\lambda ) :{\mathbb {R}}^n \times {\mathbb {R}}_{+} \mapsto {\mathbb {R}}\) is defined by \( F(x,\lambda ) {:=}f(x) +\lambda \alpha (x), \) where \(\lambda \ge 0\) is called the penalty parameter.
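As a minimal sketch, the penalty function \(\alpha \) and the penalized objective F above can be written directly in Python. The callables `f`, `g`, and `h` are generic placeholders (an objective and lists of constraint values), not the paper's scheduling model:

```python
def alpha(x, g, h):
    """Quadratic penalty: squared violations of g(x) <= 0 and h(x) = 0."""
    return (sum(max(0.0, gi) ** 2 for gi in g(x))
            + sum(hi ** 2 for hi in h(x)))


def F(x, lam, f, g, h):
    """Penalized objective F(x, lambda) = f(x) + lambda * alpha(x)."""
    return f(x) + lam * alpha(x, g, h)
```

By construction, `alpha` vanishes exactly on points satisfying \(g(x) \le 0\) and \(h(x) = 0\), matching conditions (i) and (ii) of Definition 1.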

Rather than solve the problem (P), we consider the following penalty problem:

$$\begin{aligned} \begin{array}{lll} (PP_{\lambda })~~~~ &{} \min \limits _{x} ~ &{} F(x,\lambda ) \\ &{} \mathrm{s.t.} &{} x \in {\mathscr {X}} \end{array} \end{aligned}$$

Hadj-Alouane and Bean (1997) extended penalty function results from continuous nonlinear programs to multiple choice integer programs with a linear objective function and linear constraints: the general linear constraints are relaxed by a nonlinear penalty function, and the corresponding dual problem satisfies both weak and strong duality. We now generalize their results to finite discrete optimization with a nonlinear objective function and nonlinear constraints. This generalization is meaningful, since many practical problems, including batch scheduling problems, can be formulated as finite discrete optimization models.

Proposition 1

(Weak Duality) \(v(PP_{\lambda }) \le v(P)\) for all \(\lambda \ge 0\); that is, the optimal value of the penalty problem provides a lower bound on the optimal value of the primal problem.

Proof

Let \(\bar{x}\) be a feasible solution to the primal problem (P); that is, \(\bar{x} \in {\mathscr {S}}\). So \(\bar{x} \in {\mathscr {X}}\). Since \(\bar{x} \in {\mathscr {S}}\), \(g(\bar{x}) \le 0\) and \(h(\bar{x})=0\). It follows that \(\alpha (\bar{x}) = 0\). The following relations then hold for \(\lambda \ge 0\):

$$\begin{aligned} v(PP_{\lambda }) = \min _{x \in {\mathscr {X}}} \{ f(x) + \lambda \alpha (x) \} \le f(\bar{x}) + \lambda \alpha (\bar{x}) = f(\bar{x}). \end{aligned}$$

Since the above relations hold for all the feasible solutions to (P), they must also hold for \(x^*\), which is the optimal solution to (P). That is, \(v(PP_{\lambda }) \le f(x^*) = v(P)\). \(\square \)
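Weak duality can be checked by brute force on a small toy instance with a finite set \({\mathscr {X}}\); the data below are invented for illustration only:

```python
# Toy finite instance: X is a 4x4 integer grid, one inequality constraint.
X = [(a, b) for a in range(4) for b in range(4)]
f = lambda x: (x[0] - 3) ** 2 + x[1]      # objective
g = lambda x: [x[0] - 1]                  # feasibility: x[0] <= 1

def alpha(x):
    # Quadratic penalty for the single inequality constraint.
    return sum(max(0, gi) ** 2 for gi in g(x))

v_P = min(f(x) for x in X if alpha(x) == 0)          # primal optimum v(P)
bounds = [min(f(x) + lam * alpha(x) for x in X)      # v(PP_lambda)
          for lam in (0.0, 0.5, 1.0, 10.0)]
# Every penalty optimum is a lower bound on v(P), as Proposition 1 states.
assert all(v <= v_P for v in bounds)
```

Note that for small \(\lambda \) the bound is loose (an infeasible point wins the penalized minimization), while for large \(\lambda \) the bound becomes tight, foreshadowing Proposition 2.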

Definition 2

Consider a general optimization problem

$$\begin{aligned} \begin{array}{lll} (GP)~~~~ &{} \min \limits _x ~ &{} f(x) \\ &{} \mathrm{s.t.} &{} x \in {\mathscr {X}} \end{array} \end{aligned}$$

where f is a real-valued function defined on \({\mathbb {R}}^n\) and \({\mathscr {X}}\) is a nonempty subset of \({\mathbb {R}}^n\). For \(\epsilon \in {\mathbb {R}}_{++}\), we call \(\bar{x}\) an \(\epsilon \)-optimal solution to (GP) if \(\bar{x} \in {\mathscr {X}}\) and \(f(\bar{x}) \le \inf _{x\in {\mathscr {X}}} f(x) + \epsilon \).

Lemma 1 establishes the relationship between \(v(PP_{\lambda })\) and v(P) for a given \(\lambda \ge 0\).

Lemma 1

  1. (a)

    For a given \(\lambda \ge 0\), if \(\bar{x}\) is \(\epsilon \)-optimal to \((PP_{\lambda })\), \(g(\bar{x}) \le 0\), and \(h(\bar{x}) = 0\), then \(\bar{x}\) is also \(\epsilon \)-optimal to (P).

  2. (b)

    For a given \(\lambda \ge 0\), if \(x^*\) is optimal to \((PP_{\lambda })\), \(g(x^*) \le 0\), and \(h(x^*) = 0\), then \(x^*\) is also optimal to (P).

Proof

  1. (a)

    Since \(\bar{x}\) is feasible for \((PP_{\lambda })\), \(\bar{x} \in {\mathscr {X}}\). Moreover, \(g(\bar{x}) \le 0\) and \(h(\bar{x}) = 0\). Thus \(\bar{x}\) is also feasible for (P) and \(\alpha (\bar{x}) = 0\). If \(\bar{x}\) is \(\epsilon \)-optimal to \((PP_{\lambda })\), then \(f(\bar{x}) + \lambda \alpha (\bar{x}) \le v(PP_{\lambda }) + \epsilon \) by Definition 2. Therefore, \(f(\bar{x}) \le v(PP_{\lambda }) + \epsilon \). It follows that \(f(\bar{x}) \le v(P) + \epsilon \) since \(v(PP_{\lambda }) \le v(P)\) by Proposition 1.

  2. (b)

    If \(x^*\) is optimal to \((PP_{\lambda })\), then \(v(PP_{\lambda }) = f(x^*)+\lambda \alpha (x^*)\). Since \(g(x^*) \le 0\), \(h(x^*) = 0\), and \(x^* \in {\mathscr {X}}\), \(x^*\) is also feasible for (P). Then \(\alpha (x^*) = 0\) and \(f(x^*) \ge v(P)\). Hence \(v(PP_{\lambda }) = f(x^*)\). Then \(f(x^*) \le v(P)\) since \(v(PP_{\lambda }) \le v(P)\) by Proposition 1. It follows that \(f(x^*) = v(P)\). \(\square \)

The penalty problem provides lower bounds for the primal problem. Under certain conditions, strong duality holds; that is, we can obtain the optimal solution to the primal problem by optimally solving the corresponding penalty problem. Proposition 2 establishes conditions for the existence of \(\lambda \) for which strong duality holds.

Proposition 2

(Strong Duality) Let \({\mathscr {S}}_\lambda \) be the finite set of optimal solutions to \((PP_{\lambda })\). If (P) is feasible, then there exists \(\bar{\lambda } \ge 0\), such that for all \(\lambda \ge \bar{\lambda }\), there exists \(x^* \in {\mathscr {S}}_\lambda \) such that \(\alpha (x^*) = 0\) and \(x^*\) is optimal to (P).

Proof

The proof is similar to that of Theorem 2 in Hadj-Alouane and Bean (1997). Since by assumption f(x), g(x), and h(x) are bounded and real-valued, (P) is also bounded. Since \({{\mathscr {X}}}\) is finite, there exists an \(x^* \in {\mathscr {X}}\) such that \(\alpha (x^*)=0\) and \(x^*\) is optimal to (P). It remains to show that there exists \(\bar{\lambda } \ge 0\) such that \(x^* \in {\mathscr {S}}_\lambda \) for all \(\lambda \ge \bar{\lambda }\).

Define \( {\mathscr {Y}} {:=}\left\{ \, x \in {\mathbb {R}}^n \mid \alpha (x) = 0 \,\right\} \). The complement of \({\mathscr {Y}}\) is \({\mathscr {Y}}^c {:=}\left\{ \,x \in {\mathbb {R}}^n \mid \alpha (x) > 0 \,\right\} \). We have \( {\mathscr {Y}} \cup {\mathscr {Y}}^c = {\mathbb {R}}^n \supset {\mathscr {X}} = ({\mathscr {X}} \cap {\mathscr {Y}} ) \cup ({\mathscr {X}} \cap {\mathscr {Y}}^c )\), and \( x^* \in {\mathscr {X}} \cap {\mathscr {Y}}\).

$$\begin{aligned} v(PP_{\lambda })&= \min _{x \in {{\mathscr {X}}}}\left\{ f(x)+\lambda \alpha (x)\right\} \\&= \min \Bigl \{\min _{x\in \left( {\mathscr {X}} \cap {\mathscr {Y}} \right) } \left\{ f(x)+\lambda \alpha (x)\right\} ,\\&\quad \min _{x\in \left( {\mathscr {X}} \cap {\mathscr {Y}}^c \right) } \left\{ f(x)+\lambda \alpha (x)\right\} \Bigr \}\\&= \min \Bigl \{\min _{x\in \left( {\mathscr {X}} \cap {\mathscr {Y}} \right) } f(x), \min _{x\in \left( {\mathscr {X}} \cap {\mathscr {Y}}^c \right) } \left\{ f(x)+\lambda \alpha (x)\right\} \Bigr \}\\&= \min \Bigl \{f(x^*), \min _{x\in \left( {\mathscr {X}} \cap {\mathscr {Y}}^c \right) } \left\{ f(x)+\lambda \alpha (x)\right\} \Bigr \} \\&= \min \Bigl \{f(x^*), \min _{x\in {{\mathscr {X}}},\,\alpha (x) > 0} \left\{ f(x)+\lambda \alpha (x)\right\} \Bigr \}. \end{aligned}$$

Let \(\lambda _0 {:=}\max _{x\in {{\mathscr {X}}},\,\alpha (x) > 0}[f(x^*)-f(x)]/\alpha (x) \in {\mathbb {R}}\). Since \({\mathscr {X}}\) is finite, f, g, and h are bounded, and \(\alpha (x) > 0\) over the maximization set, \(\lambda _0\) exists and is finite. Because \(\alpha (x) > 0\) implies that x is infeasible for (P), the sign of \(f(x^*)-f(x)\) is indefinite, and so is the sign of \(\lambda _0\). Choose \(\bar{\lambda }=\max \left\{ \lambda _0,0 \right\} \). Then \(\bar{\lambda }\) is also finite, \(\bar{\lambda } \ge \lambda _0\), and \(\bar{\lambda } \ge 0\).

For any \(\lambda \ge \bar{\lambda }\), let \(\bar{x}_\lambda = \hbox {arg} \,\hbox {min}_{x\in {{\mathscr {X}}},\,\alpha (x) > 0}\left\{ f(x)+\lambda \alpha (x) \right\} \). The point \(\bar{x}_\lambda \) exists, since \({\mathscr {X}}\) is finite and nonempty and f, g, and h are bounded. Notice also that \( \alpha (\bar{x}_\lambda ) >0\). Since \(\lambda _0 {:=} \max _{x\in {\mathscr {X}},\,\alpha (x) > 0}\frac{f(x^*)-f(x)}{\alpha (x)}\) and \(\bar{x}_\lambda \in \left\{ x \in {\mathscr {X}} \mid \alpha (x) >0 \right\} \), it follows that \(\lambda _0 \ge \frac{f(x^*)-f(\bar{x}_\lambda )}{\alpha (\bar{x}_\lambda )}\).

Combining the above results, we have \(\lambda \ge \bar{\lambda } \ge \lambda _0\). For any \(x \in {\mathscr {X}}\) such that \(\alpha (x) > 0\), we have \( f(x)+\lambda \alpha (x) \ge f(\bar{x}_\lambda )+\lambda \alpha (\bar{x}_\lambda ) \ge f(\bar{x}_\lambda )+\bar{\lambda }\alpha (\bar{x}_\lambda ) \ge f(\bar{x}_\lambda )+\lambda _0\alpha (\bar{x}_\lambda ) \ge f(\bar{x}_\lambda )+\frac{f(x^*)-f(\bar{x}_\lambda )}{\alpha (\bar{x}_\lambda )}\cdot \alpha (\bar{x}_\lambda ) = f(x^*). \)

That is, \(\min _{x\in {\mathscr {X}},\,\alpha (x) > 0} \left\{ f(x)+\lambda \alpha (x)\right\} \ge f(x^*)\).Footnote 2 Thus,

$$\begin{aligned} v(PP_{\lambda }) = \min \Bigl \{f(x^*), \min _{x\in {\mathscr {X}},\,\alpha (x) > 0} \left\{ f(x)+\lambda \alpha (x)\right\} \Bigr \} = f(x^*). \end{aligned}$$

Since \(x^* \in {\mathscr {X}}\), \(x^*\) is feasible for \((PP_\lambda )\). The above result implies that \(x^*\) is optimal to \((PP_\lambda )\); that is, \(x^* \in {\mathscr {S}}_\lambda \). \(\square \)
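The threshold \(\bar{\lambda }\) constructed in this proof can likewise be computed by brute force on a one-dimensional toy instance (invented data, not the paper's model):

```python
# Finite X = {0,...,5}; feasible region x <= 2; objective pulls toward x = 5.
X = list(range(6))
f = lambda x: (x - 5) ** 2
alpha = lambda x: max(0, x - 2) ** 2      # quadratic penalty for x <= 2

x_star = min((x for x in X if alpha(x) == 0), key=f)   # primal optimum x*
lam0 = max((f(x_star) - f(x)) / alpha(x) for x in X if alpha(x) > 0)
lam_bar = max(lam0, 0.0)                  # lambda_bar = max{lambda_0, 0}

# For every lambda >= lambda_bar, the penalty optimum equals f(x*),
# recovering the strong duality claim of Proposition 2.
for lam in (lam_bar, lam_bar + 1.0, lam_bar + 10.0):
    assert min(f(x) + lam * alpha(x) for x in X) == f(x_star)
```

On this instance the binding ratio comes from the infeasible point just outside the feasible region, illustrating why \(\lambda _0\) is a maximum over all infeasible points rather than a single violation.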

We do not assume convexity in any of the above results; they are valid for both convex and nonconvex optimization problems. The only assumptions are that the objective and constraint functions are bounded and real-valued and that the set \({\mathscr {X}}\) is finite and nonempty.


Cite this article

Xu, S., Bean, J.C. Scheduling parallel-machine batch operations to maximize on-time delivery performance. J Sched 19, 583–600 (2016). https://doi.org/10.1007/s10951-015-0449-6
