BY-NC-ND 3.0 license Open Access Published by De Gruyter July 12, 2016

New Properties for Solving the Single-Machine Scheduling Problem with Early/Tardy Jobs

  • Hemmak Allaoua and Bouderah Brahim

Abstract

This paper presents a mathematically enhanced genetic algorithm (MEGA) that exploits the mathematical properties of single-machine scheduling of multiple jobs with a common due date. The objective of the problem is to minimize the sum of earliness and tardiness penalty costs, which encourages the completion time of each job to be as close as possible to the common due date. The importance of the problem derives from its NP-hardness and from its ideal modeling of the just-in-time concept. This philosophy has become very significant in modern manufacturing and service systems, where policy makers emphasize that a job should be completed as close as possible to its due date in order to avoid inventory costs and loss of customer goodwill. Five mathematical properties are identified and integrated into a genetic algorithm search process to avoid premature convergence, reduce computational effort, and produce high-quality solutions. The computational results demonstrate the significant impact of the introduced properties on the efficiency and effectiveness of MEGA and its competitiveness with state-of-the-art approaches.

1 Introduction

The spectacular growth in computational power in recent years has shown that using the generic search concept of any meta-heuristic alone (such as a genetic algorithm, GA) to solve NP-hard combinatorial optimization problems is insufficient to obtain good results, owing to the generic nature of such methods. This observation has led many researchers to embed domain-specific properties within the generic search concept to clearly improve GA effectiveness, avoid premature convergence, and reduce the required processing time. This work tackles single-machine early/tardy scheduling against a common due date with a variant of GA in which two new properties of the problem are presented, proved, and then integrated.

Thus, a new variant of GA is proposed to solve the single-machine scheduling problem of independent jobs where the objective is to minimize the sum of earliness and tardiness penalties of jobs having a common due date (ETCDD). The theoretical importance of the single-machine ETCDD problem is due to its complexity as an NP-hard problem [4, 7, 20, 22], and its practical importance comes from just-in-time (JIT) production and service processing. In JIT, policy makers emphasize that a job should be completed as close as possible to its common due date to avoid inventory cost and loss of customer goodwill. This concept is now adopted by many economic and industrial companies around the world. The goal of this work is to obtain better results than existing works in the field and to reduce the processing time required to compute them.

In scheduling on a single machine against a common due date, two classes of common due date problems have proven to be NP-hard, namely the restrictive and the non-restrictive common due date problem. As at most one job can be completed exactly at the due date, some of the jobs might be completed earlier than the common due date (d), while other jobs finish late. In this work, the treated problem is the restrictive case, where the common due date is less than the sum of the processing times of all the jobs and each job possesses different earliness/tardiness penalties.

This case has proven to be the most difficult in this area of research.

Reviewing the single-machine ETCDD literature, we found that three different properties have been embedded into various constructive heuristics to obtain good approximation results [4, 7, 8, 10, 14, 20]. To capitalize further on the potential of such properties, two new additional properties are proposed here with supporting proofs. The set of all mathematical properties is integrated into a GA search process to derive what we call a mathematically enhanced GA (MEGA) procedure. The quality of MEGA is demonstrated on a set of 200 benchmark instances available in Refs. [7, 8] and compared to the state-of-the-art algorithms for the problem in Refs. [7, 11, 12, 13, 20].

The remaining parts of the paper are organized as follows: Sections 2 to 4 provide background on the problem, including a statement, literature review, and mathematical properties. In Section 5, the implementation details of our proposed MEGA approach are presented. Computational results and discussions are provided in Section 6. Finally, we conclude with a summary of findings and perspectives on future work in Section 7.

2 Literature Review

In recent years, the single-machine ETCDD problem has received a lot of attention from researchers and practitioners. Its importance arises from its NP-hard complexity and its practical relevance within the JIT context, as in intelligent scheduling [21]. Large earliness and tardiness of jobs are highly discouraged in order to minimize inventory storage and late delivery costs and to meet the JIT principle, which calls for the right amount of goods/services to be produced and delivered at exactly the right time.

A literature search on the single-machine ETCDD problem was conducted, and the following studies were found, arranged from the earliest to the latest: Biskup [5], Gordon et al. [9], Feldmann and Biskup [8], Hino et al. [15], Lin et al. [19], Liao and Cheng [18], Nearchou [20], Ronconi and Kawamura [22], Alvarez-Valdes et al. [1], Lässig et al. [17], as well as Ying [25], Baker and Scudder [4], Vallée and Yiltizogli [24], and Cha et al. [6]. Lässig et al. presented a study on the general case of the common due date scheduling problem; they gave a linear algorithm that optimizes a given job sequence with a run-time complexity of O(n), where n is the number of jobs [17].

Jafarnejad et al. elaborated a study to develop a goal function for optimizing single-machine scheduling problems that combines both earliness and tardiness penalties; the study also aims to minimize the earliness and tardiness penalties in this area [16]. Baker and Scudder developed a branch-and-bound algorithm to find optimal solutions to an ETCDD problem and reported the results of computational experiments. They also tested some heuristic procedures and found that surprisingly good performance can be achieved by a list schedule followed by an adjacent pairwise interchange procedure [3]. Shahriari et al. investigated a JIT single-machine scheduling problem with periodic preventive maintenance; to maintain product quality, there is a limitation on the maximum number of allowable jobs in each period. Their bi-objective mixed-integer model minimizes total earliness/tardiness and makespan simultaneously [23].

Awasthi et al. considered the problem of scheduling jobs on single and parallel machines where all the jobs possess different processing times but a common due date. A penalty is incurred for each job processed earlier or later than the due date. The objective is to find the assignment of jobs to machines, the processing sequence of jobs, and the times at which they are processed so as to minimize the total penalty incurred due to tardiness or earliness of the jobs. An exact polynomial algorithm was presented for optimizing a given job sequence on single and parallel machines with run-time complexities of O(n log n) and O(mn² log n), respectively, where n is the number of jobs and m the number of machines [2].

3 Background of the Problem

The single-machine scheduling problem with early/tardy jobs around a common due date involves a set of jobs, each with its own processing time requirement. All jobs must be processed on a single machine, and a penalty cost is incurred when a job is completed before (earliness) or after (tardiness) the common due date. The objective is to minimize the sum of earliness and tardiness penalty costs, so that the completion time of each job is as close as possible to the common due date. The following notation defines the problem statement:

  • n: integer number of jobs to be scheduled;

  • I: set of n jobs: I={1, 2, …, n};

  • d: common due date of all the n jobs;

  • Ci: completion time of job i;

  • pi: processing time of job i;

  • Ei = max{d − Ci, 0} (earliness of job i);

  • Ti = max{Ci − d, 0} (tardiness of job i);

  • αi: penalty per unit time of earliness for job i;

  • βi: penalty per unit time of tardiness for job i;

  • h: parameter of common due date tightness, used as d = h·T, where T = Σ_{i=1}^{n} pi and h ∈ {0.2, 0.4, 0.6, 0.8};

  • B: the set of jobs to complete at or before the common due date d;

  • A: the set of jobs to complete after d;

|A| + |B| = n − 1 if there is a straddled job, and |A| + |B| = n otherwise.

A straddled job is a job that starts before d and completes after d. If, in a given sequence, there is a job that completes exactly at d, there will be no straddled job.

  1. Each job has to be processed on the single machine without interruption.

  2. Each job is available at time 0.

  3. Each job must be processed just once.

  4. For each job i, the processing time pi, the cost per unit time of earliness αi, and the cost per unit time of tardiness βi are given and assumed to be integers. The objective function to be minimized is the sum of weighted earliness and tardiness penalties: Σ_{i=1}^{n} (αi·Ei + βi·Ti). Any permutation of the n jobs is a feasible solution to the problem, and there are exponentially many such permutations (n! in total). The optimal sequence is the permutation with the minimum objective value among all permutations (a small evaluation sketch is given below).
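To make the objective function concrete, here is a minimal Python sketch (our illustration, not the authors' code) that evaluates a given job sequence under the no-idle rule of property 1 below; the four-job instance at the end is hypothetical.

# Minimal sketch (not the authors' code): evaluate sum(alpha_i*E_i + beta_i*T_i)
# for a job sequence processed without idle time from a chosen start time t0.
def evaluate(sequence, p, alpha, beta, d, t0=0):
    """Total earliness/tardiness penalty of `sequence` (dicts keyed by job id)."""
    total, t = 0, t0
    for job in sequence:
        t += p[job]                    # completion time C_i of this job
        earliness = max(d - t, 0)      # E_i = max(d - C_i, 0)
        tardiness = max(t - d, 0)      # T_i = max(C_i - d, 0)
        total += alpha[job] * earliness + beta[job] * tardiness
    return total

# Hypothetical 4-job instance.
p     = {1: 5, 2: 3, 3: 4, 4: 2}
alpha = {1: 2, 2: 1, 3: 3, 4: 2}
beta  = {1: 4, 2: 2, 3: 1, 4: 3}
print(evaluate([2, 1, 3, 4], p, alpha, beta, d=8, t0=0))   # prints 27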

The optimal solution for the single-machine ETCDD problem satisfies the following three optimality properties:

Property 1: It does not contain any idle time between any consecutive jobs.

Property 2: It is V-shaped around the common due date: the jobs completing before or on the common due date are sorted in decreasing order of the ratios pi/αi, and the jobs starting on or after the common due date are sorted in increasing order of the ratios pi/βi.

Property 3: In an optimal schedule, either the first job starts at time zero or the completion time of one job coincides with the common due date.

The proofs of these properties are established by contradiction in Baker and Scudder [3], Jafarnejad et al. [16], Gordon et al. [9], Feldmann and Biskup [8], Lin et al. [19], Biskup [5], and Hall and Posner.
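As an illustration of property 2, the following sketch (ours, with the same hypothetical four-job data as above) arranges a given partition (B, A) into the V-shape: B sorted by decreasing pi/αi, then A sorted by increasing pi/βi.

# Sketch of the V-shape ordering of property 2 (illustrative, not the authors' code).
def v_shape(B, A, p, alpha, beta):
    """Jobs of B in decreasing order of p_i/alpha_i, then jobs of A in
    increasing order of p_i/beta_i (property 2)."""
    early = sorted(B, key=lambda i: p[i] / alpha[i], reverse=True)
    late = sorted(A, key=lambda i: p[i] / beta[i])
    return early + late

p     = {1: 5, 2: 3, 3: 4, 4: 2}
alpha = {1: 2, 2: 1, 3: 3, 4: 2}
beta  = {1: 4, 2: 2, 3: 1, 4: 3}
print(v_shape(B=[1, 2], A=[3, 4], p=p, alpha=alpha, beta=beta))   # [2, 1, 4, 3]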

4 New Mathematical Properties

The most difficult part of the ETCDD problem is usually finding the beginning time of the optimal sequence, because this multiplies the time complexity of the problem by d − 1 (where d is the common due date for all jobs): the beginning time t0 belongs to [0, d − 1].

The three problem properties presented above describe only the optimal sequence, without specifying anything about its beginning time. This situation occurs often when h=0.6 or h=0.8 (in these cases, the common due date d equals 0.6·Σpi or 0.8·Σpi, so it is quite possible that scheduling starts at t0 > 0). Therefore, in this work, two new properties are developed and then proved for determining the beginning time t0 of the optimal sequence. On one hand, this considerably improves the solution quality; on the other hand, it clearly reduces the computing time of the algorithm. These properties are called properties 4 and 5, respectively.

Property 4: There is an optimal schedule in which the arithmetic mean m = (Σ_{i∈B} αi·Ci + Σ_{i∈A} βi·Ci) / (Σ_{i∈B} αi + Σ_{i∈A} βi) is equal to the common due date d,

where

  • αi, βi, Ci, A, and B are as defined in the problem statement above;

  • αi: penalty per unit time of earliness for job i;

  • βi: penalty per unit time of tardiness for job i;

  • Ci: completion time of job i (here, the completion times Ci are computed with starting time t0 = 0 for any given sequence);

  • B: the set of jobs to complete at or before the common due date d;

  • A: the set of jobs to complete after d;

  • m: the arithmetic mean of the completion times Ci weighted by μi;

  • μi is defined by μi = αi if Ci ≤ d and μi = βi if Ci > d, ∀i ∈ I;

  • d: the common due date for all jobs.

Proof. The arithmetic mean of the values Ci weighted by μi is defined by

m = (Σ_{i∈B} αi·Ci + Σ_{i∈A} βi·Ci) / (Σ_{i∈B} αi + Σ_{i∈A} βi).

Then, the weighted mean deviation of the Ci values about m is expressed by

Δ = (Σ_{i∈B} αi·|m − Ci| + Σ_{i∈A} βi·|Ci − m|) / (Σ_{i∈B} αi + Σ_{i∈A} βi).

Let us consider the function Gp defined as follows:

Gp(x) = (Σ_{i∈B} αi·|p − Ci| + Σ_{i∈A} βi·|Ci − p|) / (Σ_{i∈B} αi + Σ_{i∈A} βi).

From the arithmetic mean properties [the above ratio Gp(x) reaches its minimum when p = m], we can deduce the relationship in Eq. (1):

(1)  min_p {Gp(x)} = Gm(x) = Δ.

Let us now consider the function F defined as follows:

F(x) = Σ_{i=1}^{n} (αi·Ei + βi·Ti), where x is a permutation (sequence) of the jobs in set I.

As Ei = d − Ci for Ci ≤ d and Ti = Ci − d for Ci > d, we have Ei = |d − Ci| and Ti = |Ci − d|; substituting these into F(x) gives

F(x) = Σ_{i∈B} αi·|d − Ci| + Σ_{i∈A} βi·|Ci − d|, and consequently,

(2)  F(x) / (Σ_{i∈B} αi + Σ_{i∈A} βi) = (Σ_{i∈B} αi·|d − Ci| + Σ_{i∈A} βi·|Ci − d|) / (Σ_{i∈B} αi + Σ_{i∈A} βi) = Gd(x).

As Σ_{i∈B} αi + Σ_{i∈A} βi is a constant, F(x) and Gd(x) are proportional, so F(x) reaches its minimum exactly when Gd(x) does; from Eqs. (1) and (2), we conclude that F(x) becomes minimal when d = m.      □

Corollary:

  1. If d ≤ m, then the optimal schedule may start at time t0 = 0.

  2. If d > m, then the optimal schedule may start at time t0 = d − m.

Consequence of using property 4: The main difficulty of the early/tardy scheduling problem is always to determine the starting time of the jobs. Therefore, in this paper, an approach is proposed to address this difficulty based on the above corollary as follows:

  1. Compute the value of the above mean m. If d ≤ m, only sequences that start at time 0 are sought; else, if d > m, only sequences that start at d − m are examined.

  2. Then, property 3 is applied, i.e. one of the jobs may complete exactly at time d.

Therefore, the starting time of the schedule must be chosen as close as possible to d − m. Note that m can be moved but never d, as d is constant and m depends on the Ci. That means that when m < d, the starting time t0 can be delayed to obtain m = d, because the arithmetic mean is a linear function [i.e. mean(xi + c) = mean(xi) + c]. On the other hand, when m > d, m cannot be made to coincide with d because of the time limit (0), as shown in Figure 1. It should be noted that m is computed with the starting time t0 = 0; if m < d, the starting time can be taken as t0 = d − m to make m coincide with d (i.e. m is moved to d).

Figure 1: Possible Positions of d against m.

However, if d ≤ m, the starting time t0 cannot be delayed to make m coincide with d; the best sequences then have a starting time t0 = 0 (Figure 1).
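The corollary of property 4 can be turned into a small routine; the sketch below is our reading of it (not the authors' code): completion times are computed with t0 = 0, the weighted mean m is formed, and the start time is 0 when d ≤ m and d − m otherwise (any rounding of d − m to an integer start is left open here).

# Sketch of the property-4 rule for choosing the starting time (our reading of the corollary).
def start_time_property4(sequence, p, alpha, beta, d):
    """t0 = 0 if d <= m, else t0 = d - m, where m is the mean of the completion
    times (computed with t0 = 0) weighted by alpha_i before d and beta_i after d."""
    t, num, den = 0, 0.0, 0.0
    for job in sequence:
        t += p[job]                            # C_i with t0 = 0
        w = alpha[job] if t <= d else beta[job]
        num += w * t
        den += w
    m = num / den
    return 0 if d <= m else d - m              # may be rounded to an integer start

p     = {1: 5, 2: 3, 3: 4, 4: 2}
alpha = {1: 2, 2: 1, 3: 3, 4: 2}
beta  = {1: 4, 2: 2, 3: 1, 4: 3}
print(start_time_property4([2, 1, 3, 4], p, alpha, beta, d=12))   # about 1.22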

Property 5: If m < d, then there is an optimal schedule in which Σ_{i∈B} αi = Σ_{i∈A} βi,

where

  • d, αi, βi, A, and B are as defined in the problem statement above;

  • m: the arithmetic mean of the completion times Ci weighted by μi;

  • μi is defined by μi = αi if Ci ≤ d and μi = βi if Ci > d, ∀i ∈ I.

Proof. Let F(x) = Σ_{i=1}^{n} (αi·Ei + βi·Ti). As Ei = d − Ci for Ci ≤ d and Ti = Ci − d for Ci > d, replacing the terms Ei and Ti in F(x) gives F(x) = d·(Σ_{i∈B} αi − Σ_{i∈A} βi) + (Σ_{i∈A} βi·Ci − Σ_{i∈B} αi·Ci).

Let F′(x) = Σ_{i∈A} βi·Ci − Σ_{i∈B} αi·Ci. If m < d, we can easily find two sets A and B such that Σ_{i∈B} αi ≥ Σ_{i∈A} βi. This can be done by setting initially A = I and B = ∅, then moving jobs iteratively from A to B until Σ_{i∈B} αi ≥ Σ_{i∈A} βi is reached. As a consequence, F(x) becomes the sum of two non-negative terms:

d·(Σ_{i∈B} αi − Σ_{i∈A} βi) and (Σ_{i∈A} βi·Ci − Σ_{i∈B} αi·Ci) [because F′(x) and Σ_{i∈B} αi − Σ_{i∈A} βi are non-negative].

As F(x) ≥ 0 and Σ_{i∈B} αi ≥ Σ_{i∈A} βi, then F′(x) ≥ 0; hence F(x) becomes minimal when Σ_{i∈B} αi = Σ_{i∈A} βi.

      □

Corollary: To reduce the size of the solution space, one may seek only solutions satisfying Σ_{i∈B} αi ≥ Σ_{i∈A} βi.

Consequence of using property 5: This type of problem becomes more difficult when d is large (i.e. especially for the largest values of d). Therefore, this property is useful for finding the starting time according to the following algorithm (Algorithm 1):

Algorithm 1:

Computing starting time using property 5.

1.For each sequence of jobs, x=(1, 2, …, n)
2.Ci=d; i=1; sum_αi=0; sum_βi=Σ_{k=1}^{n} βk;
3. While (sum_αi<sum_βi and Ci≥0)
4.  Ci=Cipi;
5.  sum_αi=sum_αi+αi;
6.  sum_βi=sum_βiβi;
7.  i=i+1;
8. end while
9. starting_time=Ci.

This algorithm starts by placing all jobs after d; the jobs are then moved before d one by one until Σ_{i∈B} αi ≥ Σ_{i∈A} βi is reached.

This allows choosing the starting time as close as possible to the value of Ci at which Σ_{i∈B} αi = Σ_{i∈A} βi.
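A direct Python transcription of Algorithm 1 might look as follows (a sketch under the assumption that jobs are scanned in the order given by the sequence, as in the pseudocode; the clamping of a negative Ci to 0 is our addition).

# Sketch of Algorithm 1 (property 5): push jobs before d one by one until the
# accumulated early penalties reach the remaining tardy penalties.
def start_time_property5(sequence, p, alpha, beta, d):
    """Starting time close to the point where the accumulated early penalties
    catch up with the remaining tardy penalties (Algorithm 1)."""
    c = d                                    # boundary: initially all jobs are after d
    sum_alpha = 0
    sum_beta = sum(beta[i] for i in sequence)
    for job in sequence:
        if sum_alpha >= sum_beta or c < 0:   # loop condition of Algorithm 1
            break
        c -= p[job]                          # this job moves to the early side
        sum_alpha += alpha[job]
        sum_beta -= beta[job]
    return max(c, 0)                         # clamp at 0 (the pseudocode checks Ci >= 0)

p     = {1: 5, 2: 3, 3: 4, 4: 2}
alpha = {1: 2, 2: 1, 3: 3, 4: 2}
beta  = {1: 4, 2: 2, 3: 1, 4: 3}
print(start_time_property5([2, 1, 3, 4], p, alpha, beta, d=12))   # prints 0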

Note that the two new properties are mostly used in the most difficult cases, where h=0.6 and h=0.8, i.e. when there is a large margin for choosing the starting time.

Example: size n=10 jobs; instance k=1 (source: Biskup and Feldmann benchmarks [7]).

Job i    1    2    3    4    5    6    7    8    9   10
pi      20    6   13   13   12   12   12    3   12   13
αi       4    1    5    2    7    9    5    6    6   10
βi       5   15   13   13    6    8   15    1    8    1
  1. Sum of processing times: T = Σ_{i=1}^{n} pi = 116; d = h·T, with h ∈ {0.2, 0.4, 0.6, 0.8}.

Results

h     d    t0    Opt.   Optimal sequence
0.2   23    0    1936   4 2 7 3 9 6 5 8 1 10   (straddled job 7)
0.4   46    0    1025   4 2 3 7 9 6 5 8 1 10   (straddled job 9)
0.6   69    1     841   4 2 3 7 9 6 5 8 1 10   (no straddled job)
0.8   92   16     818   4 2 1 3 7 6 9 5 8 10   (no straddled job)

Straddled job means a job starts just before d and completes after d.

It is easy to verify that

  1. There is no idle time in the optimal schedule: the completion time of each job equals the completion time of its predecessor plus its own processing time (property 1).

  2. Jobs in B are sorted in decreasing order of the ratio pi/αi and jobs in A are sorted in increasing order of the ratio pi/βi (property 2, the V-shaped property), with B the set of jobs completing at or before d and A the set of jobs completing after d.

  3. In an optimal schedule, either a job starts at time 0 or a job completes exactly at d (property 3).

  4. If d ≤ m, the optimal schedule must start at time 0 (as for h=0.2 and h=0.4 above); otherwise, it may start at t0 = d − m (as for h=0.6 and h=0.8 above). Note that t0 = 0 does not always hold for h=0.2, 0.4, nor t0 > 0 for h=0.6, 0.8; this depends on the instance data (property 4).

  5. In the optimal sequences above for which m < d (h=0.6 and h=0.8), we have Σ_{i∈B} αi ≥ Σ_{i∈A} βi (property 5).
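These checks can also be reproduced programmatically. The sketch below (ours) recomputes the objective values of the four reported optimal sequences from the instance data given above and compares them with the reported optima.

# Sketch: recompute the objectives of the reported optimal sequences for the
# n = 10, k = 1 instance above.
p     = {1: 20, 2: 6, 3: 13, 4: 13, 5: 12, 6: 12, 7: 12, 8: 3, 9: 12, 10: 13}
alpha = {1: 4, 2: 1, 3: 5, 4: 2, 5: 7, 6: 9, 7: 5, 8: 6, 9: 6, 10: 10}
beta  = {1: 5, 2: 15, 3: 13, 4: 13, 5: 6, 6: 8, 7: 15, 8: 1, 9: 8, 10: 1}

def objective(sequence, d, t0):
    total, t = 0, t0
    for job in sequence:
        t += p[job]
        total += alpha[job] * max(d - t, 0) + beta[job] * max(t - d, 0)
    return total

# (h, d, t0, reported optimum, optimal sequence), taken from the results table.
rows = [
    (0.2, 23, 0, 1936, [4, 2, 7, 3, 9, 6, 5, 8, 1, 10]),
    (0.4, 46, 0, 1025, [4, 2, 3, 7, 9, 6, 5, 8, 1, 10]),
    (0.6, 69, 1, 841,  [4, 2, 3, 7, 9, 6, 5, 8, 1, 10]),
    (0.8, 92, 16, 818, [4, 2, 1, 3, 7, 6, 9, 5, 8, 10]),
]
for h, d, t0, opt, seq in rows:
    print(h, objective(seq, d, t0) == opt)    # all four lines print True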

5 MEGA

In this section, the variant implementation of GA used to solve the ETCDD problem is presented. The five properties described above are integrated, as will be clarified later. Implementing a GA first requires choosing

  • A good individual encoding process;

  • A suitable fitness function for evaluating individuals;

  • An adequate selection policy;

  • Crossover and mutation probabilities;

  • An empirically chosen stopping criterion.

The GA process starts by generating a random initial population; genetic operators (selection, crossover, mutation) [23, 24] are then applied with certain probabilities to produce a new generation of offspring. These offspring are expected to inherit the characteristics of their parents and evolve into better individuals. At each generation, the best individuals replace the weakest individuals. The GA process is repeated for a number of generations, and the best individual found is returned as the final solution. The GA process is generally described as follows (Algorithm 2):

Algorithm 2:

GA for ETCDD.

1.Begin
2. Population and parameters’ initializations;
3. Fitness evaluation;
4. Repeat
5.  Pair selection of individuals;
6.  Crossover;
7.  Mutation;
8.  Fitness evaluation;
9. Until (stopping criterion);
10.End.

However, the design of the GA depends on the designer's choices for the GA components used to solve the problem at hand. Such components include the individual encoding, the fitness function for evaluating individuals, the selection of pairs of individuals, the crossover probability, the mutation probability, and the stopping criterion. These components and their parameter values must be carefully chosen, as they largely determine the solution quality of the proposed GA. Each component is therefore explained next.

5.1 Individual’s Encoding

Traditionally, the characteristics (phenotype) of an individual solution are represented using binary strings of 0s and 1s; however, other encodings are also possible. The whole abstract representation of a solution is often called a chromosome (the genotype of the individual). In our implementation, a chromosome is a feasible solution that consists of an array of (n+1) integers; the first entry represents the starting time of the sequence of jobs, and the remaining n entries represent the sequence itself. For example, the vector (4, 2, 6, 1, 4, 5, 3) encodes a sequence of jobs (2, 6, 1, 4, 5, 3) that begins at time 4.

A population of m individuals is represented by a two-dimensional matrix of size m × (n+1).
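Under this encoding, a chromosome can be held in a plain integer list; the following sketch (ours) shows the example vector and a small helper that splits it back into its starting time and job sequence.

# Encoding sketch: index 0 holds the starting time, the rest is the job sequence.
chromosome = [4, 2, 6, 1, 4, 5, 3]    # starts at time 4, then processes jobs 2, 6, 1, 4, 5, 3

def decode(chrom):
    """Split a chromosome into (starting_time, job_sequence)."""
    return chrom[0], chrom[1:]

t0, sequence = decode(chromosome)
print(t0, sequence)                   # 4 [2, 6, 1, 4, 5, 3]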

5.2 Individual’s Fitness Function

The fitness function is defined over the genetic representation of a solution and measures the quality of the solution encoded in each chromosome. The fitness function is always problem dependent. An ideal fitness function should correlate closely with the algorithm's goal and should be quick to compute. However, it should not overly favor a few solutions, to avoid converging quickly toward a local optimum. Many mechanisms can be used to adjust the fitness function to prevent premature convergence or to diversify the population. Our fitness function combines linear scaling and exponentiation:

Fitness(x) = h·(1 − obj(x)/sum_obj)^0.1 + 1,

where h is the common due date rate, x is a chromosome, and obj(x) is the objective value of the solution encoded by chromosome x.

Note that fitness(x) decreases as obj(x) increases: obj(x) is to be minimized, whereas fitness(x) is to be maximized.

The factor h, the exponent 0.1, and the term +1 are used to achieve a compromise between the intensification and diversification processes generally used in meta-heuristics. This rule thus provides differentiated values while still giving weak individuals a chance. It has been tested and used in several related works and has proven its efficiency [12, 14].
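A literal transcription of this fitness rule could look as follows; this is a sketch, and taking sum_obj to be the sum of the objective values over the current population is our reading of the text.

# Fitness sketch: fitness(x) = h * (1 - obj(x)/sum_obj) ** 0.1 + 1
def fitness(obj_x, sum_obj, h):
    """Lower objective values get higher fitness, while the exponent 0.1 and the
    +1 flatten the scale so weak individuals keep a selection chance."""
    return h * (1.0 - obj_x / sum_obj) ** 0.1 + 1.0

objectives = [1936, 2100, 2500, 4000]          # hypothetical population objective values
total = sum(objectives)
print([round(fitness(o, total, h=0.2), 4) for o in objectives])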

5.3 Population’s Initialization

The initial population is built from randomly generated individuals. Traditionally, a uniform probability law is used in order to cover the whole solution space (search space). The aim is to prevent premature convergence of the algorithm to local optima. The size of the population depends on the nature of the problem. Occasionally, the population may be "seeded" with solutions in areas where optimal solutions are likely to be found. The population size is chosen sufficiently large to represent the evolutionary phenomenon as well as possible, and it is roughly proportional to the problem size n (number of jobs). For reporting our computational results, the population size is set to Population_size = 50·n.
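A random initialization consistent with this description might be written as follows (a sketch: the 50·n population size comes from the text, while setting the start-time gene to 0 and adjusting it later through the properties is our assumption).

import random

# Population initialization sketch: random permutations of the n jobs.
def init_population(n, pop_size=None, seed=None):
    """Random permutations of the n jobs; the leading start-time gene is set to 0
    here and adjusted later through properties 4 and 5."""
    rng = random.Random(seed)
    pop_size = pop_size if pop_size is not None else 50 * n   # 50*n as in the text
    population = []
    for _ in range(pop_size):
        jobs = list(range(1, n + 1))
        rng.shuffle(jobs)                  # uniform random permutation
        population.append([0] + jobs)      # [start_time | job sequence]
    return population

print(init_population(n=10, pop_size=3, seed=1)[0])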

5.4 Population’s Evaluation

The evaluation of the population at every generation computes the objective value of each individual and then its fitness. It also determines the minimum objective value in the population (the best among all objective values) and saves the corresponding solution. At the last generation, this step determines the best individual in the population, which is declared the best solution obtained during the whole search process.

5.5 Selection of Individuals

The selection process determines which individuals participate in the production of new individuals (offspring) for the next generation. Individual solutions are selected through a fitness-based process, where solutions with higher fitness values are more likely to be selected for reproduction. The selection process evaluates the fitness of each solution in the whole population (or of a random sample of the population when this is too time consuming) to select the best solutions for reproduction. In our implementation, a roulette wheel method is used to select individuals, favoring those with high fitness. The random roulette function uses the following implementation (Algorithm 3):

Algorithm 3:

Roulette implementation

1.Int Function roulette();
2. Int i;
 3. Float r, s;
4.r=Rnd() * sum_fitness;
5.s=0;
6.i=0;
7. While s<r
8.  i=i+1;
9.  s=s+fitness(i);
10. End While
11. Return i;
12.End Function.

This random roulette process is applied twice whenever a pair of parents is needed for breeding.
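In Python, the same roulette rule can be written as below (a sketch mirroring Algorithm 3; the fitness values are hypothetical, and the final fallback return guards against floating-point rounding).

import random

# Roulette-wheel selection sketch: spin a point r in [0, sum_fitness) and walk
# the cumulative fitness until it passes r (same rule as Algorithm 3).
def roulette(fitnesses, rng=random):
    r = rng.random() * sum(fitnesses)
    s = 0.0
    for i, f in enumerate(fitnesses):
        s += f
        if s > r:
            return i                        # index of the selected individual
    return len(fitnesses) - 1               # guard against floating-point rounding

fitnesses = [1.18, 1.15, 1.12, 1.05]        # hypothetical fitness values
parent1, parent2 = roulette(fitnesses), roulette(fitnesses)
print(parent1, parent2)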

5.6 Crossover Operator

The crossover operator produces new offspring (child solutions) from a pair of parent solutions. For each new pair of offspring, a pair of solutions is selected from the population pool and the crossover operator is executed as follows. Given two selected parents (parent 1 and parent 2), a two-point crossover is implemented: it randomly selects two positions and exchanges the middle parts between the parents to create two new children. In the proposed implementation, the first point is chosen before the common due date d and the second point after d, for better efficiency. The crossover probability is set to pc = 0.75 for reporting our experimental results and best solutions.

When the crossover operation generates a "lethal" chromosome containing a repeated job, it is certain that another job is missing from the sequence. In such a case, a repair mechanism is initiated to insert the missing job in place of the duplicate.
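The following sketch (ours, not the authors' implementation) illustrates a two-point crossover on the job part of two chromosomes followed by the repair of a lethal child; for simplicity the cut points are drawn at random here, whereas the paper places them before and after the due date d.

import random

# Two-point crossover sketch with repair of "lethal" children (repeated jobs).
def crossover(parent1, parent2, rng=random):
    """Two-point crossover on the job part; the start-time gene (index 0) is kept
    from parent1, and the child is repaired if it contains duplicated jobs."""
    n = len(parent1) - 1                             # number of jobs
    a, b = sorted(rng.sample(range(1, n + 1), 2))    # two cut points in the job part
    child = parent1[:a] + parent2[a:b] + parent1[b:]
    return repair(child, set(parent1[1:]))

def repair(child, all_jobs):
    """Replace duplicated jobs by the missing ones, keeping first occurrences."""
    seen, missing = set(), sorted(all_jobs - set(child[1:]))
    for k in range(1, len(child)):
        if child[k] in seen:
            child[k] = missing.pop()                 # swap the duplicate for a missing job
        else:
            seen.add(child[k])
    return child

p1 = [0, 1, 2, 3, 4, 5, 6]
p2 = [0, 6, 5, 4, 3, 2, 1]
print(crossover(p1, p2, random.Random(3)))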

5.7 Mutation

The mutation operator is normally introduced to apply small modifications to the child obtained by the crossover operator. The aim is to introduce into the population new characteristics that do not exist among the parents, to ensure the diversification of the search into new regions of the solution space, hopefully of better quality. In the proposed implementation, the mutation probability at which two random jobs switch positions is set to pm = 0.01.

It should be noted that the two swapped positions are chosen randomly in a similar way to the crossover points: the first position is selected before the common due date d and the second after d. Otherwise, the mutation operator would have no effect on the solution because of property 2.
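A matching mutation sketch (ours) is given below: with probability pm, a job completing on or before d is swapped with a job completing after d; the processing times used in the example are hypothetical.

import random

# Mutation sketch: swap one early-side job with one late-side job.
def mutate(chrom, p, d, pm=0.01, rng=random):
    """With probability pm, swap a job completing on or before d with a job
    completing after d (a swap within the same side would be undone by the
    V-shape re-sorting of property 2)."""
    if rng.random() >= pm:
        return chrom
    t0, seq = chrom[0], chrom[1:]
    t, early, late = t0, [], []
    for pos, job in enumerate(seq):
        t += p[job]
        (early if t <= d else late).append(pos)
    if early and late:
        i, j = rng.choice(early), rng.choice(late)
        seq[i], seq[j] = seq[j], seq[i]
    return [t0] + seq

p = {1: 5, 2: 3, 3: 4, 4: 2, 5: 6, 6: 1}              # hypothetical processing times
print(mutate([0, 2, 6, 1, 4, 5, 3], p, d=10, pm=1.0, rng=random.Random(7)))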

5.8 Mathematical Property Usage

The mathematical properties are used to evaluate each sequence at every generation: whenever a new sequence is generated, the mean m is computed as indicated in Section 4 (property 4) to determine the starting time of the schedule. Then, the jobs in the sequence are sorted as described in property 2. Property 1 is always applied in the evaluation phase, which means there is no idle time between two jobs. Properties 3 and 5 are used mainly when there is an opportunity to start the sequence at a time > 0 (i.e. when d is large, at h=0.6 or h=0.8). These mathematical properties are applied to each sequence at the following moments:

  • After generating each sequence at the initialization phase;

  • After generating each child at the crossover phase;

  • After muting the individual at the mutation phase.

5.9 Termination Criterion

In the literature, many forms of termination criteria can be used to stop an algorithm, including executing a fixed number of iterations, reaching a prefixed computational effort, no improvement of the best solution after a given number of iterations, or finding the optimal solution (when it is known). In the current implementation, a fixed number of generations is used, with a value depending on the problem size n. The computational results are reported when this number of iterations is reached (number_of_iterations = n²).

Note that the GA parameters (population size, crossover probability) were defined by Goldberg, but within broad ranges to give the algorithm flexibility when applied to a problem. These intervals are of course inspired by the natural phenomenon underlying the meta-heuristic. In addition, the values chosen here were tested and used in related works and have proven their efficiency [12, 14].

6 Computational Results

The set of benchmarks developed by Biskup and Feldmann [7, 8] was employed to demonstrate the efficiency and effectiveness of the proposed MEGA approach. The benchmark set consists of instances of variable sizes, n=10, 20, 50, 100, 200 jobs. The MEGA results were compared to those obtained by Nearchou [20], which were found to be the best results in the literature in most cases and were therefore selected for comparison with the proposed algorithm. For each size from n=10 to n=200, there are 10 different instances, each combined with the values of the rate h=0.2, 0.4, 0.6, 0.8 that determines the common due date (i.e. 40 benchmark results for each size).

Table 1 summarizes the comparative results: the problem characteristics in column one, followed by our CPU time, the MEGA average solution over the 10 instances per size, Nearchou's average solution, the average gap per instance, and comments on the number of new best solutions obtained by the proposed approach (B), the number of equal solutions (E), and the number of worse solutions (W). The results were obtained on a PC with an Intel Core 2 Duo 2.4 GHz CPU and 4 GB RAM.

Table 1:

MEGA Results Compared with Nearchou’s Average of Best Results.

Size, instance    CPU time    MEGA        Nearchou    Gap      Comments
n=10, h=0.2       459.5 ms    1674.4      1674.4      0
n=10, h=0.4       462.6 ms    973.1       973.1       0
n=10, h=0.6       475.7 ms    734.3       734.3       0
n=10, h=0.8       483.1 ms    715.3       715.3       0        All optimal
n=20, h=0.2       1.592 s     6178.3      6178.3      0
n=20, h=0.4       1.555 s     3635        3635        0
n=20, h=0.6       1.582 s     2811.4      2811.4      0
n=20, h=0.8       1.606 s     2724.8      2724.8      0        All optimal
n=50, h=0.2       8.821 s     35,492.7    35,496.1    −3.4     (2B)
n=50, h=0.4       8.474 s     20,432.5    20,432.8    −0.3     (1B)
n=50, h=0.6       8.872 s     15,896.5    15,898.9    −2.4     (1B)
n=50, h=0.8       8.969 s     15,847.2    15,847.2    0
n=100, h=0.2      1.325 min   132,417.4   132,435.3   −17.9    (9B, 1E)
n=100, h=0.4      1.301 min   78,112.4    78,121      −8.6     (4B, 2E, 4W)
n=100, h=0.6      1.409 min   64,935.1    64,933      2.1      (2B, 2E, 6W)
n=100, h=0.8      1.481 min   64,904      64,906.5    −2.5     (3B, 6E, 1W)
n=200, h=0.2      4.302 min   512,282.9   512,319.7   −36.8    (3B, 1E, 6W)
n=200, h=0.4      4.335 min   303,374.6   303,503.1   −128.5   (7B, 3W)
n=200, h=0.6      4.528 min   256,356.3   257,208.7   −852.4   (10B)
n=200, h=0.8      4.445 min   256,255.4   257,152     −896.6   (10B)
  1. B, MEGA finds a new best solution; E, equal solution values; W, MEGA is worse than Nearchou’s.

The summary results in Table 1 show clearly that for the smaller instances with n=10 and n=20, both our MEGA approach and Nearchou's obtained the optimal solutions, which were validated using our approach. For the larger instances, Nearchou's approach becomes less effective for the larger due dates, h=0.6 and h=0.8. For n=50, n=100, and n=200, the results show 4 new best solutions; 18 new best solutions versus 12 worse solutions; and 30 new best solutions versus 9 worse solutions, respectively.

In order to show the gap, the results are reported in Figure 2.

Figure 2: Graphical Representation of Results.

Table 2 summarizes the percentages of the obtained results compared to Nearchou’s results. These results are then reported in Figure 3 to show the efficiency of our approach.

Table 2:

Summary Results.

                      Better    Equal    Worse
Number of instances   51        129      20
Percentage            25.5%     64.5%    10%
Figure 3: Graphical Representation of Results.

From these results, we note the following:

  • For n≤20, both MEGA and Nearchou’s approach give optimal solutions.

  • In both approaches, the objective value decreases as h increases: when h increases, d increases, and there are more opportunities to choose the starting time t0.

  • For n=50, MEGA's results are never worse than those of Nearchou. As with our earlier algorithm EDGA, the results match Nearchou's except for the case n=50, h=0.2, k=3, for which MEGA is better.

  • For n≥100, Nearchou's approach is more efficient than that of Feldmann when h=0.2 and h=0.4, and the opposite holds when h=0.6 and h=0.8. MEGA, however, behaves uniformly across the different values of h.

7 Conclusion and Perspectives

In this paper, a new mathematically enhanced variant of GA was proposed for solving the single-machine scheduling problem whose objective is to minimize the sum of earliness and tardiness penalty costs for deviation from a common due date. This type of problem is known and proved to be NP-hard, which is why an approximate method such as a GA must be adopted. Two new properties of the problem were identified, and their proofs were provided.

The three already existing properties remain insufficient on their own, because they only describe the structure of the optimal sequence without giving any details about its beginning time.

It should be noted that these new properties make it possible to decide whether the sequence should start at time zero or be delayed in order to minimize the early/tardy penalties against the common due date. They also allow the starting time of the optimal sequence to be computed exactly. The experimental results show that these properties contribute significantly to the solution quality and to the reduction of the time complexity of the approach; they are used especially to compute the beginning time of the optimal sequence. The results were compared to the best ones in the literature in the field of combinatorial optimization.

As future perspectives, and based on the encouraging results, we suggest studying the impact of each property alone on the required computational effort and on solution quality. Further, we will investigate the application of the proposed approach to other related scheduling problems, such as single-machine scheduling with multiple due dates, as well as parallel-machine scheduling problems.

Bibliography

[1] R. Alvarez-Valdes, E. Crespo, J. M. Tamarit and F. Villa, Minimizing weighted earliness-tardiness on a single machine with a common due date using quadratic models, TOP 20 (2012), 754–767. doi: 10.1007/s11750-010-0163-7.

[2] A. Awasthi, J. Lässig and O. Kramer, Common due-date problem: exact polynomial algorithms for a given job sequence, in: 15th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, Timisoara, Romania, 2013.

[3] K. R. Baker and G. D. Scudder, Scheduling with earliness and tardiness penalties: a review, Eur. J. Oper. Res. 160 (2005), 190–201. doi: 10.1016/j.ejor.2004.03.006.

[4] K. R. Baker and G. D. Scudder, Minimizing earliness and tardiness costs in stochastic scheduling, Eur. J. Oper. Res. 236 (2013), 445–452. doi: 10.1016/j.ejor.2013.12.011.

[5] D. Biskup, Single-machine scheduling with learning considerations, Eur. J. Oper. Res. 115 (1999), 173–178. doi: 10.1016/S0377-2217(98)00246-X.

[6] C. N. Cha, S. Lim and Y. K. Jeong, Single-machine job scheduling about a common due date with arbitrary earliness/tardiness penalties using a genetic algorithm, Asia Pacific Management Review 7 (2002), 239–254.

[7] M. Feldmann and D. Biskup, Benchmarks for scheduling on a single machine against restrictive and unrestrictive common due dates, Comput. Oper. Res. 28 (2001), 787–801. doi: 10.1016/S0305-0548(00)00008-3.

[8] M. Feldmann and D. Biskup, Single-machine scheduling for minimizing earliness and tardiness penalties by meta-heuristic approaches, Comput. Ind. Eng. 44 (2003), 307–323. doi: 10.1016/S0360-8352(02)00181-X.

[9] V. Gordon, J. M. Proth and C. Chu, Invited review: a survey of the state-of-the-art of common due date assignment and scheduling research, Eur. J. Oper. Res. 139 (2002), 1–25. doi: 10.1016/S0377-2217(01)00181-3.

[10] R. Hassin and M. Shani, Machine scheduling with earliness and tardiness and non-execution penalties, Comput. Oper. Res. 32 (2005), 683–705. doi: 10.1016/j.cor.2003.08.012.

[11] A. Hemmak and B. Bouderah, Hybrid algorithm for optimization problems applied to single machine scheduling, Int. J. Comput. Appl. 66 (2013), 7–11.

[12] A. Hemmak and B. Bouderah, Sieve algorithm – a new method for optimization problems, Int. J. Advance. Soft Comput. Appl. 5 (2013), 1–15.

[13] A. Hemmak and B. Bouderah, A mono crossover genetic algorithm for TSP, Global J. Tech. 7 (2015), 109–115.

[14] A. Hemmak and I. H. Osman, Variable parameters lengths genetic algorithm for minimizing earliness-tardiness penalties of single machine scheduling with a common due date, Electron. Notes Discrete Math. 36 (2010), 471–478. doi: 10.1016/j.endm.2010.05.060.

[15] C. M. Hino, D. P. Ronconi and A. B. Mendes, Minimizing earliness and tardiness penalties in a single machine problem with a common due date, Eur. J. Oper. Res. 160 (2005), 190–201. doi: 10.1016/j.ejor.2004.03.006.

[16] A. Jafarnejad, S. M. Abtahi and S. M. R. Davoodi, Optimizing the earliness and tardiness penalties in the single-machine scheduling problems with focus on the just in time, Int. J. Acad. Res. Bus. Soc. Sci. 3 (2013), 315–322. doi: 10.6007/IJARBSS/v3-i7/55.

[17] J. Lässig, A. Awasthi and O. Kramer, Common due-date problem: linear algorithm for a given job sequence, in: 2014 IEEE 17th International Conference on Computational Science and Engineering, University of Electronic Science and Technology of China, 2014. doi: 10.1109/CSE.2014.51.

[18] C. J. Liao and C. C. Cheng, A variable neighborhood search for minimizing single machine weighted earliness and tardiness with common due date, Comput. Ind. Eng. 52 (2007), 404–413. doi: 10.1016/j.cie.2007.01.004.

[19] S. W. Lin, S. Y. Chou and K. C. Ying, A sequential exchange approach for minimizing earliness-tardiness penalties of single machine scheduling with a common due date, Eur. J. Oper. Res. 177 (2007), 1294–1301. doi: 10.1016/j.ejor.2005.11.015.

[20] A. C. Nearchou, A differential evolution approach for the common due date early/tardy job scheduling problem, Comput. Oper. Res. 35 (2008), 1329–1343. doi: 10.1016/j.cor.2006.08.013.

[21] Z. Ning, C. Tao and L. Fei, A hybrid heuristic algorithm for the intelligent transportation scheduling problem of the BRT system, J. Intell. Syst. 24 (2015), 437–448. doi: 10.1515/jisys-2014-0134.

[22] D. P. Ronconi and M. S. Kawamura, The single machine earliness and tardiness scheduling problem: lower bounds and a branch-and-bound algorithm, Comput. Appl. Math. 29 (2010), 107–124. doi: 10.1590/S1807-03022010000200002.

[23] M. Shahriari, N. Shoja, A. E. Zade, S. Barak and M. Sharifi, JIT single machine scheduling problem with periodic preventive maintenance, J. Ind. Eng. Int. (2016), 1–12. doi: 10.1007/s40092-016-0147-9.

[24] T. Vallée and M. Yiltizogli, Présentation des algorithmes génétiques et leurs applications en économie [Presentation of genetic algorithms and their applications in economics], technical report, May 2004.

[25] K. C. Ying, Minimizing earliness-tardiness penalties for common due date single-machine scheduling problems by a recovering beam search algorithm, Comput. Ind. Eng. 55 (2008), 494–502. doi: 10.1016/j.cie.2008.01.008.

Received: 2016-5-23
Published Online: 2016-7-12
Published in Print: 2017-7-26

©2017 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
