
Computers & Electrical Engineering

Volume 50, February 2016, Pages 143-165

An unfair semi-greedy real-time multiprocessor scheduling algorithm

https://doi.org/10.1016/j.compeleceng.2015.07.003

Abstract

Most real-time multiprocessor scheduling algorithms that achieve optimal processor utilization adhere to the fairness rule. Accordingly, tasks are executed in proportion to their utilizations at each time quantum or at the end of each time slice in a fluid schedule model. Obeying the fairness rule results in a large number of scheduling overheads, which affect the practicality of the algorithm. This paper presents a new algorithm for scheduling independent real-time tasks on multiprocessors, which produces very few scheduling overheads while maintaining high schedulability. The algorithm is designed by totally relaxing the fairness rule and adopting a new semi-greedy criterion instead. Simulations have shown promising results: the scheduling overheads generated by the proposed algorithm are significantly fewer than those generated by state-of-the-art algorithms. Although the proposed algorithm sometimes misses a few deadlines, these are sufficiently few to be tolerated in view of the considerable reduction achieved in the scheduling overheads.

Introduction

Real-time systems maintain their correctness by producing output results within specific time constraints called deadlines [1]. The deadlines of a given real-time taskset cannot be met without the use of an optimal scheduling algorithm unless some constraints are imposed. An optimal scheduling algorithm, with regard to a system and a task model, can be defined as one which can successfully schedule all of the tasks of any schedulable taskset without missing any deadline [2], [3], [4].

Optimal real-time multiprocessor scheduling algorithms achieve full processor utilization, i.e. a utilization equal to the number of processors in the system. Most of these algorithms achieve optimality by adhering to the fairness rule completely or partially. Under the fairness rule, tasks are forced to make progress in their executions in proportion to their utilizations. An example of an algorithm that strictly follows the fairness rule is P-fair [5], which forces all tasks to advance their executions in proportion to their utilizations at each time quantum. DP (Deadline Partitioning) algorithms such as LLREF (Largest Local Remaining Executions First), LRE-TL (Largest Remaining Execution-Time and Local time domain) and DP-Wrap (Deadline Partitioning-Wrap) [3], [6], [7] partially follow the fairness rule by forcing tasks to make progress in their executions in proportion to their utilizations at the end of each TL-plane (time slice) in a fluid schedule model, where the end of each TL-plane corresponds to the deadline of a task in the system. Although adhering to the fairness rule ensures optimality, it produces a large number of scheduling overheads in terms of task preemptions and migrations, which adversely affect the practicality of the algorithm [2], [7] because the processors will be busy executing the scheduler itself rather than executing the actual work [2]. In fact, the empirical study in [8] confirmed that preemption and migration delays could be as high as 1 ms on a multiprocessor system with 24 cores running at 2.13 GHz and three levels of cache memory. Therefore, a real-time multiprocessor scheduling algorithm should keep the scheduling overheads low in order to be practically implementable.
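As a rough illustration of what the fairness rule demands (a sketch with a hypothetical taskset, not one taken from the paper), the Python fragment below computes the fluid-schedule share u_i · Δt that each task must receive in an interval of length Δt. Because every task must be touched in every time slice, each slice boundary preempts tasks that, in terms of urgency, could safely have waited much longer.

```python
# Sketch: execution shares demanded by the fairness (fluid schedule) rule.
# The (execution time e, period p) pairs below are hypothetical.
tasks = {"T1": (2, 5), "T2": (1, 16), "T3": (3, 10)}

def fluid_share(tasks, t_start, t_end):
    """Execution each task must receive in [t_start, t_end) to stay fair,
    i.e. in proportion to its utilization u = e / p."""
    length = t_end - t_start
    return {name: (e / p) * length for name, (e, p) in tasks.items()}

# In the slice [0, 5), even T2 (which could safely wait 15 time units)
# must be granted 5/16 of a unit of execution and then be preempted.
print(fluid_share(tasks, 0, 5))
```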

To further explain the problem of following the fairness rule, consider the taskset shown in Table 1 [6] to be scheduled on a system of 4 processors. In DP algorithms, such as LLREF, LRE-TL, and DP-Wrap, the fairness rule is always ensured at the deadlines of tasks; they divide the time into TL-planes, i.e. time slices as mentioned previously, which are bounded by two successive deadlines, and the end of each TL-plane corresponds to the deadline of a task in the system. Hence, tasks are marshalled in the intervals [0, 5), [5, 7), [7, 10), [10, 14), [14, 15), [15, 16), [16, 17), [17, 19), [19, 20), [20, 21), [21, 25), [25, 26), [26, 28), and [28, 29), which correspond to the first 14 TL-planes, after which all tasks will have finished at least one period of their executions. This means that at the beginning of each TL-plane, all tasks have to be allocated local executions proportional to their utilizations and marshalled until the end of the time slice, at which point they must all be preempted. This results in numerous preemptions as well as migrations. For example, although task T2 has a worst-case execution requirement of 1 and a period of 16, it is forced to make progress in its execution in each TL-plane even though it could wait for 15 units of time before it becomes critical. The same holds for task T5 (worst-case execution requirement of 2 and period of 26), which could wait for 24 units of time before it becomes critical but is nevertheless forced to make progress in its execution in each TL-plane. Consequently, task T2 will be preempted 6 times before it reaches its deadline, and similarly, task T5 will be preempted 11 times before it reaches its deadline.
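The TL-plane boundaries themselves are simply the distinct job deadlines in the interval of interest, and within each plane every task is allocated a local execution budget proportional to its utilization. The sketch below illustrates this computation for deadline partitioning in general; the taskset is hypothetical rather than the one in Table 1, and the code is not the paper's.

```python
# Sketch of deadline partitioning (DP): TL-plane boundaries are the distinct
# job deadlines, and each task gets u_i * |TL-plane| of execution per plane.
# Hypothetical taskset: (e_i, p_i) pairs with implicit deadlines (d_i = p_i).
tasks = [(2, 5), (1, 16), (3, 7), (4, 10), (2, 26)]

horizon = 29
boundaries = sorted({k * p for _, p in tasks for k in range(1, horizon // p + 1)})

prev = 0
for b in boundaries:
    length = b - prev
    budgets = [round((e / p) * length, 3) for e, p in tasks]
    print(f"TL-plane [{prev}, {b}): local budgets {budgets}")
    prev = b
```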

In this paper, we present an efficient global real-time multiprocessor scheduling algorithm, namely, USG (Unfair Semi-Greedy). It is “Unfair” because we have totally relaxed the fairness rule, and it is “Semi-Greedy” because we have employed two policies: the Non-Preemptability policy, to avoid the problem of greedy schedulers as well as to reduce the scheduling overheads, and the Zero-Laxity policy, to maintain the criticality of the system as well as to increase the schedulability of the algorithm.
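The precise rules of the two policies are given in Section 4. As a rough conceptual sketch (ours, not the paper's pseudocode): the laxity of a job at time t is the slack it can still afford before it must run continuously, and the Zero-Laxity policy dispatches a job as soon as that slack is gone, while the Non-Preemptability policy otherwise leaves an executing job on its processor.

```python
# Conceptual sketch only; USG's actual procedures are specified in Section 4.
def laxity(t, deadline, remaining):
    """Slack of a job at time t: how long it can still wait and meet its deadline."""
    return (deadline - t) - remaining

def must_dispatch_now(t, deadline, remaining):
    """Zero-Laxity idea: a job whose slack has run out must occupy a processor
    immediately (under USG such a job may preempt a running job; see Section 4)."""
    return laxity(t, deadline, remaining) <= 0

# The Non-Preemptability idea, by contrast, keeps a job that is already executing
# on its processor in ordinary situations, which is what cuts down preemptions
# and migrations relative to fair schedulers.
```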

The remainder of this paper is organized as follows. Section 2 briefly reviews related studies. Section 3 describes the task model and defines the terms that will be used in this paper. Section 4 presents the proposed algorithm and illustrates its underlying mechanism with examples. Section 5 analyses the deadline misses under the proposed algorithm. Section 6 discusses the run time analysis of the proposed algorithm. Section 7 presents and discusses the results obtained using the proposed algorithm. Finally, Section 8 states the conclusions.


Related work

LLF (Least Laxity First) [9], initially introduced as the least slack algorithm, is a fully dynamic scheduling algorithm, i.e. the priorities of jobs change dynamically according to their laxity which in turn changes over time. Although this dynamicity of LLF can increase its schedulability, it has a negative impact because it generates a large number of preemptions and migrations, which adversely affect its practicality. Therefore, LLF has not attracted much research attention even though its

Model and term definitions

In this paper, we consider the problem of scheduling n independent periodic tasks with implicit deadlines (deadlines equal to periods, i.e. di = pi) on a platform of m symmetric SMPs (Shared-Memory Multiprocessors). In real-time systems, a periodic task is one that is released periodically at a constant rate. Usually, two parameters are used to describe a periodic task Ti: its worst-case execution time ei and its period pi. An instance of a periodic task (i.e. release) is known as a job and is

The proposed algorithm

The key concept underlying the proposed algorithm is the total relaxation of the fairness rule in order to avoid a large number of scheduling overheads in the form of task preemptions and migrations. However, totally relaxing the fairness rule leads to greedy schedulers, which fail to schedule some tasksets, as explained in [7]. A greedy scheduler is one in which a job is executed according to a specific priority, e.g. according to the job’s deadline or laxity. EDF and LLF are well-known examples of such greedy schedulers.

Deadline misses under USG

In this section, we show how and when USG can miss deadlines and fail to schedule tasksets.

Theorem 1

Let T = {T1, T2, …, Tn} be a schedulable taskset, i.e. U ≤ m and Umax ≤ 1. Then, USG can fail to schedule such a taskset iff the number of tasks that reach zero laxity at the same time is greater than m.
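The intuition behind this condition (our paraphrase; the formal argument is given in the proof) is that a job with zero laxity must execute without interruption until its deadline, while at most m jobs can execute at any instant. Writing ℓ_i(t) for the laxity and c_i(t) for the remaining execution of T_i at time t:

```latex
% A job T_i with zero laxity at time t has remaining execution equal to the
% time left before its deadline, so it must run throughout all of [t, d_i):
\ell_i(t) = (d_i - t) - c_i(t) = 0 \;\Longrightarrow\; c_i(t) = d_i - t .
% If k > m jobs reach zero laxity at the same instant, at most m of them can be
% executing at any point, so at least k - m jobs acquire negative laxity and
% miss their deadlines.
```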

Proof

Suppose that task Tx missed its deadline at time t = x; then, the following conditions hold for task Tx:

  • Tx has zero laxity at time t = x.

  • Tx has been preempted by another task Ti that also reaches zero laxity at the same time.

The complexity of USG

The complexity, i.e. the running time analysis, of USG depends on the event handler procedure being called. In the following subsections, we discuss the running time of each of the event handler procedures in USG. Table 4 summarizes the complexity of the USG procedures.
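Table 4 itself is not reproduced in this preview. As a rough sketch of the event-driven structure that such a per-handler analysis presupposes (the handler names and data structures below are our assumptions, not USG's actual procedures), a global scheduler of this kind invokes one handler per event type, and the complexity bounds refer to the per-event cost, e.g. O(log n) per ready-queue operation in the sketch below.

```python
import heapq
from itertools import count

# Hypothetical skeleton of an event-driven global scheduler; illustrative only.
class EventDrivenScheduler:
    def __init__(self, m):
        self.m = m                # number of processors
        self.ready = []           # min-heap of (priority, seq, job) for waiting jobs
        self.running = {}         # processor id -> job
        self._seq = count()       # tie-breaker so jobs never need to be compared

    def on_job_release(self, priority, job):
        # O(log n) insertion into the ready queue, then try to place the job.
        heapq.heappush(self.ready, (priority, next(self._seq), job))
        self.dispatch()

    def on_job_completion(self, cpu):
        # The processor becomes idle; pull the next ready job, if any.
        self.running.pop(cpu, None)
        self.dispatch()

    def dispatch(self):
        # Fill every idle processor with the highest-priority waiting job.
        for cpu in range(self.m):
            if cpu not in self.running and self.ready:
                self.running[cpu] = heapq.heappop(self.ready)[2]
```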

Results and discussion

To test the performance of USG in terms of scheduling overheads as well as schedulability, we used a standard procedure for generating the real-time tasksets. This procedure has been used in many recent works [2], [7], [19]. The data (tasksets) has to be generated randomly, as it is generally not easy to obtain real-time data for multiprocessors, especially data with hard real-time constraints. This is attributable to at least three factors. First, hard real-time systems are normally targeted

Conclusion

This paper presented an efficient real-time multiprocessor scheduling algorithm for reducing the scheduling overheads, in terms of the number of task preemptions and migrations, while maintaining high levels of schedulability. The key concept underlying the algorithm is the total relaxation of the fairness rule in order to reduce scheduling overheads. Even though adhering to the fairness rule ensures the optimality of an algorithm, the resulting overheads have a significant impact on the algorithm’s practicality. The proposed

Hitham Alhussian received his BSc and MSc in Computer Science from the School of Mathematical Sciences, Khartoum University, Sudan. He obtained his PhD from Universiti Teknologi Petronas, Malaysia. Currently, he is a Postdoctoral Researcher in the High-performance Computing Center at Universiti Teknologi Petronas. His main research interests include real-time systems and parallel and distributed systems.

References (30)

  • S. Funk et al., DP-fair: a unifying theory for optimal hard real-time multiprocessor scheduling, Real-Time Syst (2011)
  • A. Bastoni, B.B. Brandenburg, J.H. Anderson, An empirical comparison of global, partitioned, and clustered multiprocessor...
  • J.Y.-T. Leung, A new algorithm for scheduling periodic, real-time tasks, Algorithmica (1989)
  • J. Lee et al., Laxity dynamics and LLF schedulability analysis on multiprocessor platforms, Real-Time Syst (2012)
  • S.K. Lee, On-line multiprocessor scheduling algorithms for real-time tasks. In: IEEE region 10's ninth annual...

Nordin Zakaria obtained his PhD from Universiti Sains Malaysia in 2007, working in the field of computer graphics. Since then, his areas of research have been diverse, encompassing high-performance computing, quantum computing, and motion capture and visualization. He is currently heading the High-performance Computing Center at Universiti Teknologi Petronas.

Ahmed Patel received MSc and PhD degrees from Trinity College, Dublin, specializing in packet-switched networks. Currently, he is a Full Professor at Jazan University, Saudi Arabia. His research covers networking, security, forensic computing, and distributed systems. He has authored 260 publications and co-authored several books. He is a member of the editorial advisory boards of international journals and has participated in Irish, Malaysian, and European funded research projects.

Reviews processed and recommended for publication to the Editor-in-Chief by Guest Editor Dr. Yingpeng Sang.
