Scheduling Algorithms


Known scheduling algorithms include Round Robin Scheduling, Priority-Based Scheduling, Weighted Scheduling, Rate Monotonic Scheduling, Feedback Scheduling, ...

To clear up some of the confusion, and to help you choose a scheduling algorithm, here is a simple classification:

The trick with any scheduling algorithm is that it must fulfill a number of criteria:

  • no task must be starved of resources - all tasks must get their chance at CPU time;
  • if using priorities, a low-priority task must not hold up a high-priority task;
  • the scheduler must scale well with a growing number of tasks (ideally, being O(1), i.e. taking constant time no matter how many tasks are queued - yes, this has been done, e.g. in the Linux kernel by Ingo Molnar).

An Example (Priority-Based Round Robin)

Imagine you have a scheduler which is responsible for managing 10 queues of runnable processes. A runnable process is appended to the tail of the queue corresponding to its priority. Say 1 is the highest priority and 10 the lowest.

The scheduler then checks each of the ten ready queues for runnable processes (e.g. if (queue[priority].head != NULL)), picks the first runnable process it finds (so the higher the priority, the earlier a process gets its turn of CPU time) and hands it to the task switching mechanism.

Now imagine the following: each queue in the list of queues is itself handled round-robin. The currently running process, taken from the head of its queue, has a certain share of time. Each time the timer IRQ fires, this share is decremented by one. When the share reaches zero, the timer ISR invokes the round-robin scheduler, which moves the process from the head of the queue to its tail and renews its share of time. The next process then gets the CPU for its share of time.

The task of scheduling is usually not concentrated in one single function; rather, several functions work together.
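As a rough sketch (in C, using hypothetical names such as struct process, enqueue(), pick_next() and timer_tick(); a real kernel's data structures and task switching code will differ), these cooperating functions might look like this:

#define NUM_PRIORITIES 10
#define TIME_SLICE     10   /* timer ticks per share of time */

struct process {
    struct process *next;
    int priority;      /* 1 = highest, 10 = lowest */
    int ticks_left;    /* remaining share of time; start at TIME_SLICE */
    /* ... saved registers, state, ... */
};

/* One FIFO queue of runnable processes per priority level. */
static struct process *queue_head[NUM_PRIORITIES + 1];
static struct process *queue_tail[NUM_PRIORITIES + 1];
static struct process *current;

/* Stick a runnable process to the tail of the queue for its priority. */
static void enqueue(struct process *p)
{
    p->next = NULL;
    if (queue_tail[p->priority])
        queue_tail[p->priority]->next = p;
    else
        queue_head[p->priority] = p;
    queue_tail[p->priority] = p;
}

/* Scan the queues from highest to lowest priority and pop the first
   runnable process found. */
static struct process *pick_next(void)
{
    for (int prio = 1; prio <= NUM_PRIORITIES; prio++) {
        struct process *p = queue_head[prio];
        if (p) {
            queue_head[prio] = p->next;
            if (!queue_head[prio])
                queue_tail[prio] = NULL;
            return p;
        }
    }
    return NULL;   /* nothing runnable: run an idle task instead */
}

/* Called from the timer ISR: burn one tick of the running process'
   share; when it is used up, put the process back at the tail of its
   queue with a fresh share and switch to the next one. */
void timer_tick(void)
{
    if (current && --current->ticks_left > 0)
        return;
    if (current) {
        current->ticks_left = TIME_SLICE;
        enqueue(current);
    }
    current = pick_next();
    /* switch_to(current);  -- hand over to the task switching mechanism */
}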

Let's have a look at the round-robin scheduler with three processes in the queue: A B C:

A(time 0) B(time 10) C(time 10)  A's time slice is zero: let's do round robin scheduling:
B(time 10) C(time 10) A(time 10) ... one clock tick occurs ... the next one ...
B(time 8) C(time 10) A(time 10)  ... several clock ticks occur ... b's time slice is worn out ...
C(time 10) A(time 10) B(time 10) ... ten clock ticks later ...
A(time 10) B(time 10) C(time 10) ... now A has its share of CPU time again.

Real-Time Scheduling Algorithms

Real-Time Scheduling Algorithms are a special class of algorithms which are required to be able to guarantee that a process will be done before its deadline. These algorithms can only work if they at least know the deadline of each process and how much CPU time the process needs per run. Only if the system is not overloaded (a subjective term) can the threads be guaranteed to finish before their deadlines.

Each task has to be scheduled Xt times a second, or every Yt milliseconds (Yt = 1000 / Xt). Each run of that task takes at most Zt milliseconds. This task then creates a load of Lt = Zt / Yt.

The system as a whole has a load L, which is the sum of all task loads: L = Σ Lt. If the system load exceeds 0.7 (in some rare cases it can be slightly larger, but we don't count those), the system is unschedulable using Rate Monotonic Scheduling. If the system load exceeds 1.0, it is unschedulable for any real-time scheduling algorithm. Note that for normal (non-real-time) systems any load is possible, including extremely large ones; they will just make the system nearly unusable.
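As a small worked example (with hypothetical numbers): a task that must run 50 times a second (every 20 ms) and needs at most 4 ms per run has a load of 4 / 20 = 0.2; three such tasks load the system to 0.6, which is still schedulable under Rate Monotonic Scheduling. A simple admission check, sketched in C, might look like this:

/* A periodic real-time task: released every period_ms milliseconds,
   needing at most runtime_ms milliseconds per run. */
struct rt_task {
    unsigned period_ms;    /* Yt */
    unsigned runtime_ms;   /* Zt */
};

/* Total system load L = sum of Zt / Yt over all tasks. Returns 1 if
   the task set stays at or below the given bound (0.7 for Rate
   Monotonic, 1.0 for EDF), 0 otherwise. */
int is_schedulable(const struct rt_task *tasks, int n, double bound)
{
    double load = 0.0;
    for (int i = 0; i < n; i++)
        load += (double)tasks[i].runtime_ms / tasks[i].period_ms;
    return load <= bound;
}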

Rate Monotonic Scheduling

Rate Monotonic Scheduling is a way to schedule real-time threads in such a way that it can be guaranteed that none of the threads will ever miss its deadline. The load of the system can vary, but in order to guarantee the deadlines, the load may not go above 0.7, i.e. 70% CPU time usage.

RMS works by assigning each task a priority based on its interval. The task with the shortest interval gets the highest priority and the task with the longest interval gets the lowest (still real-time) priority. The tasks are then run much like a prioritized, preemptive Round-Robin (see the example above): whichever runnable task has the highest priority runs, and if a task with a higher priority becomes runnable, it preempts the currently running one.

If your system is based on a Round-Robin scheduler, this is the easiest way to do real-time scheduling. However, because of the 70% limit it is not the most efficient algorithm.
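A minimal sketch of the priority assignment (in C, extending the hypothetical struct rt_task from the load check above with a priority field; a real kernel would redo this whenever tasks are admitted or removed): sort the tasks by period, then give the shortest period the highest priority.

#include <stdlib.h>

struct rt_task {
    unsigned period_ms;    /* Yt: interval between releases */
    unsigned runtime_ms;   /* Zt: worst-case execution time per run */
    int      priority;     /* filled in below; 1 = highest */
};

/* qsort comparator: shorter period sorts first. */
static int by_period(const void *a, const void *b)
{
    const struct rt_task *x = a, *y = b;
    return (x->period_ms > y->period_ms) - (x->period_ms < y->period_ms);
}

/* Rate Monotonic assignment: the task with the shortest period gets
   the highest priority, the task with the longest period the lowest
   (still real-time) priority. */
void assign_rms_priorities(struct rt_task *tasks, int n)
{
    qsort(tasks, n, sizeof(*tasks), by_period);
    for (int i = 0; i < n; i++)
        tasks[i].priority = i + 1;
}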

Earliest Deadline First

Each task in an EDF scheduler is assigned a _deadline_ (i.e. a moment in the future at which the task _must_ be completed). Every time a task is inserted into the system or completed, the scheduler looks for the task with the closest deadline and selects it for execution. In order to ensure that the scheduler is still able to meet each deadline, a monitor must check whether each new task would overload the system, and deny it execution if it would.

In order to implement an EDF-based system, one has to know both the _deadline_ of each task (which could optionally be computed as "no more than X ms in the future") and the expected time needed to perform the task (required by the monitor). QoS network routers usually implement variants of EDF scheduling.
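A minimal sketch of the selection step (in C, with hypothetical names; a real implementation would usually keep the ready tasks ordered by deadline, e.g. in a heap, instead of scanning a list):

#include <stddef.h>
#include <stdint.h>

struct edf_task {
    struct edf_task *next;
    uint64_t deadline;       /* absolute time by which it must finish */
    uint64_t runtime_left;   /* expected remaining execution time */
};

static struct edf_task *ready_list;   /* unsorted list of ready tasks */

/* Earliest Deadline First: return the ready task with the closest
   deadline. Called whenever a task is inserted into the system or
   the current task completes. */
struct edf_task *edf_pick_next(void)
{
    struct edf_task *best = NULL;
    for (struct edf_task *t = ready_list; t != NULL; t = t->next)
        if (best == NULL || t->deadline < best->deadline)
            best = t;
    return best;
}

The admission monitor can reuse the load check from the sketch above, this time with a bound of 1.0.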

Related Articles

External Links