In this lesson, we want to continue a bit with the static scheduling approach, but we also want to extend it to include priorities in the scheduling. By giving tasks priorities, we can say which tasks should be preferred over the others and which tasks should get more CPU time.

In the static schedulers we have presented so far, we need an exact offset between the tasks. This means that we need to update the timer expiration point at each scheduling point. To avoid this, it is also possible to generate interrupts with a regular period, and this is called tick scheduling. The chosen period is static and is called the OS tick, and it can be, for example, one millisecond. If you have a set of tasks like this, they can only be scheduled at the regular ticks, which are generated by the OS timer. Of course, a task cannot be scheduled between two ticks, and this makes tick scheduling less accurate than using exact offsets. Compared to the completely static schedule, a ready task might not be scheduled as soon as it becomes ready, because of the tick latency. The task must then wait for the timer to trigger the tick at the next scheduling point, and this can cause a delay of, for example, one millisecond in the worst case. It is, however, a popular way of creating dynamic schedules, and a form of tick scheduling is used in practice in FreeRTOS.

Instead of blindly looking at the clock to determine scheduling points, time sharing is a way of creating a more flexible system. Time sharing means dividing the CPU time into small pieces and giving one piece to each task. Think of the system as a FIFO queue: tasks are added to the queue when they are created and removed when they are selected for execution. The tasks are scheduled with the round-robin algorithm, meaning that we pick the first task in the queue, execute it for a while, and then put it back last in the queue.

Let's say we have a task, T1, added to the queue. As no other task exists, it is added at the front. Then task T2 is created and is added after T1. When a job is placed for execution, it is removed from the queue and put on the CPU. T1 is thus removed from the front of the queue, and T2 is next in line for execution. After this, tasks T3 and T4 are also added to the queue. Then, after a while, T1 completes. A job always executes for at most a fixed time before being interrupted, and if the job completes within this time, it is simply removed from the CPU. The next job, T2, is then scheduled. If we now consider that T2 is not completed within the time slice, a timer interrupts the execution. T2 is then stopped and put last in the queue again, and the job first in the queue is selected for execution, in this case T3.

If we have n ready jobs in the system, each job gets exactly one time slice every n time slices. Therefore, the response time of scheduling a task depends on the queue length, but not on the execution times. Scheduling with complete fairness is relatively easy, but a consequence is that there is no notion of priority in the system. An important task will get the same amount of CPU time as a low-priority task, and this is something we usually don't want.
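To make the mechanics concrete, here is a minimal, self-contained sketch of round-robin time sharing in C. It is not FreeRTOS code: the queue helpers, the slice length, and the remaining execution times are made-up values for illustration, and each loop iteration stands in for one time slice of CPU time.

```c
/* Round-robin time sharing (sketch, not FreeRTOS code).
 * Ready tasks sit in a FIFO queue; the scheduler repeatedly takes the
 * first task, runs it for at most one time slice, and puts it back last
 * in the queue if it is not finished. Execution times are made up. */
#include <stdio.h>

#define SLICE     2              /* time slice, e.g. two 1 ms OS ticks */
#define MAX_TASKS 8

typedef struct { const char *name; int remaining; } task_t;

static task_t *queue[MAX_TASKS]; /* FIFO ready queue */
static int head = 0, count = 0;

static void enqueue(task_t *t) { /* add last in the queue */
    queue[(head + count) % MAX_TASKS] = t;
    count++;
}

static task_t *dequeue(void) {   /* take the first task in the queue */
    task_t *t = queue[head];
    head = (head + 1) % MAX_TASKS;
    count--;
    return t;
}

int main(void) {
    task_t t1 = {"T1", 3}, t2 = {"T2", 5}, t3 = {"T3", 4}, t4 = {"T4", 1};
    enqueue(&t1); enqueue(&t2); enqueue(&t3); enqueue(&t4);  /* creation order */

    int now = 0;
    while (count > 0) {
        task_t *t = dequeue();                   /* first in line runs */
        int run = t->remaining < SLICE ? t->remaining : SLICE;
        printf("t=%2d..%2d: %s runs\n", now, now + run, t->name);
        now += run;
        t->remaining -= run;
        if (t->remaining > 0)
            enqueue(t);                          /* preempted: back to last */
        else
            printf("          %s completes\n", t->name);
    }
    return 0;
}
```

Running it prints a schedule in the same spirit as the walkthrough above: each task runs for at most one slice, and a task that is not finished goes back to the end of the queue.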
The round-robin algorithm can be improved by adding weights to the tasks. A task's weight tells the relative amount of CPU time its jobs will get, and by adjusting the weights, a simple form of priority can be implemented. Here we have three tasks, and we have chosen to give weight 1 to task T3, weight 2 to task T4, and weight 3 to task T1. The timer interrupt then depends on the weights we added: a task with a higher weight gets more execution time before it is interrupted by the timer, as sketched at the end of this lesson.

As we learned here, there are ways of creating a more flexible system than a purely static schedule. One such approach is time sharing. We also saw that adding weights to tasks can be used as a simple form of priorities in round-robin systems. In other lessons, we will develop this concept of prioritizing tasks further.
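As a companion to the earlier sketch, one simple way to realize the weights, assumed here purely for illustration, is to let the weight scale the time slice, so a task with weight 3 runs three base slices before the timer puts it back in the queue. The weights follow the example (T1: 3, T4: 2, T3: 1); the execution times are again made up.

```c
/* Weighted round robin (sketch): the slice a task gets before the timer
 * interrupts it is scaled by its weight, so per round the tasks share the
 * CPU roughly in proportion 3:2:1. Weights follow the example above
 * (T1: 3, T4: 2, T3: 1); execution times are made-up values. */
#include <stdio.h>

#define BASE_SLICE 1             /* base slice, e.g. one 1 ms OS tick */
#define MAX_TASKS  8

typedef struct { const char *name; int weight; int remaining; } task_t;

static task_t *queue[MAX_TASKS]; /* FIFO ready queue */
static int head = 0, count = 0;

static void enqueue(task_t *t) { queue[(head + count) % MAX_TASKS] = t; count++; }
static task_t *dequeue(void) {
    task_t *t = queue[head];
    head = (head + 1) % MAX_TASKS;
    count--;
    return t;
}

int main(void) {
    task_t t1 = {"T1", 3, 6}, t4 = {"T4", 2, 4}, t3 = {"T3", 1, 3};
    enqueue(&t1); enqueue(&t4); enqueue(&t3);

    int now = 0;
    while (count > 0) {
        task_t *t = dequeue();
        int slice = t->weight * BASE_SLICE;      /* weight scales the slice */
        int run = t->remaining < slice ? t->remaining : slice;
        printf("t=%2d..%2d: %s runs (weight %d)\n", now, now + run, t->name, t->weight);
        now += run;
        t->remaining -= run;
        if (t->remaining > 0)
            enqueue(t);                          /* not finished: back to last */
    }
    return 0;
}
```

In the first round this gives T1 three time units, T4 two, and T3 one, which matches the relative shares the weights describe.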