Transitioning from bare-metal embedded software development to a real-time operating system (RTOS) can be a difficult endeavor. Many developers struggle with the question of whether they should use an RTOS or simply use a bare-metal scheduler. One of the goals of this series is to walk developers through the transition and decision-making process of abandoning bare-metal thinking and getting up to speed quickly with RTOSes. Before diving into the details of RTOSes, the appropriate first step is to review some of the most popular scheduler techniques available to embedded software developers.


Technique #1 – Round Robin

The easiest scheduler to implement in software is the Round Robin scheduling technique. Round Robin is so easy to implement because it is a natural way of writing software. Developers simply add new code to the super loop and a task is now added. There is no need to create dynamic, memory-expensive objects. Just add code!

One of the biggest problems with a Round Robin approach to scheduling is that the real-time performance of the system is affected every time new code is added. The super loop might be executing once per 100 microseconds, but add new code and suddenly it's executing once per 110 microseconds. The piece of code that was developed to execute with perfect timing is now broken and has to be reworked. The fact of the matter is that using just a Round Robin approach results in constant rework to ensure system timing, and it can be very difficult to get predictable results for hard real-time requirements.

A simple example of how a Round Robin scheduler works can be seen in Figure 1.

Figure 1 - Round Robin Scheduling

Technique #2 – Round Robin with Interrupts

Bare-metal developers never allowed the issues with Round Robin to get them down. There are dozens of incremental techniques that can be used as a scheduler before having to roll out the big gun (RTOS). The next algorithm that developers have found quite useful over the years is Round Robin scheduling with interrupts. In these applications, any code that has hard real-time requirements or is timing sensitive is moved into an interrupt service routine. An example might be incoming UART data that, if left too long, could be overwritten by the next incoming data byte. The timing-critical work, retrieving the data, is then handled as soon as the interrupt fires by moving the data byte into a circular buffer. Processing the data byte is usually a soft real-time requirement and can be handled by the super loop when it gets around to it.

The interrupt behavior can even be used to create pre-emptive behavior that for simple applications can mimic an RTOS. Careful selection of interrupt priorities and architecting of the round robin loop can go a long way towards real-time and reliable behavior before needing to move on to other more advanced scheduling techniques. Figure 2 shows an example of what Round Robin with interrupts would look like from a model point of view. On the left is the traditional Round Robin super loop. At any point, an interrupt could fire, causing code execution to move from the left to the right and enter the interrupt service routine (ISR). The interrupt would then execute to completion, unless a higher-priority interrupt fires, and then return to where it left off on the left side of the diagram.

Figure 2 - Round Robin with Interrupts

Technique #3 – Queued

Queued scheduling is an interesting technique that is an intermediate step between Round Robin with Interrupts and Cooperative scheduling. In most cases it makes sense to just use a Cooperative Scheduler, but it is worth taking a few moments to discuss Queued scheduling. Before discussing the details, examine Figure 3 for a moment.

Figure 3 – Queue Scheduling Behavior

In Queued scheduling, an array of function pointers (queue) is created that initially contains only NULL pointers (Figure 3a). When a task needs to be executed, perhaps triggered by an interrupt, a pointer is inserted into the first NULL location. It is possible for multiple tasks to become ready for execution at the same time. When this occurs, the scheduler simply executes one task at a time (Figure 3b) in a Round Robin fashion until the queue is once again empty (Figure 3c and Figure 3d). Once a task has been executed, its function pointer location in the queue is returned to a NULL pointer.

Technique #4 – Cooperative

Cooperative schedulers are probably one of the most widely used bare-metal scheduling algorithms available to developers. A Cooperative scheduler resides in a very small memory footprint and can be easily configured to handle any number of tasks. A Cooperative scheduler consists of a simple structure that contains a function pointer to the task to be executed, the period at which the task needs to be executed and the last time the task was executed. Developers will often create an array of tasks that contains all of this critical task information.

The algorithm itself is pretty simple and straightforward. The system is first initialized and then a background timer is used to keep track of time. The scheduler reads the system tick, then loops through each of the tasks defined for the system and calculates whether it is time to execute the task. If so, the task is executed; otherwise, the next task is checked. When more than one task needs to run, they are executed in a Round Robin fashion. Developers still need to take care to monitor their system execution times to make sure that during the worst case, when all tasks have to execute, all of the deadlines are still met.

Developers interested in learning more about Cooperative schedulers can download an example here. The scheduler can be run with any microcontroller and only requires that a single timer be set up to get the scheduler working.

Figure 4 – Cooperative Scheduler (High Level)

Technique #5 – Real-time Operating System

The last technique that we are going to touch on is the RTOS. The RTOS is the most powerful scheduler a real-time developer can use and also the most complicated. The RTOS can usually be configured to use a number of deterministic scheduling algorithms that guarantee task deadlines are met. Each task has its own separate task control block that contains its own stack, function and ID among other parameters. Each task can literally be viewed as its own separate application. The RTOS also comes with a wide variety of synchronization and communication tools such as semaphores, mutexes and message queues.

RTOSes are a powerful tool for embedded software developers but can sometimes be overkill. For simple 8-bit or 16-bit applications a Cooperative scheduler may work perfectly fine and use far less memory. When developers are faced with using a complex 32-bit microcontroller with more than 32 kB of flash and 4 kB of RAM, the use of an RTOS begins to make a lot of sense. Many 32-bit applications require the use of USB, TCP/IP and file systems, which can be very difficult to develop for bare-metal applications. Most third-party middleware is designed to integrate seamlessly with an RTOS.

Conclusions

To use an RTOS or not use an RTOS, that is the real question. Each scheduling algorithm has its own advantages and disadvantages. We have examined the most commonly used scheduling algorithms, which form a basis for making such a determination. The next step is to learn when and why to use an RTOS. Stay tuned!



