Embedded Basics – 7 Tips for Managing RTOS Memory Performance and Usage

There are two excuses I typically hear from developers for why they refuse to use an RTOS:
• An RTOS memory footprint is too big
• The RTOS has too much overhead

These excuses might have had some merit five years ago, but today they carry little weight. A typical RTOS will load the CPU by less than 4%, require less than 16 KB of flash, and use less than 4 KB of RAM. In most cases, performance and memory issues come from how the developer is using the RTOS and from gaps in their knowledge of how to properly use and configure it. Below are seven tips that developers can follow to optimize their RTOS application's memory usage.

Tip #1 – Worst-Case Stack Analysis on Every Task

One of the biggest memory wasters is the memory allocated for each task's stack. By default, most RTOSes will allocate a kilobyte for the task stack, which holds local variables, data structures, and function call return addresses. The problem with the default size is that developers who are new to using an RTOS will often not examine each task and properly size its stack. A task that only blinks a few LEDs and does nothing else will often be given a kilobyte of stack even though 64 bytes would be more than enough. Failing to examine each task and properly size its stack can cause far more RAM to be used than the application actually requires.
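A practical way to right-size a stack is to start generously, measure the worst case, and then trim. The sketch below is a minimal example using FreeRTOS-style calls (xTaskCreate and uxTaskGetStackHighWaterMark); the task name, stack depth, and blink behavior are illustrative assumptions, and other RTOSes provide equivalent high-water-mark facilities under different names.

/* Create a small LED blink task and check its worst-case stack usage with the
 * stack high-water mark (FreeRTOS-style sketch; sizes are illustrative). */
#include "FreeRTOS.h"
#include "task.h"

static void vBlinkTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;)
    {
        /* Toggle an LED here (board specific). */
        vTaskDelay(pdMS_TO_TICKS(500));

        /* Smallest amount of stack (in words) that has ever been left free. */
        UBaseType_t uxHighWater = uxTaskGetStackHighWaterMark(NULL);
        (void)uxHighWater;  /* Log or watch this value, then shrink the stack toward it. */
    }
}

void vCreateBlinkTask(void)
{
    /* Start with a generous depth, measure, then trim toward the observed worst case. */
    xTaskCreate(vBlinkTask, "Blink", 128, NULL, tskIDLE_PRIORITY + 1, NULL);
}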

Tip #2 – Avoid Excessive Stack Usage

Since every task has a stack, the task stacks become a huge contributor to the RAM that is necessary to run the application. When developers design and implement their tasks, they should attempt to minimize stack usage. This can be done by:
• Avoiding recursive functions
• Minimizing function calls
• Avoiding large local data structures
Developers need to not just write code but carefully consider the memory and performance implications of every variable, data structure, and function call. Avoiding excessive stack usage allows developers to shrink each task's stack and save RAM.
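As a small illustration, the snippet below shows how a large local buffer lands on the calling task's stack, and how moving it into static storage keeps the stack small; the buffer name and size are assumptions made for the example, and the static version is no longer reentrant.

#include <stdint.h>

#define SAMPLE_BUFFER_SIZE 512u   /* Illustrative size */

/* Stack-hungry: the 512-byte buffer lives on the calling task's stack. */
void vProcessSamplesOnStack(void)
{
    uint8_t buffer[SAMPLE_BUFFER_SIZE];
    /* ... fill and process buffer ... */
    (void)buffer;
}

/* Leaner: the buffer lives in static storage, so the task stack only needs room
 * for the call itself (the trade-off is that the function is not reentrant). */
void vProcessSamplesStatic(void)
{
    static uint8_t buffer[SAMPLE_BUFFER_SIZE];
    /* ... fill and process buffer ... */
    (void)buffer;
}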

Tip #3 – Use Memory Block Pools

A big issue that developers often encounter when developing an RTOS-based application is the need to allocate memory dynamically. The problem with dynamic memory allocation is that memory is usually allocated from a heap, which behaves like a byte pool. Heaps and byte pools have a number of disadvantages:
• They can fragment
• Memory allocation is non-deterministic
A block pool, on the other hand, hands out fixed-size blocks that can be allocated deterministically and will not fragment. For developers who need to allocate memory dynamically, a block pool is a better choice than a heap or byte pool.
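Many RTOSes provide block pools directly; the sketch below is a minimal, RTOS-agnostic fixed-block pool built on a free list so the idea is visible. The block size, block count, and function names are illustrative, and a real implementation would protect block_alloc and block_free with a critical section or mutex.

#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE   32u    /* Illustrative: every allocation returns exactly 32 bytes */
#define BLOCK_COUNT  16u

typedef union Block {
    union Block *next;          /* Used while the block sits on the free list */
    uint8_t data[BLOCK_SIZE];   /* Used while the block is allocated */
} Block;

static Block pool[BLOCK_COUNT];
static Block *free_list;

void block_pool_init(void)
{
    /* Chain every block onto the free list. */
    for (size_t i = 0u; i < BLOCK_COUNT - 1u; i++) {
        pool[i].next = &pool[i + 1u];
    }
    pool[BLOCK_COUNT - 1u].next = NULL;
    free_list = &pool[0];
}

void *block_alloc(void)
{
    /* Deterministic: pop the head of the free list, or return NULL if empty. */
    Block *block = free_list;
    if (block != NULL) {
        free_list = block->next;
    }
    return block;
}

void block_free(void *ptr)
{
    /* Push the block back onto the free list; fragmentation is impossible
     * because every block is the same size. */
    Block *block = (Block *)ptr;
    block->next = free_list;
    free_list = block;
}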

Tip #4 – Minimize RTOS Objects

An RTOS can help developers break up their application into reusable, semi-independent programs that use RTOS objects such as semaphores, mutexes, and message queues to communicate and synchronize task execution. Each RTOS object has a control block associated with it that uses a small amount of memory. In very resource-constrained applications, or if developers overuse these objects, more memory can be consumed than is actually necessary. For that reason, developers should carefully design their RTOS application and try to minimize how many RTOS objects are used.
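One way to keep the cost of each object in front of you is to allocate the control blocks statically so the RAM shows up in the map file. The FreeRTOS-style sketch below does this for a binary semaphore and a small queue; it assumes configSUPPORT_STATIC_ALLOCATION is enabled, and the queue length and item size are illustrative.

#include <stdint.h>
#include "FreeRTOS.h"
#include "semphr.h"
#include "queue.h"

/* Every object costs RAM before it is ever used: each needs a control block,
 * and a queue also needs storage for its items. */
static StaticSemaphore_t xButtonSemBuffer;     /* sizeof(StaticSemaphore_t) bytes */
static StaticQueue_t     xSensorQueueBuffer;   /* sizeof(StaticQueue_t) bytes */
static uint8_t           ucSensorQueueStorage[8 * sizeof(uint32_t)];

void vCreateObjects(void)
{
    SemaphoreHandle_t xButtonSem = xSemaphoreCreateBinaryStatic(&xButtonSemBuffer);

    QueueHandle_t xSensorQueue =
        xQueueCreateStatic(8, sizeof(uint32_t), ucSensorQueueStorage, &xSensorQueueBuffer);

    (void)xButtonSem;
    (void)xSensorQueue;
}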

Tip #5 – Consider using Event Flags instead of Semaphores

RTOS features vary from one RTOS to the next, but in the several different RTOSes that the author uses, using event flags instead of semaphores can result in a slightly smaller footprint. A semaphore carries not only a control block but also the code that performs semaphore operations such as giving and taking the semaphore. In general, this code tends to be slower and use more memory than an event flag. An event flag is really nothing more than a memory location in which every bit represents an event, such as a button press or a fresh temperature sensor sample.
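As a hedged illustration, the FreeRTOS-style sketch below uses a single event group in which each bit stands for one event, in place of two separate binary semaphores; the bit assignments and the waiting logic are assumptions made for the example.

#include "FreeRTOS.h"
#include "event_groups.h"

/* Each bit in the event group stands for one event (illustrative assignments). */
#define EVT_BUTTON_PRESSED   (1u << 0)
#define EVT_TEMP_SAMPLED     (1u << 1)

static EventGroupHandle_t xSystemEvents;

void vEventsInit(void)
{
    xSystemEvents = xEventGroupCreate();
}

/* Producer side: signal that the button was pressed. */
void vOnButtonPress(void)
{
    xEventGroupSetBits(xSystemEvents, EVT_BUTTON_PRESSED);
}

/* Consumer side: block until either event occurs, then handle whichever bits fired. */
void vWaitForEvents(void)
{
    EventBits_t uxBits = xEventGroupWaitBits(xSystemEvents,
                                             EVT_BUTTON_PRESSED | EVT_TEMP_SAMPLED,
                                             pdTRUE,   /* Clear bits on exit */
                                             pdFALSE,  /* Wait for any bit, not all */
                                             portMAX_DELAY);
    if (uxBits & EVT_BUTTON_PRESSED) {
        /* Handle the button press. */
    }
    if (uxBits & EVT_TEMP_SAMPLED) {
        /* Handle the new temperature sample. */
    }
}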

Tip #6 – Minimize Task Priority Levels

Real-time operating systems allow a developer to choose the range of priority levels that tasks can be assigned. For example, the default for many systems is 0 to 31. In some circumstances, defaults can range from 0 to 128 or even 0 to 1024. In general, having fewer task priority levels improves performance and decreases memory usage. Developers should try to keep the priority range at 0 to 31 unless there is a good reason to set it otherwise.
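In FreeRTOS, for example, the number of priority levels is set with configMAX_PRIORITIES in the configuration header, and the kernel keeps one ready list per priority level, so fewer levels means less RAM and less scheduler bookkeeping; the value below is only an example.

/* FreeRTOSConfig.h (excerpt) – illustrative value only. */
#define configMAX_PRIORITIES    8   /* One ready list is allocated per priority level */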

Tip #7 – Optimize the RTOS Configuration File

RTOSes typically have a configuration file that allows a developer to fine-tune the RTOS behavior. The configuration file lets a developer set features such as the default stack size and the number of available priority levels, along with which synchronization objects will be included in the build. In many circumstances, modifying the configuration file can give developers a smaller RTOS footprint and even improved performance, depending on the configuration options that are available. Make sure that you examine the RTOS configuration file and understand every option that is available.
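As a hedged example, a FreeRTOS-style configuration header exposes options like the ones below; the values shown are illustrative, and other RTOSes use different option names for the same ideas.

/* FreeRTOSConfig.h (excerpt) – illustrative values, tune for your application. */
#define configMINIMAL_STACK_SIZE        128          /* Idle/minimum stack depth, in words */
#define configTOTAL_HEAP_SIZE           (4 * 1024)   /* Kernel heap size, in bytes */
#define configUSE_MUTEXES               1            /* Include mutex support */
#define configUSE_COUNTING_SEMAPHORES   0            /* Leave out features you do not use */
#define configUSE_TIMERS                0
#define configUSE_TRACE_FACILITY        0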

Conclusion
An RTOS, if used improperly, can cause an application's memory footprint to balloon to unusable levels. In many circumstances, high memory usage is due to the way a developer is using the RTOS and is not indicative of the RTOS itself. In this post, we've examined several tips that developers can follow to help minimize their RTOS application footprints.
