Managing memory in a real-time system can be challenging. There are many aspects to consider, such as code-space management, RAM management, and how memory optimizations affect performance. Below are seven general tips that will help real-time developers start to manage their memory.
Tip #1 – Avoid malloc
In a real-time system that requires deterministic timing, using malloc to allocate memory on the fly is a bad idea. First, the typical malloc implementation is not deterministic, which means there is no guarantee how long it will take to allocate memory, if it is even able to do so at all. Many real-time issues arise from using malloc, such as:
• Heap fragmentation
• Failure to allocate memory
• Non-deterministic behavior
Don’t tempt fate; just avoid malloc. We will discuss a few alternatives in this article.
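As a simple illustration of the alternative, the sketch below replaces a hypothetical `malloc(sizeof(Message))` call with a compile-time table of message slots. The `Message` type, slot count, and function names are all illustrative assumptions, not a real API; the point is that allocation becomes a bounded loop with a visible failure mode instead of an unbounded heap search.

```c
#include <stddef.h>

/* Hypothetical message type, used only for illustration. */
typedef struct {
    unsigned char payload[32];
    size_t        length;
} Message;

/* Instead of calling malloc at run-time, reserve the worst-case number
 * of messages at compile time. Allocation is then a bounded, deterministic
 * scan of a fixed table -- no heap, no fragmentation. */
#define MAX_MESSAGES 8
static Message msg_slots[MAX_MESSAGES];
static int     msg_in_use[MAX_MESSAGES];

Message *message_alloc(void)
{
    for (int i = 0; i < MAX_MESSAGES; i++) {   /* bounded loop */
        if (!msg_in_use[i]) {
            msg_in_use[i] = 1;
            return &msg_slots[i];
        }
    }
    return NULL;  /* out of slots: an explicit, testable failure mode */
}

void message_free(Message *m)
{
    msg_in_use[m - msg_slots] = 0;
}
```

When the table is exhausted the caller gets a NULL it must handle, which is far easier to reason about at design time than a heap that fragments slowly over weeks of operation.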
Tip #2 – Use Memory Byte Pools for Task Stack Allocation ONLY
RTOSes usually contain several mechanisms for developers to allocate memory, typically byte memory pools and block memory pools. Byte memory pools behave very similarly to a heap and allocate memory much like malloc. Some implementations are deterministic, but the potential for fragmentation remains. For these reasons, it is highly recommended that developers use byte pools only to allocate memory at the start of the application, such as for buffers or task stacks.
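The init-only discipline can be sketched with a minimal byte-pool-style arena. Real RTOS byte pools (ThreadX's `tx_byte_allocate`, for example) are more sophisticated, so treat the names and sizes below as assumptions for illustration: memory is carved out once during start-up and never freed, so fragmentation simply cannot occur.

```c
#include <stddef.h>
#include <stdint.h>

/* A minimal byte-pool-style arena, used ONLY during system start-up to
 * carve out task stacks and buffers. Nothing is ever freed, so there is
 * nothing to fragment. Sizes are illustrative, not from a real RTOS. */
#define POOL_SIZE 4096
static _Alignas(8) uint8_t pool[POOL_SIZE];
static size_t pool_next = 0;

void *pool_alloc(size_t bytes)
{
    /* Round the request up to 8-byte alignment, as stacks typically need. */
    size_t size = (bytes + 7u) & ~(size_t)7u;
    if (pool_next + size > POOL_SIZE)
        return NULL;            /* pool exhausted at init: fail loudly */
    void *p = &pool[pool_next];
    pool_next += size;
    return p;
}
```

Because every allocation happens before the scheduler starts, an exhausted pool shows up immediately at boot rather than as a rare run-time failure.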
Tip #3 – Use Memory Block Pools for Dynamic Memory Allocation
There are times when developers can’t get away with statically allocating all their memory. The application may not know ahead of time how much memory is needed, or allocating all the memory up front may require more RAM than is available on the microcontroller. We have already established that we don’t want to use malloc or byte pools, so what is a developer to do in these circumstances? The answer is to use a block memory pool. A block memory pool allocates memory in fixed-size blocks, unlike a byte memory pool, which allocates with single-byte granularity. The algorithms for block memory pools are deterministic and fast! So if you need to dynamically allocate memory, use a block memory pool. (Most RTOSes have them.)
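To show why block pools are deterministic, here is a minimal sketch of one. The block size and count are illustrative assumptions; the essential property is that because every block is the same size, both allocate and free are O(1) linked-list operations, and fragmentation is impossible by construction.

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal fixed-block pool: all blocks are the same size, so allocate
 * and free are O(1) free-list pushes and pops -- deterministic and
 * fragmentation-free by construction. Sizes are illustrative. */
#define BLOCK_SIZE  64   /* must be at least the size of a pointer */
#define BLOCK_COUNT 16

typedef union block {
    union block *next;               /* used while the block is free */
    uint8_t      data[BLOCK_SIZE];   /* used while the block is allocated */
} Block;

static Block  blocks[BLOCK_COUNT];
static Block *free_list = NULL;

void block_pool_init(void)
{
    free_list = NULL;
    for (int i = BLOCK_COUNT - 1; i >= 0; i--) {
        blocks[i].next = free_list;  /* thread every block onto the list */
        free_list = &blocks[i];
    }
}

void *block_alloc(void)
{
    Block *b = free_list;
    if (b != NULL)
        free_list = b->next;         /* O(1): pop the head */
    return b;
}

void block_free(void *p)
{
    Block *b = (Block *)p;
    b->next = free_list;             /* O(1): push onto the head */
    free_list = b;
}
```

A production RTOS adds thread safety and optional blocking on an empty pool, but the constant-time core is exactly this.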
Tip #4 – Statically Allocate Memory
I have been dancing around the issue a bit, but now let’s just come out and say it: the best practice a real-time developer can follow is to statically allocate memory. Statically allocating memory means that all memory allocation is performed at compile time rather than at run-time. This is the safest way to ensure determinism and to guarantee that there won’t be memory fragmentation issues. When a developer can’t allocate memory at compile time, as is the case with some RTOSes that allocate task control blocks dynamically, try to perform all dynamic memory allocation during system initialization. Allocating memory at start-up gives the appearance that the memory was statically allocated.
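In practice, static allocation looks like the sketch below: stacks and queue storage declared at file scope so the linker reserves the RAM up front. The names and sizes are illustrative, not any specific RTOS's, and the `_Static_assert` shows how to keep the static RAM budget honest at compile time.

```c
#include <stdint.h>

/* Everything below is sized at compile time: the linker reserves the RAM,
 * so there is nothing to fragment and no allocation that can fail at
 * run-time. Structures and sizes are illustrative, not a specific RTOS's. */
#define STACK_WORDS 256

static uint32_t sensor_task_stack[STACK_WORDS];  /* 1 KB, reserved by linker */
static uint32_t comms_task_stack[STACK_WORDS];   /* 1 KB, reserved by linker */

typedef struct {
    uint8_t payload[16];
} QueueItem;

#define QUEUE_DEPTH 8
static QueueItem rx_queue_storage[QUEUE_DEPTH];  /* 128 B of queue storage */

/* A compile-time check keeps the static RAM budget honest: if someone
 * resizes the stack, the build breaks instead of the product. */
_Static_assert(sizeof(sensor_task_stack) == 1024, "sensor stack budget changed");
```

At task-creation time these arrays are simply handed to the RTOS, so the map file shows exactly where every byte lives.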
Tip #5 – Minimize RTOS Object Use
Every object created through the RTOS, such as a task, semaphore, or message queue, has a control block associated with it. The control block is essentially a structure that holds the various parameters the object needs to perform its function. Developers working in resource-constrained environments will want to minimize how many of these objects they use in their application. RTOS objects can very quickly consume a significant amount of RAM if a developer doesn’t closely monitor their code.
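A quick back-of-the-envelope tally makes the cost concrete. The control-block layouts below are invented stand-ins (real RTOS control blocks vary in size), but the arithmetic is the exercise worth doing for your own RTOS using its headers and `sizeof`.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative control-block layouts -- real RTOS control blocks differ,
 * but every object the application creates carries a structure like
 * these in RAM, before any stacks or queue storage are counted. */
typedef struct { uint32_t fields[32]; } TaskControlBlock;      /* 128 B */
typedef struct { uint32_t fields[8];  } SemaphoreControlBlock; /*  32 B */
typedef struct { uint32_t fields[19]; } QueueControlBlock;     /*  76 B */

/* Tally the fixed RAM cost of the RTOS objects an application creates. */
size_t rtos_object_ram(size_t tasks, size_t semaphores, size_t queues)
{
    return tasks      * sizeof(TaskControlBlock)
         + semaphores * sizeof(SemaphoreControlBlock)
         + queues     * sizeof(QueueControlBlock);
}
```

With these assumed sizes, ten tasks, four semaphores, and three queues already cost over 1.5 KB of RAM in control blocks alone, which is why trimming unnecessary objects pays off quickly on a small microcontroller.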
Tip #6 – Change the compiler’s default optimization settings
Handling memory during run-time isn’t the only memory management issue a developer will encounter. Sometimes developers need to optimize RAM and ROM usage in order to minimize the BOM cost of the microcontroller they use. In many cases, compilers such as GCC do not apply the best optimization settings by default, and the resulting code is often bloated and slow. Never rely on the default compiler settings. Review the compiler manual for optimizations and settings that can be used to adjust RAM and ROM size. Alternatively, developers may want to find development tools that can do this for them, such as Somnium’s DRT product.
Tip #7 – Monitor the Memory Map File
A great way to track where ROM and RAM are going is to review the memory map file generated by the toolchain. This file tells developers the code size of each function along with how much memory is being allocated for their variables. Map files differ slightly from one tool to the next, so developers need to open the file and browse it manually to determine where their memory is being used. Alternatively, developers could write a Python script that reads the file and provides mechanisms for viewing where optimizations and code reworking would be most effective.
Real-time embedded software developers often struggle with managing their system’s memory. By not tracking where memory is going, they can quickly run out of code space or run into run-time issues related to heap fragmentation. The tips we’ve examined in this article seem simple, but by following them developers can not only better manage their memory footprint, they can also save themselves the headache of debugging a system that is teetering on the edge of a memory disaster.