Microcontrollers have limited amounts of memory available, so it is important to understand how this memory is partitioned and used. There are two main types of memory in microcontrollers – code memory and data memory. Code memory stores the executable instructions for the program running on the microcontroller. Data memory stores variables and data used by the program during runtime. Understanding the difference between code and data memory partitioning is key to writing efficient programs for microcontrollers.
Code Memory
Code memory, also known as program memory, stores the compiled program as binary instructions. It is non-volatile, meaning the code is retained even when power is removed from the microcontroller. Code memory is written when the device is programmed (or during a firmware update) and is read-only during normal operation. Here are some key things to know about code memory in microcontrollers:
- Stored in non-volatile flash or ROM memory
- Read-only during normal operation
- Stores instructions as binary code
- Size ranges from 4KB to 2MB depending on microcontroller model
- Executes instructions sequentially (or with jumps)
- Optimized for sequential reads of code
When the microcontroller is powered on or reset, it starts executing the program from the beginning of the code memory. The instructions are read sequentially until a jump or branch operation redirects execution flow. Code memory provides fast, reliable reads to fetch instructions.
However, there are some downsides to large code memory:
- Increased microcontroller cost for larger sizes
- Higher power consumption for unused memory
- Potentially longer fetch latency if larger, slower memory is required
Therefore, it is best to optimize the program so that it uses only the code memory it needs. Unused memory results in higher cost and power use. The size of code memory can range from just a few kilobytes up to a couple of megabytes for larger microcontrollers.
Data Memory
Data memory, or RAM, is used to store variables and data during program execution. Unlike code memory, data memory is volatile, meaning the contents are lost when power is removed. Data memory provides both read and write access for the program. Here are some key characteristics of data memory:
- Volatile RAM memory
- Provides both read and write access
- Stores program variables and data
- Sizes range from hundreds of bytes to hundreds of kilobytes
- Optimized for frequent variable access
Data memory holds the program stack, global/static variables, and heap memory for dynamic allocation (the CPU registers themselves live in a separate register file). While executing, the program reads and writes data memory to manipulate variables. Data memory supports random access: variables can be read or written freely in any order, which makes it well suited to frequent accesses to many different variables.
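The placement of variables described above can be sketched in C. This is a host-runnable illustration, not firmware for any particular part; the variable names are made up, and the section comments describe the typical placement a toolchain would choose.

```c
#include <stdint.h>
#include <stdlib.h>

uint32_t counter = 42;       /* initialized global: .data section in RAM */
uint32_t total;              /* zero-initialized global: .bss section in RAM */
const uint32_t limit = 100;  /* const data: may remain in flash (code memory) */

uint32_t accumulate(uint32_t x) {
    uint32_t local = x * 2;  /* local variable: lives on the stack */
    total += local;          /* read-modify-write of a RAM location */
    return total;
}

uint32_t *make_buffer(size_t n) {
    return malloc(n * sizeof(uint32_t));  /* dynamic storage: heap in RAM */
}
```

On an embedded target, each of these storage classes consumes a different memory budget, which is why the optimization tips later in this article distinguish between them.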
However, there are also downsides to large data memory sizes:
- Increased microcontroller cost
- Higher power consumption
- Variables initialized at start-up may delay the boot process
Data memory is typically much smaller than code memory, ranging from a few hundred bytes on small parts to tens or hundreds of kilobytes on larger ones. It’s important to optimize variable usage to avoid waste and make the best use of the available data memory.
Harvard Architecture
Most microcontrollers use a Harvard architecture, which physically separates code and data memory into different address spaces. This allows both memories to be accessed simultaneously, providing greater throughput: the CPU fetches instructions from code memory while reading or writing data memory in parallel.
Some advantages of Harvard architecture include:
- Code and data can be accessed in parallel
- No contention between instruction fetches and data access
- Allows for dual bus architecture
- Instruction fetches can have higher priority
- Modifications to code do not affect data
The physical separation provides high performance by eliminating contention between the CPU and data accesses. This parallelism reduces stalls and improves overall throughput. Additionally, code memory and data memory can utilize different memory technologies that are optimized for their specific uses.
Memory Mapping
To give the CPU core orderly access to the memories, microcontrollers use a memory mapping scheme that places code and data in different address regions. For example:
- Code memory: 0x0000 – 0x1FFF
- Data memory: 0x2000 – 0x2FFF
The CPU core issues an address when it needs to fetch an instruction or access a variable. The memory mapper circuitry decodes the address and routes the request to either code or data memory as needed. This lets the core access both memories seamlessly through a single, uniform address space.
Memory mapping also provides the ability to access external memory like RAM or flash. Smaller micros may only have internal memory, while larger ones allow external memory mapping:
- Internal code memory: 0x0000 – 0x3FFF
- External code memory: 0x4000 – 0x7FFF
- Internal data memory: 0x8000 – 0x8FFF
- External data memory: 0x9000 – 0x9FFF
This allows microcontrollers to exceed their physical memory limits by accessing external memory through the mapper.
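The mapper's decode logic can be modeled in C. The sketch below assumes a hypothetical non-overlapping map (the region bounds are illustrative, not taken from any real part): the address is compared against each region's bounds and the request routed accordingly.

```c
#include <stdint.h>

typedef enum { INT_CODE, EXT_CODE, INT_DATA, EXT_DATA, UNMAPPED } region_t;

/* Decode an address the way the mapper circuitry would: compare it
   against each region's bounds and route the request accordingly. */
region_t decode_address(uint32_t addr) {
    if (addr <= 0x3FFFu)                    return INT_CODE;  /* internal code */
    if (addr <= 0x7FFFu)                    return EXT_CODE;  /* external code */
    if (addr >= 0x8000u && addr <= 0x8FFFu) return INT_DATA;  /* internal data */
    if (addr >= 0x9000u && addr <= 0x9FFFu) return EXT_DATA;  /* external data */
    return UNMAPPED;  /* no memory responds: typically a bus fault */
}
```

In hardware this comparison is done by address decoders in a single cycle; the C function just makes the routing rule explicit.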
Accessing Memory
Microcontroller code uses special instructions to access each memory type. For code memory, the program counter register points to the next instruction to fetch. This increments automatically each cycle to sequence through the code linearly or with jumps. To access data memory, special load/store instructions reference variables by calculated addresses or labels. Common instructions include:
- LDR – Load register from data memory
- STR – Store register to data memory
- PUSH – Push onto stack in data memory
- POP – Pop from stack in data memory
- MOVT – Move an immediate into the top half of a register (often used to build 32-bit addresses)
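The semantics of these load/store and stack operations can be modeled in C. This is a host-side sketch with a small array standing in for on-chip RAM; the function names are made up, and the stack grows downward as on most microcontrollers.

```c
#include <stdint.h>

static uint8_t data_mem[64];   /* stands in for on-chip data RAM */
static uint8_t sp = 64;        /* descending stack pointer into data_mem */

/* A store instruction (STR) writes a register value to a data address. */
void store_byte(uint8_t addr, uint8_t v) { data_mem[addr] = v; }

/* A load instruction (LDR) reads a data address into a register. */
uint8_t load_byte(uint8_t addr) { return data_mem[addr]; }

/* PUSH decrements the stack pointer, then stores the value there. */
void push_byte(uint8_t v) { data_mem[--sp] = v; }

/* POP loads the value, then increments the stack pointer. */
uint8_t pop_byte(void) { return data_mem[sp++]; }
```

Note the last-in, first-out behavior of the stack operations, which is what makes nested function calls and returns work.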
In a Harvard device these accesses proceed in parallel: while the CPU core fetches the next instruction from code space, data memory can be read or written without contention, maximizing throughput.
Code Memory Optimization
Since microcontroller code memory is limited, it is important to optimize program code size. Here are some tips for reducing code size:
- Eliminate unused code and features
- Inline only small, frequently called functions (aggressive inlining can grow code)
- Use pointers instead of array indexing where the compiler produces smaller code
- Factor repeated code into shared functions
- Use compiler optimization for smaller code
- Avoid function pointers which require a lookup table
Additionally, many microcontrollers support compressed instruction sets (such as ARM Thumb) that encode the shortest, most frequent operations as 16-bit opcodes rather than 32-bit, reducing overall storage requirements. Careful use of these resources minimizes wasted code memory.
Data Memory Optimization
Similarly, optimizing data memory usage is also important. Techniques for optimizing data memory include:
- Allocate variables statically rather than dynamically when possible
- Use minimal variable sizes (like uint8 rather than int)
- Reduce variable scope and lifetime so storage can be reused
- Reuse variables for multiple purposes instead of new variables
- Initialize variables at their point of use rather than as initialized globals
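Several of these techniques appear together in the following C sketch (buffer size and names are illustrative): a single statically allocated buffer is reused across processing passes, and each variable uses the smallest integer type that can hold its value.

```c
#include <stdint.h>
#include <stddef.h>

#define SCRATCH_LEN 16

/* One statically allocated buffer reused across processing steps. */
static uint8_t scratch[SCRATCH_LEN];   /* uint8_t, not int: a quarter the RAM */

/* uint16_t is enough to sum 16 bytes (maximum 16 * 255 = 4080). */
uint16_t checksum(const uint8_t *p, size_t n) {
    uint16_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += p[i];
    return sum;
}

/* Fill the shared buffer with data, then reuse it for the checksum pass. */
uint16_t demo_pass(void) {
    for (size_t i = 0; i < SCRATCH_LEN; i++)
        scratch[i] = (uint8_t)i;
    return checksum(scratch, SCRATCH_LEN);
}
```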
Stack space can also be reduced by:
- Limiting function call depth
- Passing function parameters in registers instead of on the stack
- Moving large buffers from the stack to static storage
Lastly, reducing RAM used for initialized global variables decreases startup time after reset or power-on. The microcontroller copies initialized variables from flash into RAM (and zeroes the rest) on startup, so minimizing this data allows a faster boot.
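That reset-time copy can be sketched in C. On a real part the linker supplies the section addresses; in this host-side illustration, plain arrays stand in for the flash image and the RAM regions.

```c
#include <stdint.h>
#include <string.h>

const uint8_t flash_data_image[4] = {1, 2, 3, 4};  /* .data load image in flash */
uint8_t ram_data[4] = {9, 9, 9, 9};                /* .data destination in RAM  */
uint8_t ram_bss[4]  = {9, 9, 9, 9};                /* .bss, garbage before init */

/* Reset-time initialization: copy .data from flash, zero .bss. The
   larger these regions, the longer this runs before main() starts. */
void startup_init(void) {
    memcpy(ram_data, flash_data_image, sizeof ram_data);
    memset(ram_bss, 0, sizeof ram_bss);
}
```

This is why trimming initialized globals, or deferring initialization to the point of use, directly shortens the boot sequence.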
Split Memory Models
Some microcontroller architectures, including many ARM cores, instead implement a unified (von Neumann) memory model in which code and data share a single memory rather than occupying physically separate ones. In this case the Harvard separation does not apply, and contention can occur when an instruction fetch and a data access target the same memory simultaneously.
However, a unified memory allows more flexibility in partitioning: different mixes of code versus data can be configured, rather than being fixed by separate physical memories. The split is normally set at build time based on the program’s needs, with the linker file defining which address ranges hold instructions and which hold data. Some microcontrollers also allow the partitioning to be reconfigured at runtime.
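The build-time split can be seen in a linker file. Below is a hypothetical GNU ld fragment; the region names, origins, and sizes are illustrative, not from any specific device.

```ld
/* Hypothetical memory layout: the code/data split is fixed here at link time. */
MEMORY
{
  FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 256K  /* code + const data */
  RAM   (rwx) : ORIGIN = 0x20000000, LENGTH = 64K   /* variables, stack  */
}

SECTIONS
{
  .text : { *(.text*) } > FLASH            /* instructions                  */
  .data : { *(.data*) } > RAM AT> FLASH    /* runs in RAM, loaded from flash */
  .bss  : { *(.bss*)  } > RAM              /* zero-initialized variables    */
}
```

Changing the `LENGTH` values repartitions the memory without touching the application source.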
Overlaying Code Memory
To reduce internal code memory requirements, microcontrollers often support overlaying sections of code memory. Infrequently used code is stored externally and loaded into internal memory only when needed for execution. For example:
- Main code stored internally
- Initialization routines stored externally
- Error handling routines stored externally
The linker file defines the overlay regions, and an overlay manager copies each segment into the shared internal region before it executes. Because different code segments are overlaid into the same physical memory, this technique greatly reduces the internal memory required.
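A minimal overlay manager can be sketched in C. This host-side model (segment names and placeholder bytes are invented) shows two externally stored segments sharing one internal region, with a loader that copies a segment in only when it is not already resident.

```c
#include <stdint.h>
#include <string.h>

#define OVERLAY_LEN 8

/* Segments kept in external storage (the bytes are placeholders). */
const uint8_t ext_init_seg[OVERLAY_LEN]  = {0x10, 0x11, 0x12, 0, 0, 0, 0, 0};
const uint8_t ext_error_seg[OVERLAY_LEN] = {0x20, 0x21, 0x22, 0, 0, 0, 0, 0};

uint8_t overlay_region[OVERLAY_LEN];  /* shared internal memory region */
int resident = -1;                    /* which segment is currently loaded */

/* Copy a segment into the shared region only if it is not already there. */
void load_overlay(int id) {
    if (id == resident)
        return;
    memcpy(overlay_region,
           id == 0 ? ext_init_seg : ext_error_seg,
           OVERLAY_LEN);
    resident = id;
}
```

The residency check matters: reloading a segment that is already in place would waste the very bandwidth the overlay scheme is trying to save.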
Memory Protection
To improve robustness, some microcontrollers provide memory protection mechanisms. These protect reserved code or data regions from being overwritten incorrectly. Access violations trigger exceptions to prevent corruption. Two main forms of memory protection are:
- MPU – Memory Protection Unit partitions memory into protected regions with assigned permissions and access rules.
- MMU – Memory Management Unit provides virtual address spaces, dynamic allocation, and enforceable permissions.
MPUs are simpler and lower overhead, while MMUs provide more advanced memory access control capabilities. Memory protection prevents bugs from inadvertently corrupting code and data in operation.
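The region check an MPU performs can be modeled in software. The C sketch below is a simplified model (real MPUs are configured through device-specific registers, and the struct and function names here are invented): each region has a base, a size, and permission bits, and any access that matches no region, or violates its permissions, would fault on real hardware.

```c
#include <stdint.h>
#include <stdbool.h>

/* Each region has a base address, a size, and permission bits. */
typedef struct {
    uint32_t base;
    uint32_t size;
    bool     writable;
} mpu_region_t;

/* Check an access the way an MPU would: find the region containing the
   address and apply its permissions; anything else is a fault. */
bool access_allowed(const mpu_region_t *regions, int count,
                    uint32_t addr, bool is_write) {
    for (int i = 0; i < count; i++) {
        const mpu_region_t *r = &regions[i];
        if (addr >= r->base && addr - r->base < r->size)
            return is_write ? r->writable : true;
    }
    return false;  /* unmapped access: raises an exception on real parts */
}

/* Example map: read-only flash at 0x0000, writable RAM at 0x2000. */
bool demo_checks(void) {
    const mpu_region_t map[2] = {
        { 0x0000, 0x1000, false },
        { 0x2000, 0x0100, true  },
    };
    return  access_allowed(map, 2, 0x0010, false)   /* read flash: ok  */
        && !access_allowed(map, 2, 0x0010, true)    /* write flash: no */
        &&  access_allowed(map, 2, 0x2010, true)    /* write RAM: ok   */
        && !access_allowed(map, 2, 0x5000, false);  /* unmapped: no    */
}
```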
Caching and Prefetching
To maximize performance, many microcontrollers utilize caching mechanisms to reduce the average latency of memory accesses. This takes advantage of locality principles to cache frequently used code and data in faster memory. Prefetching techniques also proactively request instructions and data in advance before they are actually needed. Caching and prefetching hide memory latency and improve throughput. Examples include:
- Instruction cache – caches recent code fetches
- Data cache – caches data reads/writes
- Branch target buffer – caches branch destinations
- Instruction prefetch – fetches ahead in code memory
These mechanisms improve performance when memory access patterns have high locality, but they add silicon cost and complexity, so simpler microcontrollers often omit them.
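The locality principle behind caching can be demonstrated with a toy direct-mapped cache model in C (line count and line size here are arbitrary choices for illustration): each address maps to exactly one line, and a lookup hits when the tag stored for that line matches.

```c
#include <stdint.h>
#include <stdbool.h>

#define CACHE_LINES 8
#define LINE_BYTES  4

/* One tag per line; a lookup hits when the stored tag matches. */
typedef struct { bool valid; uint32_t tag; } cache_line_t;
static cache_line_t icache[CACHE_LINES];

/* Model one access: return true on a hit, fill the line on a miss. */
bool cache_access(uint32_t addr) {
    uint32_t line  = addr / LINE_BYTES;     /* which memory line        */
    uint32_t index = line % CACHE_LINES;    /* which cache slot it maps to */
    uint32_t tag   = line / CACHE_LINES;    /* identifies the line uniquely */
    if (icache[index].valid && icache[index].tag == tag)
        return true;               /* hit: served from fast memory  */
    icache[index].valid = true;    /* miss: fetch and fill the line */
    icache[index].tag   = tag;
    return false;
}
```

Repeated accesses to nearby addresses hit after the first miss, which is exactly the locality that makes caching pay off; addresses that collide on the same line evict each other, modeling a conflict miss.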
Conclusion
In summary, code and data memory partitioning is an important microcontroller design consideration. Harvard architecture physically separates the two memory regions to allow simultaneous parallel access. Program code is stored in slower non-volatile memory like flash, while data memory uses faster volatile RAM. Optimizing usage of limited memory is critical for efficiency. Caching, prefetching, overlaying, and protection mechanisms help improve performance and robustness. Understanding how to best utilize microcontroller memory resources results in lower cost and power consumption while meeting the program’s functional requirements.