Interrupts are a critical part of embedded systems, allowing time-critical tasks to be executed with minimal latency. However, interrupt latency can differ significantly depending on the interrupt's type and configuration. This article examines the latency differences between three common types of interrupts on ARM Cortex-M chips: timer interrupts, GPIO interrupts, and interrupts handled by a Real-Time Operating System (RTOS).
Overview of Interrupts
An interrupt is a signal to the processor that some event requires immediate attention. When an interrupt occurs, the processor temporarily suspends execution of the current program, saves its state, and jumps to an Interrupt Service Routine (ISR) to handle the event. After the ISR finishes, the processor restores the state and resumes normal program execution.
On Cortex-M processors, interrupts have a programmable priority level, with 0 being the highest priority; higher-priority interrupts can preempt lower-priority ones. Interrupts also have a configurable trigger type, level or edge: a level-sensitive interrupt asserts for as long as the interrupt signal is active, while an edge-sensitive interrupt fires only on a signal transition.
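As a sketch of how those priority levels are actually encoded: Cortex-M parts implement only the upper bits of each 8-bit priority field (this example assumes 4 implemented bits, which is common on STM32 devices; on real hardware CMSIS's `NVIC_SetPriority()` performs this shift for you):

```c
#include <stdint.h>

/* Assumption: the device implements 4 priority bits, as on most STM32s.
   Priorities live in the UPPER bits of an 8-bit field, so logical
   priority 5 is written to the register as 0x50, and 0 (highest) as 0x00. */
#define NVIC_PRIO_BITS 4U

static uint8_t nvic_encode_priority(uint8_t prio)
{
    return (uint8_t)(prio << (8U - NVIC_PRIO_BITS));
}
```

This is why writing a raw value like 1 into a priority register has no effect on such parts: the low four bits are not implemented, and the value reads back as 0.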
Timer Interrupts
Most Cortex-M chips contain multiple timer peripherals. The SysTick timer is built into the core itself, and vendors add peripherals such as RTCs and general-purpose timers. These timers can be configured to generate periodic interrupts at a chosen frequency; for example, SysTick can be set up to interrupt every 1ms.
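A minimal SysTick setup for a periodic tick looks like this (the register block is mocked here so the sketch compiles standalone; on real hardware the CMSIS core header provides `SysTick` at its memory-mapped address, and the clock frequencies are assumptions you would read from your clock configuration):

```c
#include <stdint.h>

/* Mock of the ARMv7-M SysTick register block for a standalone sketch;
   real code gets this from the CMSIS core header. */
typedef struct {
    volatile uint32_t CTRL, LOAD, VAL, CALIB;
} SysTick_Type;

static SysTick_Type systick_regs;
#define SysTick (&systick_regs)

/* Configure SysTick for a periodic interrupt at tick_hz, given the core
   clock core_hz. Returns -1 if the period does not fit the 24-bit counter. */
static int systick_config(uint32_t core_hz, uint32_t tick_hz)
{
    uint32_t reload = core_hz / tick_hz - 1U;
    if (reload > 0x00FFFFFFU)      /* LOAD register is only 24 bits wide */
        return -1;
    SysTick->LOAD = reload;
    SysTick->VAL  = 0U;            /* clear the current count */
    SysTick->CTRL = (1U << 2)      /* CLKSOURCE: run from the core clock */
                  | (1U << 1)      /* TICKINT:   enable the interrupt */
                  | (1U << 0);     /* ENABLE:    start counting */
    return 0;
}
```

At 168 MHz, a 1 ms tick gives a reload value of 167999, comfortably inside the 24-bit range.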
Timer interrupts normally have low latency for several reasons:
- Timers often have high default priority level (e.g. SysTick priority is configurable but defaults to 0, the highest level)
- No input synchronizing is required
- The request is an internally generated, edge-style pulse, so there is no retriggering or interrupt overload to guard against
- ISR is simple and fast (often just incrementing a counter)
For example, on a Cortex-M4 chip, timer interrupts can have around 10-15 cycles of latency. At 168 MHz that is just 60-90 nanoseconds, with very little jitter. This makes timer interrupts ideal for time-critical tasks like PID control loops and motor commutation.
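The cycle-to-time conversion behind those numbers is simple arithmetic, sketched here as a helper you might use when budgeting latency:

```c
#include <stdint.h>

/* Convert an interrupt-latency cycle count to nanoseconds at a given core
   clock frequency. At 168 MHz, 10 cycles is about 59.5 ns and 15 cycles
   about 89.3 ns, matching the 60-90 ns figure quoted above. */
static double cycles_to_ns(uint32_t cycles, double core_hz)
{
    return (double)cycles * 1e9 / core_hz;
}
```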
GPIO Interrupts
GPIO interrupts are generated from external signals connected to the microcontroller’s general purpose I/O pins. Example sources include external sensor outputs, serial data lines, encoder inputs, etc. Edge or level sensitive modes are supported.
Compared to timer interrupts, GPIO interrupts often have higher and more variable latency due to additional synchronization steps:
- Input synchronizer – Double- or triple-samples the GPIO input to prevent metastability on asynchronous signals.
- Deglitcher – Rejects very short pulses to avoid false triggers from noise.
- Digital filter – Generates an interrupt only if the signal holds a valid logic 0/1 level for a certain number of cycles.
These extra steps mean GPIO interrupt latency is usually at least hundreds of cycles. For example, on an STM32L4 running at 80 MHz, GPIO interrupt latency can range from roughly 300 to 900 cycles (3.75us to 11.25us).
GPIO interrupt latency also varies with the priority level configured relative to other peripherals: higher-priority GPIO interrupts see lower latency. Edge-triggered mode helps keep ISR processing time short, since the handler fires once per transition rather than repeatedly while the input stays asserted.
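A typical edge-trigger configuration looks like the following (the register block is mocked after the STM32F4-style EXTI controller so the sketch compiles standalone; on real hardware the vendor device header provides `EXTI`, and you would also route the pin to the EXTI line and enable the NVIC channel):

```c
#include <stdint.h>

/* Mock of the STM32F4-style EXTI register block for a standalone sketch;
   real code gets this from the vendor CMSIS device header. */
typedef struct {
    volatile uint32_t IMR, EMR, RTSR, FTSR, SWIER, PR;
} EXTI_Type;

static EXTI_Type exti_regs;
#define EXTI (&exti_regs)

/* Configure an EXTI line for rising-edge-only interrupts. */
static void exti_enable_rising(unsigned line)
{
    EXTI->RTSR |=  (1U << line);   /* trigger on rising edges */
    EXTI->FTSR &= ~(1U << line);   /* do not trigger on falling edges */
    EXTI->IMR  |=  (1U << line);   /* unmask the interrupt request */
}
```

The ISR must then clear the pending flag (by writing 1 to the corresponding `PR` bit on this family) or the interrupt will immediately re-enter.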
In summary, GPIO interrupt latency and jitter are higher than for timer interrupts, but both can be reduced through priority selection and edge triggering.
RTOS Interrupts
A Real-Time Operating System (RTOS) manages task scheduling and execution on microcontrollers. Most RTOS kernels provide an API that lets threads pend on interrupt events much as they pend on other synchronization objects. This enables thread synchronization and offloading of interrupt handling to dedicated handler threads.
When using an RTOS, interrupt handlers typically do minimal work in the ISR itself. The interrupt simply triggers an RTOS call to unblock a waiting thread, which then executes the main handling logic outside interrupt context. This keeps ISR duration short and deterministic at the cost of some extra context-switching time.
For example, FreeRTOS provides the xSemaphoreGiveFromISR() API to give a semaphore from an ISR, unblocking the task waiting on it. The latency of this approach includes:
- Hardware interrupt latency as already discussed
- RTOS call and scheduling overhead
- Context switching time from ISR to thread
Therefore, RTOS-based interrupt handling adds significant overhead compared to bare metal ISRs. However, it provides much more flexibility in prioritizing and distributing interrupt work across threads.
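The defer-to-a-lower-priority-context idea itself can be sketched without any RTOS, using a counter the ISR bumps and a main-loop handler drains. All names here are illustrative, and a real version would guard the shared counter with a critical section or atomic operations:

```c
#include <stdint.h>

/* Event counter shared between the ISR and the deferred handler.
   volatile so the main-loop read is not optimized away. */
static volatile uint32_t events_pending;

/* In real firmware this would be registered in the vector table.
   It does the minimum possible: record that the event happened. */
void sensor_isr(void)
{
    events_pending++;
}

/* Called from the main loop (or an RTOS task). Drains all pending
   events and returns how many were handled. */
uint32_t sensor_deferred_handler(void)
{
    uint32_t handled = 0;
    while (events_pending) {
        events_pending--;   /* real code: protect against a racing ISR */
        handled++;
        /* ... heavy processing would go here, at thread priority ... */
    }
    return handled;
}
```

An RTOS replaces the polling loop with a blocking wait on a semaphore or queue, so the handler thread consumes no CPU while idle.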
Comparative Latency Numbers
Here are some example latency numbers for timer, GPIO, and RTOS interrupts on an STM32 Cortex-M4 running at 168 MHz:
Interrupt Type | Approximate Latency |
---|---|
SysTick Timer | ~15 cycles |
GPIO (low priority) | ~500 cycles |
GPIO (high priority) | ~300 cycles |
FreeRTOS Semaphore | 2000+ cycles |
So in summary:
- Timers have lowest and most consistent latency in the 10s of cycles
- GPIO is higher and priority-dependent, ranging from 100s to 1000s of cycles
- RTOS adds significant overhead but enables better code structure
Tips for Optimizing Interrupt Latency
Here are some tips for optimizing interrupt latency when building a real-time system on Cortex-M microcontrollers:
- Reserve the highest priority level for interrupts that truly need it. Interrupts sharing the top level cannot preempt one another, so each adds to the others' worst-case latency.
- Configure edge-triggered interrupts where possible to reduce ISR duration.
- Keep ISR code as lean as possible – avoid complex calculations or library calls.
- If using RTOS, keep semaphore/mailbox APIs in ISRs simple and fast.
- Spread interrupt workload across multiple threads to maximize CPU utilization.
- Use a preemptive RTOS kernel, ideally with priority inheritance on mutexes, to minimize priority inversion issues.
- Measure interrupt latencies with an oscilloscope or logic analyzer (e.g. by toggling a pin at ISR entry) to quantify and tune.
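For software-side measurement, the Cortex-M3/M4 DWT cycle counter can timestamp the trigger point against ISR entry. Because the counter wraps at 2^32, elapsed time should be computed with unsigned subtraction, which stays correct across a single wraparound:

```c
#include <stdint.h>

/* Elapsed cycles between two 32-bit cycle-counter samples (e.g. reads of
   DWT->CYCCNT). Unsigned subtraction gives the correct result even if the
   counter wrapped once between the two reads. */
static uint32_t elapsed_cycles(uint32_t start, uint32_t end)
{
    return end - start;
}
```

Comparing timestamps directly (e.g. `end > start`) would give the wrong answer near a wrap; the subtraction form does not.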
Conclusion
There is no one-size-fits-all solution for interrupt handling on Cortex-M processors. The optimal approach depends on the specific application requirements and timing constraints. In latency-critical systems, bare metal timer ISRs often provide the best response times, while in more complex applications an RTOS can enable more modular software at the cost of some determinism. By understanding the latency impacts of different options, developers can make informed design tradeoffs.
The key is benchmarking various methods on your target hardware to quantify the differences. Only by measuring interrupt latencies under realistic conditions can the right approach be selected to meet a project’s real-time performance goals.