Interrupt latency and jitter are important performance metrics to consider when using Cortex-M processors. Interrupt latency refers to the time delay between the assertion of an interrupt request and the start of the interrupt handler execution. Jitter refers to the variation in interrupt latency from one interrupt to the next. Minimizing interrupt latency and jitter is crucial for real-time and time-sensitive applications built on Cortex-M cores.
What Causes Interrupt Latency?
There are several factors that contribute to interrupt latency on Cortex-M processors:
- Interrupt detection and prioritization – The processor’s Nested Vectored Interrupt Controller (NVIC) needs to detect the interrupt request, determine its priority level, and decide if it should be pended or taken immediately.
- Pipeline flush – The processor pipeline may need to be flushed of any instructions already in flight before branching to the interrupt handler.
- Context saving – Registers must be stored to the stack before the handler can use them; the more registers saved, the longer entry takes. On Cortex-M, the hardware automatically stacks r0–r3, r12, lr, pc, and xPSR on exception entry.
- Interrupt masking – Higher priority interrupts may mask lower priority ones until they complete.
- Interrupt forwarding – Routing interrupt signals from peripherals through to the NVIC adds a small signal-path delay.
- Interrupt preemption – Higher priority interrupts preempt lower priority ones currently running.
- Handler execution – The instructions in the interrupt handler itself take time to execute.
The Cortex-M pipeline and NVIC are highly optimized to reduce most of these effects, but they still contribute to baseline interrupt latency.
Sources of Jitter
The main sources of jitter in Cortex-M interrupt handling are:
- Instruction timing variations – Most instructions have a single cycle execution time, but some take multiple cycles or stall the pipeline.
- Cache hits vs misses – Whether handler code is already cached when the interrupt arrives changes instruction fetch timing.
- Context save differences – The number of registers saved can differ from one interrupt to the next.
- Interrupt nesting – Higher priority interrupts can preempt handler execution.
- Handler code differences – Some handlers may do more work and take longer than others.
- Arbitration of simultaneous interrupts – Their relative priority determines order of handling.
Even small variations in the above factors can lead to jitter between sequential interrupts. Timing determinism is challenging in complex systems.
Measuring Interrupt Latency and Jitter
Measuring interrupt performance requires capturing timestamps around interrupt assertion and the start of handling. This can be done using:
- External logic analyzers – Monitor interrupt signals vs processor activity.
- Processor trace – Capture an instruction-level trace, including interrupt entries and exits.
- Instrumented handlers – Insert timestamps and profiling at key points.
- Timer captures – Record timer values around interrupt entry points.
- Oscilloscopes – Visualize the delay from interrupt request to a pin toggled at handler entry.
Statistics can then be applied to sets of measurements to characterize overall latency and jitter. Care must be taken to minimize measurement impact on the very timings being measured.
Optimizing Interrupt Latency and Jitter
There are a number of Cortex-M configuration options that can help optimize interrupt latency and jitter performance:
- Prioritize interrupts appropriately – Ensure highest priority for the most time sensitive ones.
- Minimize run-time checking – Disable stack overflow checking and similar protections where safety requirements allow.
- Tune flash wait states – Balance code-fetch speed against clock frequency and power consumption.
- Leverage fast GPIO – Use immediate pin interrupts to detect external events.
- Optimize context saving – Only save the bare minimum registers required.
- Craft efficient handlers – Write tight handlers in assembly and/or C to do the minimum work required.
- Use caching – Ensure consistent hits to minimize instruction fetch latency.
- Avoid nesting – Assign handlers that must not preempt each other to the same preemption priority level.
Leveraging the Cortex-M’s deterministic interrupt handling and eliminating software sources of timing variance are key to minimizing jitter. Profile your system extensively to identify and correct jitter-inducing patterns.
Example Use Cases
Here are some example applications where optimizing interrupt latency and jitter are critical:
- Motor control – Rapid field oriented control requires timely commutation based on rotor position sensor interrupts.
- Digital power – Switching converters require precision timing of PWM signals driving power switches.
- Touch sensing – Quick processing of touch inputs enables smooth and responsive gesture recognition.
- Wireless systems – Low jitter time slicing facilitates accurate RF phase tracking for high throughput.
- Servo mechanisms – Clean encoder input handling minimizes jitter in position and speed control loops.
In each of the above, consistent and rapid interrupt handling directly impacts control loop performance and overall system behavior.
Key Takeaways
- Interrupt latency and jitter impact real-time application performance on Cortex-M cores.
- Multiple architectural and software factors contribute to both baseline latency and jitter.
- Instrumentation and profiling are required to properly characterize interrupt timing.
- Configuration tuning and efficient handlers optimize latency and jitter.
- Time critical Cortex-M applications require careful interrupt handling optimization.
By understanding the root causes and measurement of interrupt latency and jitter, developers can make design and software decisions to best optimize them on Cortex-M implementations.