
Implementing Floating Point Math on Cortex-M3

Eileen David
Last updated: October 5, 2023

The Cortex-M3 processor provides hardware support for single precision floating point math operations. This lets developers leverage the benefits of floating point math in applications running on Cortex-M3 based microcontrollers. Key benefits include a wide dynamic range, far fewer overflow and scaling concerns than fixed-point arithmetic, and simpler code. This article provides an overview of the floating point unit (FPU), the instructions for performing floating point operations, and guidance on implementing floating point math routines efficiently.

Contents
  • Floating Point Unit in Cortex-M3 (Registers, Data Types, Instructions, Programming Model)
  • Performing Basic Floating Point Operations (Addition and Subtraction, Multiplication and Division, Comparisons and Conditional Execution, Type Conversions, Transcendental Functions)
  • Microarchitectural Considerations (Pipelining, Bypassing, Latencies, Interleaving Integer and Floating Point, Memory Access)
  • Floating Point Code Optimization Techniques (Loop Unrolling, Instruction Scheduling, Reduce Memory Traffic, Exploit Parallelism, Function Inlining, Leverage Hardware Acceleration)
  • Common Issues and Solutions (Floating Point Exceptions, Loss of Precision, Numerical Stability, Code Size Bloat, Fixed Point Encoding, Performance Validation)
  • Design Examples (Digital Signal Processing, Control Systems, Sensor Fusion in IoT, Computer Vision, Machine Learning)
  • Conclusion

Floating Point Unit in Cortex-M3

The floating point unit (FPU) in Cortex-M3 processors is an optional extension to the core that provides hardware acceleration for single precision floating point operations. Single precision complies with the IEEE 754 standard: a 32-bit value made up of 1 sign bit, 8 exponent bits, and 23 fraction (mantissa) bits. This gives a dynamic range of approximately ±1.18 x 10^−38 to ±3.4 x 10^38. The FPU supports the basic arithmetic operations of addition, subtraction, multiplication, division, and square root, along with comparisons and conversions between integer and floating point values. Common transcendental functions like exponential, logarithm, and trigonometry are provided by math libraries built on top of these primitives. The FPU is pipelined and can accept a new instruction every cycle, giving high throughput.
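
As a quick illustration of that layout, the minimal sketch below (plain C, using the standard 32-bit representation) pulls the sign, exponent, and fraction fields out of a float:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Decompose an IEEE 754 single precision value into its three fields. */
static void dump_float(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);          /* well-defined, unlike pointer casts */

    uint32_t sign     = bits >> 31;          /* 1 bit                  */
    uint32_t exponent = (bits >> 23) & 0xFF; /* 8 bits, biased by 127  */
    uint32_t fraction = bits & 0x7FFFFF;     /* 23 bits                */

    printf("sign=%lu exp=%lu frac=0x%06lX\n",
           (unsigned long)sign, (unsigned long)exponent, (unsigned long)fraction);
}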

Registers

The FPU adds 32 single precision floating point registers labeled S0 to S31, distinct from the core integer registers R0 to R15. The floating point status and control register (FPSCR) records exception conditions such as overflow, underflow, and divide by zero after floating point instructions, and also holds the rounding mode configuration.

Data Types

The FPU operates on the 32-bit single precision floating point type, exposed in C/C++ as float. Conversions between int and float are performed directly in hardware. Fractional values that would otherwise require manually scaled fixed-point integers can simply be represented as float.
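
For example, a raw integer sensor reading can be converted once to engineering units in float instead of carrying a manual scale factor through integer math (the 3.3 V, 12-bit ADC figures here are purely illustrative):

int raw = 2048;                              /* hypothetical 12-bit ADC reading, 0..4095 */
float volts = (float)raw * (3.3f / 4096.0f); /* VCVT to float, then VMUL by a constant   */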

Instructions

The Thumb-2 instruction set used by Cortex-M3 is extended with floating point instructions for arithmetic, comparison, conversion, and load/store. These carry a ‘V’ prefix and operate on the floating point registers S0-S31. For example, VADD performs single precision floating point addition, VCMP compares two values, and VCVT converts between float and int. Loads and stores use VLDR and VSTR.
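
A simple routine makes the mapping concrete: the function below typically compiles down to little more than two VLDR loads, one VADD, and one VSTR (exact output varies by compiler and options):

/* Roughly: VLDR s0,[r0]; VLDR s1,[r1]; VADD.F32 s0,s0,s1; VSTR s0,[r2]; BX lr */
void add_one(const float *a, const float *b, float *out)
{
    *out = *a + *b;
}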

Programming Model

The FPU provides a dedicated register file and execution pipeline that work in parallel with the integer core pipeline, so floating point instructions can execute simultaneously with integer instructions for a significant performance boost when both are in use. The compiler handles register allocation and instruction scheduling transparently. From a C/C++ programmer’s perspective, enabling the FPU simply allows the use of float variables and arithmetic and functions on them.
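
With GCC, for instance, hardware floating point is enabled with target flags along these lines (shown for a part with the single precision FPv4 FPU; exact values depend on the device and toolchain, and dsp.c is just a placeholder file name):

arm-none-eabi-gcc -mthumb -mcpu=cortex-m4 -mfpu=fpv4-sp-d16 -mfloat-abi=hard -O2 -c dsp.c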

Performing Basic Floating Point Operations

Here are some examples of basic single precision floating point operations in C code, with comments showing the FPU instructions the compiler typically generates:

Addition and Subtraction

float a = 1.5f;
float b = 2.75f;

float sum  = a + b;   // VADD.F32
float diff = a - b;   // VSUB.F32

Multiplication and Division

float x = 3.14f;
float y = 1.41f;

float prod = x * y;   // VMUL.F32
float quot = x / y;   // VDIV.F32

Comparisons and Conditional Execution

float num1 = 5.0f;
float num2 = 3.0f;

if (num1 > num2) {   // VCMP.F32, then VMRS APSR_nzcv, FPSCR, then a conditional branch
    // num1 is greater
}
if (num1 < num2) {
    // num2 is greater
}

Type Conversions

int i = 10;
float f = 3.142f;

int j   = (int)f;     // VCVT.S32.F32
float g = (float)i;   // VCVT.F32.S32

Transcendental Functions

The FPU has no dedicated transcendental instructions, so exp(), sin(), and similar functions are computed by the math library on top of the basic FPU operations. Use the single precision variants to avoid a costly promotion to double:

#include <math.h>

float x = 1.0f;
float exp_x = expf(x);   // library routine built on FPU arithmetic
float sin_x = sinf(x);   // likewise; there is no VEXP or VSIN instruction

Microarchitectural Considerations

To take full advantage of the floating point capabilities in Cortex-M3, it is useful to be aware of some microarchitectural implementation details of the FPU:

Pipelining

The FPU uses multiple execution units (adders, multipliers, dividers) that overlap the processing of successive instructions. For example, while a multiply is being calculated in the multiplier unit, an add can start flowing through the adder pipeline. This instruction-level parallelism enables high throughput, but the pipeline must be fed continuously with instructions to realize it.
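
Independent operations are what keep the pipeline busy. In the sketch below the two products have no data dependence, so the second multiply can issue while the first is still in flight:

float scale2(float a, float b, float k)
{
    float pa = a * k;   /* first VMUL issues                        */
    float pb = b * k;   /* independent VMUL can issue the next cycle */
    return pa + pb;     /* VADD waits only until both results exist  */
}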

Bypassing

Results from the FPU pipeline are forwarded to subsequent dependent instructions without waiting for the writeback stage. This prevents pipeline stalls and further increases performance. Chaining dependent instructions allows them to execute back-to-back in successive cycles utilizing bypassing.

Latencies

Certain floating point instructions have multi-cycle latencies; division and square root in particular take on the order of ten or more cycles to produce a result. The FPU can still accept new instructions into the pipeline each cycle, but consumers of a high-latency result must wait for it. Compiler optimizations schedule instructions with these latencies in mind.
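
One practical consequence: start a long-latency operation as early as possible so that independent work can hide it. A minimal sketch:

float offset_and_normalize(float x, float len, float off)
{
    float inv = 1.0f / len;   /* long-latency VDIV starts here          */
    float y   = x + off;      /* independent VADD overlaps the divide   */
    return y * inv;           /* VMUL waits for the divide result       */
}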

Interleaving Integer and Floating Point

The Cortex-M3 core and FPU pipelines work independently, so interleaving integer instructions with floating point instructions achieves better performance than executing long sequences of floating point alone. Both the compiler and the programmer should strive to mix the two.

Memory Access

Use of the FPU register file eliminates unnecessary spilling of floating point values to memory. But loads and stores are still required at times. Consecutive memory accesses can cause pipeline stalls, so these should be separated when possible. Preloading values into registers helps prevent stalls.
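
For example, loading a loop-invariant coefficient into a local variable before a loop keeps it in an S register instead of re-reading memory on every iteration:

void scale_buffer(float *buf, int n, const float *coef)
{
    float k = coef[0];        /* loaded once into an S register    */
    for (int i = 0; i < n; i++)
        buf[i] *= k;          /* no reload of k inside the loop    */
}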

Floating Point Code Optimization Techniques

Here are some techniques that can be applied to optimize code using floating point on Cortex-M3:

Loop Unrolling

Partially or fully unrolling loops reduces branches and exposes more instruction-level parallelism. This allows floating point code to utilize the pipelines better. But balance is needed to avoid over-unrolling.
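
A dot product unrolled by four, for instance, keeps several independent multiply-accumulate chains in flight (a sketch assuming n is a multiple of 4):

float dot4(const float *a, const float *b, int n)
{
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    for (int i = 0; i < n; i += 4) {    /* assumes n % 4 == 0 */
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];      /* four independent accumulators */
    }
    return (s0 + s1) + (s2 + s3);
}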

Instruction Scheduling

Reordering instructions to avoid pipeline stalls and leverage forwarding/bypassing improves performance. Putting multi-cycle latency floating point ops early allows subsequent ops to overlap.

Reduce Memory Traffic

Memory accesses take multiple cycles, during which the FPU pipeline can stall. Minimize loads and stores by reusing data already in registers where possible, and prefetch data before it is needed.

Exploit Parallelism

Look for opportunities to execute independent integer operations on the core pipeline concurrently with floating point ops on the FPU. This scales the total capabilities.

Function Inlining

Inlining small frequently used functions reduces call overhead. It exposes more scope for instruction scheduling and optimization to the compiler.
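A small helper like a linear interpolation is a typical inlining candidate:

static inline float lerp(float a, float b, float t)
{
    return a + t * (b - a);   /* inlined at call sites: no branch-and-link overhead */
}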

Leverage Hardware Acceleration

Use hardware acceleration where it exists: sqrtf() can map directly to the VSQRT instruction, and keeping computation in single precision (expf, sinf, and friends rather than their double precision counterparts) avoids slow software emulation. This reduces both code size and execution time.
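
For example, with GCC and -fno-math-errno the sqrtf() call below can be lowered to a single VSQRT instruction (a hedged sketch; code generation varies by toolchain and options):

#include <math.h>

float magnitude(float x, float y)
{
    return sqrtf(x * x + y * y);   /* may compile to VMUL, VMLA, VSQRT */
}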

Common Issues and Solutions

Here are some common issues faced when implementing floating point code on Cortex-M3 and potential solutions:

Floating Point Exceptions

Illegal operations cause floating point exception conditions like divide by zero, overflow, etc. On Cortex-M FPUs these set cumulative flags in the FPSCR rather than trapping by default, so either check and clear the flags around sensitive code blocks or guard against the conditions before they occur.
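
A common pattern is to clear the flags, run the sensitive code, then test them. The sketch below uses the CMSIS __get_FPSCR()/__set_FPSCR() intrinsics; the bit positions are the architectural cumulative flags, but double-check them against your device headers:

#include "cmsis_compiler.h"   /* or whatever device header pulls in CMSIS-Core */

#define FPSCR_DZC (1UL << 1)  /* cumulative divide-by-zero flag */
#define FPSCR_OFC (1UL << 2)  /* cumulative overflow flag       */

float checked_div(float num, float den, int *err)
{
    __set_FPSCR(__get_FPSCR() & ~(FPSCR_DZC | FPSCR_OFC));  /* clear the flags   */
    float q = num / den;
    *err = (__get_FPSCR() & (FPSCR_DZC | FPSCR_OFC)) != 0;  /* any flag raised?  */
    return q;
}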

Loss of Precision

Accumulating small floating point values can lose precision due to rounding and cancellations. Use higher precision accumulators or alternative algorithms to minimize loss of precision.
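
Kahan (compensated) summation is a standard remedy when summing many small values in single precision; note that it must not be built with aggressive reassociation options like -ffast-math, which can optimize the compensation away:

float kahan_sum(const float *x, int n)
{
    float sum = 0.0f, c = 0.0f;   /* c accumulates the lost low-order bits */
    for (int i = 0; i < n; i++) {
        float y = x[i] - c;       /* compensate for the previous error */
        float t = sum + y;        /* low bits of y may be lost here    */
        c = (t - sum) - y;        /* recover exactly what was lost     */
        sum = t;
    }
    return sum;
}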

Numerical Stability

Subtle differences in floating point math can cause inconsistencies across platforms. Use stable algorithms and profile on target hardware to catch stability issues.

Code Size Bloat

Floating point instructions take more code space than integer equivalents. Use compiler options like -ffunction-sections to generate smaller code whenever floating point is used.

Fixed Point Encoding

Sometimes fixed point provides sufficient dynamic range at lower overhead. Profile the application to determine whether a fixed-point or floating point representation is preferable for each variable.
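
As a point of comparison, a Q15 fixed-point multiply needs only integer operations, at the cost of manual scaling and a much narrower range (a minimal sketch):

#include <stdint.h>

typedef int16_t q15_t;   /* 1 sign bit, 15 fractional bits: range [-1, 1) */

static inline q15_t q15_mul(q15_t a, q15_t b)
{
    return (q15_t)(((int32_t)a * b) >> 15);   /* widen, multiply, rescale */
}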

Performance Validation

Measure and validate real-time performance with profiling. Confirm floating point optimizations meet deadlines and do not cause timing violations.
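
On Cortex-M3/M4 class devices the DWT cycle counter is a simple way to measure a routine. The sketch below uses the standard CMSIS register names; availability depends on the device and its debug configuration:

/* Assumes a CMSIS device header has been included (it defines DWT and CoreDebug). */
static inline void cycle_counter_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the DWT block */
    DWT->CYCCNT = 0;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start counting       */
}

/* Usage:
 *   uint32_t t0 = DWT->CYCCNT;
 *   run_filter();
 *   uint32_t cycles = DWT->CYCCNT - t0;
 */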

Design Examples

Here are some examples of how floating point capabilities can be used to improve design implementations on Cortex-M3:

Digital Signal Processing

DSP algorithms such as FIR filters, FFTs, and matrix operations consist of many repeated multiply-accumulate steps that benefit greatly from floating point, and the wide dynamic range avoids the scaling issues of fixed-point implementations.
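
A direct-form FIR filter in float shows the pattern: no scaling or saturation logic is needed, just multiply-accumulate (a minimal sketch):

/* y[n] = sum over k of h[k] * x[n - k], for one output sample.
 * x points at the newest sample, with older samples at increasing indices. */
float fir_step(const float *x, const float *h, int taps)
{
    float acc = 0.0f;
    for (int k = 0; k < taps; k++)
        acc += h[k] * x[k];     /* VMLA-style multiply-accumulate */
    return acc;
}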

Control Systems

Floating point works very well for control system computations. Controller state variables and gains can be represented accurately over a wide range without overflow or scaling.
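
A PID controller is a typical case: gains, state, and setpoints span widely different magnitudes that float represents without manual scaling. A sketch, with tuning and anti-windup handling left as application-specific:

typedef struct {
    float kp, ki, kd;     /* controller gains        */
    float integral;       /* accumulated error       */
    float prev_err;       /* for the derivative term */
} pid_state_t;

float pid_step(pid_state_t *p, float setpoint, float measured, float dt)
{
    float err = setpoint - measured;
    p->integral += err * dt;
    float deriv = (err - p->prev_err) / dt;
    p->prev_err = err;
    return p->kp * err + p->ki * p->integral + p->kd * deriv;
}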

Sensor Fusion in IoT

Combining data from multiple sensors to derive higher level information uses floating point math extensively. The high dynamic range handles fusion computations well.
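
A complementary filter fusing gyro and accelerometer estimates of pitch is a small but representative example (the 0.98 blend factor is illustrative, and sign conventions depend on sensor mounting):

#include <math.h>

/* Blend the integrated gyro rate with the accelerometer's absolute angle. */
float fuse_pitch(float pitch, float gyro_rate, float ax, float az, float dt)
{
    float accel_pitch = atan2f(ax, az);              /* absolute, but noisy */
    float gyro_pitch  = pitch + gyro_rate * dt;      /* smooth, but drifts  */
    return 0.98f * gyro_pitch + 0.02f * accel_pitch; /* complementary blend */
}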

Computer Vision

Vision workloads involve large amounts of matrix math for filtering, edge detection, transformations, and the like. Hardware floating point accelerates these operations.
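
A 3x3 convolution kernel applied at a single pixel illustrates the inner loop these operations share (a sketch; border handling is omitted):

/* Apply a 3x3 kernel at pixel (x, y) of a w-pixel-wide float image. */
float conv3x3(const float *img, int w, int x, int y, const float k[3][3])
{
    float acc = 0.0f;
    for (int j = -1; j <= 1; j++)
        for (int i = -1; i <= 1; i++)
            acc += k[j + 1][i + 1] * img[(y + j) * w + (x + i)];
    return acc;
}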

Machine Learning

Many embedded ML applications use low precision integer math, but some benefit from floating point. Linear algebra operations involved in neural networks can leverage floating point.

Conclusion

The Cortex-M3 floating point unit enables high performance single precision floating point math needed in many embedded applications like DSP, control systems, and computer vision. Proper use of the 32-bit floating point instructions and efficient programming techniques allows developers to speed up execution of algorithms. Floating point parallelism with integer operations results in significant gains for suitable workloads. Overall, leveraging floating point in Cortex-M3 opens up headroom to implement more complex math-intensive software in resource constrained embedded devices.
