When designing a microcontroller system that requires floating point math, engineers must choose between implementing floating point operations in software (soft float) or in dedicated hardware (a floating point unit, or FPU). This article examines the tradeoffs between the two approaches to help guide the decision.

## Soft Float

With soft float, floating point operations are implemented in software routines rather than dedicated hardware. The key advantage of this approach is flexibility – since the routines are software, they can be easily modified and optimized. Soft float also saves silicon area and cost compared to hardware solutions. However, soft float has some significant disadvantages:

- Performance – software floats are much slower than hardware, often 10-100x slower for complex operations like divide and square root.
- Code size – the routines can consume a large amount of program memory.
- Power consumption – soft float burns more energy per operation than hardware due to the additional CPU cycles needed.
- Accuracy – software routines may cut corners on rounding or special-value handling compared to IEEE 754 hardware.

Therefore, for applications like signal processing that require both high performance and precision, soft float is usually inadequate. However, for low power IoT sensors and other microcontroller applications that do not need high speed math, soft float provides a compact and flexible solution.

## Hardware Floating Point Unit

In contrast to soft float, a hardware floating point unit (FPU) implements operations directly in silicon according to the IEEE 754 standard. The key benefits of an FPU include:

- Performance – hardware floats are extremely fast; simple operations like add and multiply often complete in just a few clock cycles.
- Accuracy – FPUs provide correct rounding and full IEEE 754 precision.
- Power efficiency – hardware units minimize energy per operation by completing it in few cycles.
- Code size – hardware floats avoid pulling in large software libraries.

By implementing floats in dedicated hardware, FPUs overcome most of the drawbacks of soft float. The major downside is increased silicon area and cost: adding an FPU can grow the die size of a small microcontroller by roughly 10-15%.

## Tradeoffs and Considerations

So when selecting between soft float and hardware FPU, there are a few key tradeoffs and considerations:

- **Performance needs** – for high speed math, hardware is required; software can suffice for slower applications.
- **Power budget** – hardware floats minimize power consumption by completing operations quickly.
- **Precision requirements** – FPUs provide IEEE 754 compliance, while software floats may have errors.
- **Code space** – soft float requires extra program memory for its routines.
- **Silicon area and cost** – hardware has the greater die impact.

Here are some guidelines on when to use each approach:

- Use **soft float** when:
  - Floating point usage is light (infrequent operations)
  - Absolute performance is not critical
  - Precision requirements are loose
  - Float operations are rare enough that their extra CPU cycles don't dominate the power budget
  - Die size must be minimized

- Use a **hardware FPU** when:
  - Intensive floating point computations are required
  - IEEE 754 compliance is needed
  - Performance is critical
  - Code space is very limited

## Soft Float Architectures

There are a few common architectures used when implementing soft floats:

- **Fixed point** – Uses integer operations to represent fractional values; easiest to implement in software but has limited precision and range.
- **Block floating point** – Operates on blocks of values using a shared exponent, minimizing updates to the exponent. Fast but lower precision.
- **Floating point emulation** – Mimics the behavior of a hardware FPU in software. Slowest option but most accurate.

The choice depends on the performance and precision needs. Fixed point is good for simple low power devices. Emulation provides the best compliance at the cost of speed. Block floating point offers a middle ground.

### Fixed Point

Fixed point representations store each value as a plain integer with a fixed, implied scaling factor. For example, a 16-bit Q7.8 layout packs the bits as `SIIIIIII FFFFFFFF`, where S is the sign bit, I the integer bits, and F the fraction bits.

With fixed point, the binary point position is fixed at design time, hence the name. Add, subtract, multiply, and divide all operate on the underlying integer, with shifts to correct the scaling after multiply and divide. The key advantages are:

- Simple to implement in software using integer instructions.
- No special data types needed.
- Uses minimal memory.
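As a sketch of the shift-and-widen arithmetic, here is an illustrative Q16.16 type (16 integer bits, 16 fraction bits); the type and helper names are not from any particular library:

```c
#include <stdint.h>

// Q16.16 fixed point: value = integer / 2^16. Names are illustrative.
typedef int32_t q16_16_t;
#define Q16_ONE (1 << 16)

static inline q16_16_t q16_from_double(double d) { return (q16_16_t)(d * Q16_ONE); }
static inline double   q16_to_double(q16_16_t q) { return (double)q / Q16_ONE; }

// Add/subtract are plain integer operations; the scale factors already match.
static inline q16_16_t q16_add(q16_16_t a, q16_16_t b) { return a + b; }

// Multiply doubles the scale factor, so shift right by 16 to correct it.
// Widening to 64 bits avoids overflow of the intermediate product.
static inline q16_16_t q16_mul(q16_16_t a, q16_16_t b) {
    return (q16_16_t)(((int64_t)a * (int64_t)b) >> 16);
}
```

Everything compiles down to ordinary integer instructions, which is exactly why fixed point is attractive on cores without an FPU.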

However, fixed point also has significant limitations:

- Limited range and precision set by the fixed integer/fraction split.
- Manual scaling required to prevent overflow/underflow.
- No direct IEEE 754 equivalent, so conversion is needed at interfaces.
- Truncation and round-off error accumulate across operations.

Overall, fixed point arithmetic works well for basic applications like low power sensors but lacks the dynamic range and precision needed for more advanced math.

### Block Floating Point

Block floating point aims to minimize the performance cost of per-value exponent handling. In this method, a block or set of floating point values shares the same exponent term. So for a block size of N, the stored layout is `Exponent | Mantissa 1 | Mantissa 2 | … | Mantissa N`.

All operations on the mantissas can ignore the shared exponent term until the block completes. Then the exponent is updated and re-aligned. This significantly reduces expensive exponent updates. Advantages of block floating point include:

- Faster performance by batching exponent updates.
- Hardware-equivalent format allows conversion to/from IEEE 754.

Disadvantages include:

- Reduced precision due to sharing exponent across values.
- Block size limits range and precision.
- Still slower than dedicated hardware.

Block floating point works very well for digital signal processing applications where vector math is common. It provides a good balance of speed and precision in software.

### Floating Point Emulation

Floating point emulation seeks to exactly replicate the behavior of a hardware FPU using software routines. This allows the FPU architecture to be modeled as closely as possible. Typical steps include:

- Unpack IEEE 754 floating point values into sign, exponent, mantissa components.
- Perform operation on components using integer math.
- Normalize and round result correctly.
- Pack result back into IEEE 754 format.
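The unpack step above can be sketched in portable C; the struct and function names are illustrative, not from a specific emulation library:

```c
#include <stdint.h>
#include <string.h>

// Fields of an IEEE 754 single: 1 sign, 8 exponent, 23 mantissa bits.
typedef struct { uint32_t sign, exponent, mantissa; } sp_fields_t;

// First step of an emulation routine: pull the bit fields out of the float.
static sp_fields_t sp_unpack(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);      // well-defined type pun in C
    sp_fields_t r;
    r.sign     = bits >> 31;
    r.exponent = (bits >> 23) & 0xFFu;   // biased by 127
    r.mantissa = bits & 0x7FFFFFu;       // implicit leading 1 is not stored
    return r;
}
```

The remaining steps (integer arithmetic on the fields, normalization, rounding, repacking) operate on these three components.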

Advantages of floating point emulation:

- Precisely matches hardware behavior.
- Provides IEEE 754 compliance.

Disadvantages:

- Very computationally intensive, 10-100x slower than hardware.
- Harder to optimize than custom soft float routines.

Emulation provides the best floating point accuracy but sacrifices performance. It is mainly used when IEEE 754 compliance is mandatory but hardware is unavailable.

## Hardware Floating Point Units

In contrast to soft float techniques, hardware floating point units provide optimized silicon implementations of float operations. Some key architecture aspects include:

- Adheres to IEEE 754 representation and rounding.
- Dedicated op units for add, subtract, multiply, divide, square root.
- Pipelining and parallel execution units.
- Special number representations such as denormalized values.
- Configurable precision levels (single, double, half).
- Handling of exceptions like underflow, overflow.

FPUs implement floating point operations in hardware specialized for maximum performance and efficiency. Unlike software, the floating point logic is designed directly into the microarchitecture of the CPU or GPU.

### Pipelining

One key technique used in FPUs is **pipelining**. This allows multiple operations to be working their way through the hardware simultaneously, like an assembly line. For example, while one operation is executing its multiply stage, another can be executing the add stage. This improves throughput.

Pipelining reduces the effective cycle time of operations by overlapping their execution. Additional speedups come from **superscalar** architectures with multiple parallel pipelines handling operations concurrently.

### Precision Configurability

FPUs support various floating point precisions like single (32-bit) and double (64-bit) as defined by IEEE 754. Many FPUs can be dynamically configured for different levels of precision:

- **Single precision** – 32-bit floats with 8 exponent, 23 mantissa bits; 6-7 decimal digits of precision.
- **Double precision** – 64-bit floats with 11 exponent, 52 mantissa bits; 15-16 decimal digits of precision.
- **Half precision** – 16-bit floats with 5 exponent, 10 mantissa bits; 3-4 decimal digits of precision.

Lower precision modes save power and improve throughput for applications that do not require high dynamic range. Configurability allows a single hardware unit to efficiently support different precision needs.
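The significand widths are easy to observe directly. A single precision float carries 24 significand bits (23 stored plus the implicit 1), so integers above 2^24 can no longer be represented exactly, while a double (53 significand bits) still can:

```c
#include <float.h>

// 2^24 + 1 = 16777217 does not fit in a float's 24-bit significand,
// so it rounds to the nearest representable value, 16777216.
static const float  f = 16777217.0f;  // stored as 16777216.0f
static const double d = 16777217.0;   // exact: double has 53 significand bits

// Machine epsilon shrinks as the significand widens:
// FLT_EPSILON = 2^-23, DBL_EPSILON = 2^-52.
```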

### Special Number Handling

FPUs implement full support for special numbers defined in IEEE 754 like:

- **Denormals** – Small non-zero numbers below the normal range, near the underflow threshold.
- **Zeros** – Positive and negative zero values.
- **Infinities** – Results exceeding the representable range.
- **NaN** – "Not a number" results from invalid operations.

This handling prevents errors and ensures adherence to the IEEE 754 specification, unlike many soft float routines.
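These special values can be exercised from standard C using the `<math.h>` classification macros (a sketch; note that builds with aggressive fast-math options may break the NaN self-comparison rule):

```c
#include <math.h>
#include <float.h>

// NaN is the only value that compares unequal to itself.
static int is_self_unequal(double x) { return x != x; }

// Halving the smallest normal double lands in the subnormal (denormal) range.
static double make_subnormal(void) { return DBL_MIN / 2.0; }
```

On a compliant FPU these behave exactly per IEEE 754; minimal soft float libraries sometimes flush subnormals to zero instead.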

### Exceptions and Flags

FPUs also detect exception cases like underflow, overflow, divide by zero, invalid operation, etc. These conditions set status flags that can trigger interrupts to notify software. Typical flags include:

- **Invalid operation** – No defined result, e.g. 0/0, infinity minus infinity, or a signaling NaN input.
- **Division by zero** – Divide or remainder with a zero divisor and finite dividend.
- **Overflow** – Result exceeds the format's range.
- **Underflow** – Tiny non-zero result loses precision to rounding.
- **Inexact** – Rounding changed the result.

Proper exception handling is important for writing robust floating point code that handles edge cases correctly.

## Floating Point Tradeoffs

In summary, here are some key tradeoffs between soft float and hardware floating point implementations:

| | Soft Float | Hardware FPU |
|---|---|---|
| Performance | 10-100x slower | Very fast, a few cycles per op |
| Precision | Varies, often lower | Full IEEE 754 support |
| Power | Higher CPU usage | Lower, fast operation |
| Area | Small impact on silicon | ~10-15% die increase |
| Flexibility | Software is configurable | Limited flexibility |

Hardware floating point is preferred when performance, precision, and power efficiency matter. Soft float makes sense when die area and flexibility are critical.

## ARM Cortex-M Floating Point Options

Looking specifically at ARM Cortex-M cores, there are a few options for adding floating point support:

- **Software library** – Bare Cortex-M0/M0+ parts have no hardware float support, so pure software routines are needed.
- **FPU coprocessor** – Cortex-M4 offers an optional single precision FPU; Cortex-M7 offers single or double precision.
- **DSP instructions** – Cortex-M4/M7 DSP extensions accelerate packed 8/16-bit fixed point math, a common stand-in for low precision floats.
- **Helium vector extension** – Helium (MVE) on newer cores such as Cortex-M55/M85 provides 128-bit vectors with up to 8 half precision lanes.

Software routines provide basic support on Cortex-M0/M0+ but sacrifice performance. The FPU coprocessor gives full hardware acceleration, while the DSP extensions accelerate fixed point math and Helium adds efficient low precision vector floats.

### Software Libraries

Without any hardware floating point, pure software routines are needed on Cortex-M0/M0+ processors. This typically uses fixed point or emulation methods. Common libraries include:

- **Newlib-nano** – Lightweight C standard library that implements float operations in software.
- **ARM CMSIS** – DSP and common math functions for Cortex-M cores.
- **Musl libc** – Implements floats and math functions in software.

These libraries cover the float operations at a cost of roughly 10x (or more) the cycles of hardware, and their routines consume program memory, but they run on any core with zero silicon cost.
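With the GNU Arm toolchain, the float strategy is selected per build through compiler flags; a sketch (the source file name is illustrative):

```shell
# Cortex-M0+: no FPU; the compiler emits calls to soft float routines
arm-none-eabi-gcc -mcpu=cortex-m0plus -mfloat-abi=soft -c dsp.c

# Cortex-M4F: single precision FPU; float arguments pass in FPU registers
arm-none-eabi-gcc -mcpu=cortex-m4 -mfpu=fpv4-sp-d16 -mfloat-abi=hard -c dsp.c

# softfp: use the FPU internally but keep the soft float calling convention,
# for link compatibility with soft float libraries
arm-none-eabi-gcc -mcpu=cortex-m4 -mfpu=fpv4-sp-d16 -mfloat-abi=softfp -c dsp.c
```

Mixing `-mfloat-abi=hard` and `soft` objects in one link fails, so the choice is usually made project-wide.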

### FPU Coprocessor

The Cortex-M4 and M7 can optionally integrate a floating point coprocessor for hardware acceleration (single precision on the M4; single or double precision on the M7). This FPU includes:

- IEEE 754 compliant arithmetic and rounding.
- Pipelined multiply, add, multiply-accumulate, divide, and square root units.
- The Armv7-M floating point instruction set extensions (FPv4/FPv5).
- Far lower energy per operation than emulation libraries.

With the FPU, floating point intensive applications can experience up to 10-20x better performance over software libraries. It provides hardware speed without sacrificing compliance or precision.
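One practical detail: the Cortex-M FPU is disabled at reset, and software must grant access to coprocessors CP10/CP11 through the CPACR register before executing any FPU instruction. A sketch (the MMIO write only means something on the target, not on a host machine):

```c
#include <stdint.h>

// Coprocessor Access Control Register on Armv7-M cores.
#define SCB_CPACR_ADDR        0xE000ED88u
// Bits 20-23 grant full access to CP10 and CP11, which gate the FPU.
#define CPACR_FPU_FULL_ACCESS (0xFu << 20)

static inline void fpu_enable(void) {
    volatile uint32_t *cpacr = (volatile uint32_t *)(uintptr_t)SCB_CPACR_ADDR;
    *cpacr |= CPACR_FPU_FULL_ACCESS;
    // On real hardware, follow the write with DSB and ISB barriers
    // (e.g. CMSIS __DSB(); __ISB();) before the first FPU instruction.
}
```

Vendor startup code or CMSIS `SystemInit` usually performs this write, but it is worth knowing when a hard fault appears on the first float operation.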

### DSP/Helium Instructions

For particularly high throughput, the Cortex-M4/M7 DSP extensions accelerate packed 8/16-bit fixed point vector math, and the Helium (MVE) extension on newer cores such as the Cortex-M55/M85 adds true vector floating point. Performance benefits include:

- 128-bit Q registers holding 8 half precision or 4 single precision lanes (Helium).
- Pipelined multiply-accumulate (MAC) instructions.
- Multiple MACs per cycle when the vector pipeline is kept full.

Vectorized low precision support maximizes throughput for DSP algorithms. When ultimate float performance is required on a microcontroller, the DSP and Helium extensions provide an efficient hardware accelerated path.

## Conclusion

Floating point design involves key tradeoffs between performance, precision, power, area, and flexibility. For embedded microcontrollers like Cortex-M, choosing between software libraries, the FPU coprocessor, and DSP/Helium instructions requires balancing these factors for the target application. There is no one-size-fits-all best approach. Hardware acceleration provides the best speed and power efficiency, software offers more flexibility, and DSP/Helium instructions excel at high throughput vector math. Engineers must evaluate the floating point requirements and constraints to determine the right fit.