Python Float Type
last modified April 1, 2025
Python's float type represents floating-point numbers following the
IEEE 754 standard. This tutorial covers float representation, usage patterns,
precision limitations, and alternatives for exact calculations.
We'll examine the binary representation of floats, explain why many decimal values cannot be represented exactly, and demonstrate common pitfalls with practical examples. Understanding these concepts is crucial for financial and scientific computing.
Float Basics
Python floats are 64-bit (double-precision) floating-point numbers.
# Float literals
a = 3.14
b = 1.23e-4 # Scientific notation
c = float("inf") # Infinity
d = float("nan") # Not a Number
# Type checking
import math
print(type(a)) # <class 'float'>
print(math.isinf(c)) # True
print(math.isnan(d)) # True
# Special values
print(float('inf') * 2) # inf
print(float('nan') + 5) # nan
Python floats follow IEEE 754 double-precision standard, using 64 bits: 1 sign bit, 11 exponent bits, and 52 fraction bits. This gives approximately 15-17 significant decimal digits of precision.
Special values like infinity (inf) and Not a Number (nan) are handled according to IEEE rules. The math module provides functions for safe float operations and checks.
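One IEEE rule worth remembering is that nan compares unequal to everything, including itself, so ordinary equality tests cannot detect it. A minimal sketch of the reliable checks:
import math

nan = float('nan')
print(nan == nan)                   # False - nan never equals anything
print(math.isnan(nan))              # True - the reliable test
print(math.isfinite(1.5))           # True - neither inf nor nan
print(math.isfinite(float('inf')))  # False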
Float Representation
Floats are stored in binary scientific notation.
import struct
def float_to_bits(f):
    packed = struct.pack('!d', f)
    return ''.join(f'{byte:08b}' for byte in packed)
# Show binary representation
print(float_to_bits(0.1)) # 0011111110111001100110011001100110011001100110011001100110011010
print(float_to_bits(0.5)) # 0011111111100000000000000000000000000000000000000000000000000000
The binary representation shows how floats are stored internally. The 64 bits are divided into:
- 1 sign bit (0=positive, 1=negative)
- 11 exponent bits (biased by 1023)
- 52 fraction bits (with implicit leading 1)
For example, 0.5 is stored as exponent -1 (1022 in biased form) and fraction 0, resulting in 1.0 × 2⁻¹ = 0.5.
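To make the layout concrete, here is a small sketch that splits the bit string into its three fields; the helper name float_fields is ours, used only for illustration:
import struct

def float_fields(f):
    bits = ''.join(f'{byte:08b}' for byte in struct.pack('!d', f))
    # 1 sign bit, 11 exponent bits, 52 fraction bits
    return bits[0], int(bits[1:12], 2) - 1023, bits[12:]

sign, exponent, fraction = float_fields(0.5)
print(sign)      # 0 (positive)
print(exponent)  # -1 (stored as 1022, minus the 1023 bias)
print(fraction)  # 52 zeros: with the implicit leading 1, 1.0 * 2**-1 = 0.5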
Precision Limitations
Many decimal numbers cannot be represented exactly in binary floating-point.
# Exact representation test
a = 0.1 + 0.1 + 0.1
b = 0.3
print(a == b) # False
print(f"{a:.20f}") # 0.30000000000000004441
print(f"{b:.20f}") # 0.29999999999999998890
# Smallest difference
import sys
epsilon = sys.float_info.epsilon
print(epsilon) # 2.220446049250313e-16
The classic 0.1 + 0.1 + 0.1 ≠ 0.3 example demonstrates float precision issues. This happens because 0.1 in binary is a repeating fraction (like 1/3 in decimal) that must be rounded to the nearest 53-bit value.
The machine epsilon (2⁻⁵² ≈ 2.22e-16) is the gap between 1.0 and the next larger representable float. It is the fundamental limit of relative float precision.
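A quick demonstration of epsilon at work (assuming the default IEEE round-half-to-even behavior):
import sys

eps = sys.float_info.epsilon
print(1.0 + eps == 1.0)      # False - a full epsilon is representable
print(1.0 + eps / 2 == 1.0)  # True - the halfway case rounds back to 1.0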
Accumulation of Errors
Repeated operations magnify floating-point errors.
# Summing many small numbers
total = 0.0
for _ in range(1_000_000):
    total += 0.000001
print(total) # 1.000000000007918 (not exactly 1.0)
# Comparing floats safely
def almost_equal(a, b, rel_tol=1e-9, abs_tol=0.0):
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
print(almost_equal(total, 1.0)) # True
When summing many small floats, errors accumulate due to limited precision. The result drifts from the mathematically correct value.
The solution is to use tolerance-based comparisons (like math.isclose()) rather than exact equality checks. The example shows how to implement a basic version.
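The standard library covers both needs directly: math.isclose performs the tolerance comparison, and math.fsum tracks partial sums to produce a correctly rounded total. A short sketch:
import math

values = [0.000001] * 1_000_000
print(math.isclose(sum(values), 1.0))  # True (default rel_tol=1e-9)
print(math.fsum(values))               # 1.0 - correctly rounded sum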
Decimal Alternatives
For exact decimal arithmetic, use the decimal module.
from decimal import Decimal, getcontext
# Exact decimal arithmetic
a = Decimal('0.1') + Decimal('0.1') + Decimal('0.1')
b = Decimal('0.3')
print(a == b) # True
# Precision control
getcontext().prec = 28 # 28 significant digits
result = Decimal(1) / Decimal(7)
print(result) # 0.1428571428571428571428571429
# Financial calculations
price = Decimal('19.99')
tax = Decimal('0.08')
total = price * (1 + tax)
print(total) # 21.5892
The decimal module provides decimal floating-point arithmetic with user-definable precision. It's ideal for financial applications where exact decimal representation is required.
Decimal stores numbers in base-10, avoiding binary conversion issues. However, it's slower than native floats and requires explicit precision management.
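In money handling, results are typically rounded back to cents with quantize, which also makes the rounding mode explicit. A sketch based on the tax example above:
from decimal import Decimal, ROUND_HALF_UP

total = Decimal('19.99') * Decimal('1.08')
print(total)  # 21.5892
print(total.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))  # 21.59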
Fraction Type
For exact rational arithmetic, use fractions.
from fractions import Fraction

# Exact fractions
a = Fraction(1, 3) # 1/3
b = Fraction(1, 2) # 1/2
print(a + b) # 5/6

# Float to Fraction
c = Fraction.from_float(0.1)
print(c) # 3602879701896397/36028797018963968

# Exact calculations
x = Fraction(1, 10)
sum_fractions = sum([x, x, x])
print(sum_fractions == Fraction(3, 10)) # True
The Fraction type represents numbers as exact numerator/denominator pairs. This avoids floating-point rounding errors for rational numbers.
Converting floats to fractions reveals their exact binary representation. Fractions are useful when exact ratios must be preserved through calculations.
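When a float's huge exact fraction is unwanted, limit_denominator finds the closest fraction with a bounded denominator, often recovering the intended value:
from fractions import Fraction
import math

print(Fraction(0.1).limit_denominator())          # 1/10
print(Fraction(math.pi).limit_denominator(1000))  # 355/113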
Float Internals
Python's float implementation details.
import sys
import math
# Float attributes
print(sys.float_info)
"""
sys.float_info(
max=1.7976931348623157e+308,
min=2.2250738585072014e-308,
dig=15,
mant_dig=53,
...
)
"""
# Float components
def float_components(f):
    mant, exp = math.frexp(f)
    return mant, exp
m, e = float_components(3.14)
print(f"Mantissa: {m}, Exponent: {e}") # Mantissa: 0.785, Exponent: 2
The sys.float_info struct reveals platform-specific float
characteristics. Key values include maximum/minimum representable numbers and
mantissa precision.
The math.frexp function decomposes a float into its mantissa and
exponent components. This shows how numbers are normalized in scientific
notation.
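math.ldexp is the inverse of frexp, and float.hex shows the same significand/exponent split in exact hexadecimal notation. A brief sketch:
import math

m, e = math.frexp(3.14)
print(math.ldexp(m, e))  # 3.14 - reconstructs the original exactly
print((3.14).hex())      # 0x1.91eb851eb851fp+1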
When to Use Floats
Appropriate use cases for floating-point.
# Scientific computing
import math

def quadratic(a, b, c):
    # Assumes a non-negative discriminant (two real roots)
    discriminant = b**2 - 4*a*c
    sqrt_discriminant = math.sqrt(discriminant)
    return (-b + sqrt_discriminant)/(2*a), (-b - sqrt_discriminant)/(2*a)
# Physical simulations
def simulate_projectile(v0, angle, dt=0.01):
    angle_rad = math.radians(angle)
    vx = v0 * math.cos(angle_rad)
    vy = v0 * math.sin(angle_rad)
    # Simulation loop using floats
    ...
# Graphics programming
def normalize_vector(x, y, z):
    length = math.sqrt(x**2 + y**2 + z**2)
    return x/length, y/length, z/length
Floats are ideal for scientific computing, physics simulations, and graphics where:
- Approximate results are acceptable
- Performance is critical
- Large dynamic range is needed
- Hardware acceleration is available
The examples show typical float applications where small errors don't affect results meaningfully.
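As a quick usage check, the quadratic helper above returns the expected roots for a well-conditioned input:
print(quadratic(1, -3, 2))  # (2.0, 1.0) - roots of x**2 - 3x + 2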
Best Practices
- Avoid exact comparisons: Use math.isclose() or tolerance ranges
- Beware accumulation: Sum errors grow with operation count
- Use decimals for money: Financial calculations need exactness
- Consider fractions: When working with rational numbers
- Know your precision: sys.float_info shows system limits
Author
My name is Jan Bodnar, and I am a passionate programmer with extensive programming experience. I have been writing programming articles since 2007. To date, I have authored over 1,400 articles and 8 e-books. I possess more than ten years of experience in teaching programming.