---
title: What happens if we represent unix time as floats?
date: 2026-01-16
tags: [code, math, time]
description: "When evaluating the performance of some software component, I want to get high precision. But when I talk about millions of years in the future, I don't care about the exact second."
---

Unix time is the number of seconds that have elapsed since the epoch
(1970-01-01 00:00:00 UTC). To my knowledge, it is usually expressed as an
integer, which gives us fun issues like
[Y2K38](https://en.wikipedia.org/wiki/Year_2038_problem).

Floating point numbers (or floats) are great because they can represent both
tiny and huge numbers. The downside is that they lose precision as they get
farther away from 0.

That sounds like a good fit for measuring time. I want to get high precision
when evaluating the performance of some software component (right now). But
when I talk about millions of years in the future, I don't care about the exact
second.

So let's have a quick look at the math and see if this is workable.

## Basics

An IEEE 754 float consists of a single bit for the sign $s$, $i$ bits for the
mantissa $m$, and $j$ bits for the exponent $e$. Its value then is:

$$x = (-1)^s \cdot (1 + m \cdot 2^{-i}) \cdot 2^e$$
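As a sketch, we can pull these three fields out of a 32-bit float with Python's
`struct` module (the helper name and example value are mine):

```python
import struct

def float32_fields(x: float) -> tuple[int, int, int]:
    """Unpack a 32-bit float into its (sign, exponent, mantissa) bit fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = ((bits >> 23) & 0xFF) - 127  # subtract the exponent bias
    mantissa = bits & ((1 << 23) - 1)
    return sign, exponent, mantissa

s, e, m = float32_fields(-6.5)
print(s, e, m)  # 1 2 5242880, i.e. -6.5 = (-1)**1 * (1 + 5242880 * 2**-23) * 2**2
```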

We get the smallest possible increment for a given number by increasing the
mantissa by 1:

$$
(-1)^s \cdot (1 + (m + 1) \cdot 2^{-i}) \cdot 2^e - (-1)^s \cdot (1 + m \cdot 2^{-i}) \cdot 2^e
= (-1)^s \cdot 2^{e - i}
\approx x \cdot 2^{-i}
$$
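We can sanity-check this increment with Python's `math.ulp`, which returns
exactly this gap from a value to the next representable one (Python floats are
64-bit, so $i = 52$ here; the example value is arbitrary):

```python
import math

x = 1_000_000.0

# Exact increment: 2**(e - i), with e = 19 for this x and i = 52.
print(math.ulp(x))   # 2**-33
# The approximation x * 2**-i is within a factor of 2 of the exact value.
print(x * 2**-52)
```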

## The actual numbers

For 32-bit floats, $i$ is 23 and the exponent of normal numbers ranges from -126 to 127 (subnormals extend the reach below $2^{-126}$). That means:

-   The smallest possible increment (close to 0, i.e. on 1970-01-01) is $2^{-149}$ seconds, the spacing between subnormals, which is very very small.
-   The largest possible value is about 10 nonillion years in the future, at which point the increment will be $2^{104}$ seconds, about a septillion years.
-   The increment at the time of writing is $2^7 = 128$ seconds (the approximation $x \cdot 2^{-i}$ gives about 211), which is much too coarse for most use cases.
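NumPy's `np.spacing` gives the exact step between adjacent float32 values (the
approximation $x \cdot 2^{-23}$ overestimates it by up to a factor of 2); the
timestamp below is an assumption for the post's date:

```python
import numpy as np

# Unix timestamp for 2026-01-16 00:00 UTC (assumed "time of writing").
now = np.float32(1_768_521_600)

# Distance from `now` to the next representable float32, in seconds.
print(np.spacing(now))  # 128.0
```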

We can also invert the calculation to find out when we cross certain thresholds:

-   We reached a single second of precision some time around 1970-04-08.
-   We reached a minute of precision some time around 1985-12-13.
-   We will reach an hour of precision some time around 2926-12-20.
-   We will reach a day of precision in the year 24,937.
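Inverting the approximation $x \cdot 2^{-23} = \text{step}$ gives the timestamp
$x = \text{step} \cdot 2^{23}$. A small sketch (the dates can differ by a day or
so from the ones above, depending on rounding):

```python
from datetime import datetime, timezone

MANTISSA_BITS = 23  # 32-bit float

# Timestamp at which the float32 increment reaches `step` seconds,
# using the approximation x * 2**-23 = step.
for label, step in [("second", 1), ("minute", 60), ("hour", 3600)]:
    t = step * 2**MANTISSA_BITS
    print(label, datetime.fromtimestamp(t, tz=timezone.utc).date())

# second 1970-04-08
# minute 1985-12-13
# hour 2926-12-20
```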

For 64-bit floats, $i$ is 52. That means:

-   The increment at the time of writing is roughly 0.4 microseconds.
-   We will reach a full microsecond of precision some time around 2112-09-18.
-   We will reach a millisecond of precision in the year 144,683.
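The same check with `math.ulp` on a 64-bit float (the timestamp is again an
assumption for the post's date):

```python
import math

now = 1_768_521_600.0  # 2026-01-16 00:00 UTC as a unix timestamp

print(math.ulp(now))  # exact increment: 2**-22 s, about 238 nanoseconds
print(now * 2**-52)   # the approximation: about 0.4 microseconds
```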

## Conclusion

32-bit floats are probably too coarse. But using 64-bit floats for measuring
time might actually be a good idea. They can represent plenty of time in the
past and future. And they provide sub-microsecond precision for the current time.

What I found surprising is how quickly the precision deteriorates. For 32-bit
floats, it starts out at $2^{-149}$ seconds and already reaches a single second
about three months later. So if you need extremely high precision, your best
bet is probably to define a custom epoch.
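A minimal sketch of the custom-epoch idea: record a reference timestamp once
and store all measurements relative to it, so values stay near zero where the
spacing is finest. (Python's `time.perf_counter` uses an unspecified reference
point for exactly this reason.)

```python
import math
import time

# Reference point recorded once at startup; all later timestamps
# are stored relative to it, keeping them small.
EPOCH = time.time()

def now() -> float:
    """Seconds since our custom epoch, as a 64-bit float."""
    return time.time() - EPOCH

elapsed = now()
# Near zero, float64 spacing is far below a nanosecond...
print(math.ulp(elapsed))
# ...while near today's unix time it is hundreds of nanoseconds.
print(math.ulp(time.time()))
```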
