What happens if we represent unix time as floats?

Published on 2026-01-16 · tags: code, math, time

Unix time is the number of seconds that have elapsed since the epoch (1970-01-01 00:00:00 UTC). To my knowledge, it is usually expressed as an integer, which gives us fun issues like Y2K38.

Floating point numbers (or floats) are great because they can represent both tiny and huge numbers. The downside is that they lose precision as they get farther away from 0.

That sounds like a good fit for measuring time. I want to get high precision when evaluating the performance of some software component (right now). But when I talk about millions of years in the future, I don't care about the exact second.

So let's have a quick look at the math and see if this is workable.

Basics

An IEEE 754 float consists of a single bit for the sign $s$, $i$ bits for the mantissa $m$, and $j$ bits for the exponent $e$. Its value then is:

$$x = s \cdot (1 + m \cdot 2^{-i}) \cdot 2^e$$

We get the smallest possible increment for a given number by increasing the mantissa by 1:

$$s \cdot (1 + (m + 1) \cdot 2^{-i}) \cdot 2^e - s \cdot (1 + m \cdot 2^{-i}) \cdot 2^e = s \cdot 2^{e - i} \approx x \cdot 2^{-i}$$
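Python's `math.ulp` returns exactly this increment for 64-bit floats ($i = 52$), so the approximation is easy to check:

```python
import math

# The distance to the next representable double grows with the
# magnitude of x, roughly x * 2**-52.
for x in [1.0, 1e6, 1e9]:
    print(f"x = {x:>6.0e}: ulp = {math.ulp(x):.3e} ~ x * 2**-52 = {x * 2 ** -52:.3e}")

# For an exact power of two the relation is exact:
assert math.ulp(2.0 ** 30) == 2.0 ** (30 - 52)
```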

The actual numbers

For 32-bit floats, $i$ is 23 and the exponent of normal numbers ranges from -126 to 127. That means:
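As a sketch of what that looks like in practice, here's a small stdlib helper (the `float32_spacing` name and the sample times are mine) that measures the step size of a 32-bit float at a few unix times:

```python
import struct

def float32_spacing(x: float) -> float:
    """Distance from x (rounded to float32) to the next representable float32."""
    # Round x to float32, then bump the bit pattern by one.
    rounded = struct.unpack("<f", struct.pack("<f", x))[0]
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    next_up = struct.unpack("<f", struct.pack("<I", bits + 1))[0]
    return next_up - rounded

for label, t in [
    ("one hour after the epoch", 3_600.0),
    ("one year (~2**25 s)", 2.0 ** 25),
    ("2026 (~1.77e9 s)", 1.77e9),
]:
    print(f"{label:>26}: step = {float32_spacing(t):.3e} s")
```

At a 2026-ish timestamp the exponent is 30, so the step is $2^{30 - 23} = 128$ seconds.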

We can also invert the calculation to find out when we cross certain thresholds:
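A minimal sketch of that inversion, using the approximation that the increment near $x$ is $x \cdot 2^{-23}$ (the step sizes chosen below are arbitrary examples):

```python
# For 32-bit floats (i = 23) the increment near x is about x * 2**-23,
# so a step size of `delta` is first reached once |x| is about delta * 2**23.
SECONDS_PER_DAY = 86_400

for delta, name in [(1e-3, "1 ms"), (1.0, "1 s"), (60.0, "1 min")]:
    threshold = delta * 2 ** 23  # seconds from the epoch
    print(f"step of {name:>5} reached ~{threshold / SECONDS_PER_DAY:,.1f} days in")
```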

For 64-bit floats, $i$ is 52. That means:
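Python floats are 64-bit, so `math.ulp` reports the step size directly. For example at a 2026-ish unix time (~1.77e9 s, an approximate value I picked):

```python
import math

# Around 2026 the unix time is ~1.77e9 s (exponent 30), so the step is
# 2**(30 - 52) ~ 2.4e-7 s: a few hundred nanoseconds.
now = 1.77e9
print(f"step now:      {math.ulp(now):.3e} s")

# A million years from now (~3.2e13 s, exponent 44) the step is still ~4 ms.
million_years = 1e6 * 365.25 * 86_400
print(f"step in 1 Myr: {math.ulp(million_years):.3e} s")
```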

Conclusion

32-bit floats are probably too coarse. But using 64-bit floats for measuring time might actually be a good idea. They can represent plenty of time in the past and future. And they provide a few hundred nanoseconds of precision for the current time.

What I found surprising is how quickly the precision deteriorates. For 32-bit floats, it starts out at $2^{-149}$ seconds and already reaches a full second about three months later ($2^{23}$ seconds after the epoch). So if you need extremely high precision, your best bet is probably to define a custom epoch.
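A custom epoch is easy to sketch: store time relative to a recent reference instead of 1970, so values stay small and the float spacing stays fine. The reference below (2026-01-01 00:00:00 UTC) is an arbitrary choice of mine:

```python
import math

# Hypothetical custom epoch: 2026-01-01 00:00:00 UTC as a unix timestamp.
CUSTOM_EPOCH = 1_767_225_600.0

def to_custom_time(unix_seconds: float) -> float:
    """Seconds since the custom epoch, as a 64-bit float."""
    return unix_seconds - CUSTOM_EPOCH

# A few hours into 2026, a 64-bit float relative to 1970 steps in
# ~2.4e-7 s, while the same instant relative to the custom epoch
# steps in ~1.8e-12 s (picoseconds).
t = CUSTOM_EPOCH + 10_000.0
print(f"vs 1970:         {math.ulp(t):.1e} s")
print(f"vs custom epoch: {math.ulp(to_custom_time(t)):.1e} s")
```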