What happens if we represent unix time as floats?
Unix time is the number of seconds that have elapsed since the epoch (1970-01-01 00:00:00 UTC). To my knowledge, it is usually expressed as an integer, which gives us fun issues like Y2K38.
Floating point numbers (or floats) are great because they can represent both tiny and huge numbers. The downside is that they lose precision as they get farther away from 0.
That sounds like a good fit for measuring time. I want to get high precision when evaluating the performance of some software component (right now). But when I talk about millions of years in the future, I don't care about the exact second.
So let's have a quick look at the math and see if this is workable.
Basics
An IEEE 754 float consists of a single bit for the sign s, m bits for the mantissa M, and the remaining bits for the exponent E. Its value then is:

(-1)^s · (1 + M / 2^m) · 2^E

We get the smallest possible increment for a given number by increasing the mantissa M by 1:

increment = 2^E / 2^m = 2^(E - m)
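As a sanity check, this formula can be verified against the standard library: Python's `math.ulp` (3.9+) returns exactly this increment, the gap between a float and the next representable one. A small sketch, using one billion as an arbitrary example value:

```python
import math

# For a 64-bit float, m = 52. math.ulp(x) should equal 2^(E - m),
# where E is the exponent of x.
x = 1_000_000_000.0
E = math.floor(math.log2(x))           # exponent of x, here 29
assert math.ulp(x) == 2.0 ** (E - 52)  # increment = 2^(E - m)
```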
The actual numbers
For 32-bit floats, m is 23 and the exponent E ranges from -126 to 127 (the two extreme values are reserved for special cases). That means:
- The smallest possible increment (close to 0, i.e. on 1970-01-01) is 2^(-126 - 23) = 2^-149 ≈ 1.4 · 10^-45 seconds, which is very, very small.
- The largest possible value is about 10 nonillion (10^31) years in the future, at which point the increment will be 2^(127 - 23) = 2^104 seconds, about a septillion years.
- The increment at the time of writing is 2^(30 - 23) = 128 seconds, which is much too coarse for most use cases.
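To make the coarseness concrete, here is a small sketch that rounds a timestamp to 32-bit precision with the stdlib `struct` module; 1_700_000_000 (late 2023) is just an arbitrary example timestamp:

```python
import struct

def to_float32(x: float) -> float:
    """Round x to the nearest 32-bit float via a 4-byte round-trip."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Around 2023 the float32 increment is 128 seconds, so adding a whole
# minute does not change the stored value at all.
t = 1_700_000_000.0
assert to_float32(t + 60.0) == to_float32(t)
```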
We can also invert the calculation to find out when we cross certain thresholds:
- We reached a single second of precision some time around 1970-04-08
- We reached a minute of precision some time around 1985-12-13
- We will reach an hour of precision some time around 2926-12-20
- We will reach a day of precision in the year 24,937
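The dates above follow from solving increment ≈ t / 2^23 for t. A quick sketch to reproduce them:

```python
from datetime import datetime, timezone

def threshold(increment: float) -> datetime:
    """UTC time at which the float32 increment reaches `increment` seconds,
    using the approximation increment ~= t / 2**23."""
    return datetime.fromtimestamp(increment * 2**23, tz=timezone.utc)

print(threshold(1).date())     # 1970-04-08
print(threshold(60).date())    # 1985-12-13
print(threshold(3600).date())  # 2926-12-20
```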
For 64-bit floats, m is 52. That means:
- The increment at the time of writing is roughly 0.4 micro seconds (a few hundred nano seconds).
- We will reach a full micro second of precision some time around 2112-09-18
- We will reach a milli second of precision in the year 144,683.
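This can be double-checked with `math.nextafter` (Python 3.9+), which returns the next representable double above a value; 1_700_000_000 is again an arbitrary 2023-ish timestamp:

```python
import math

# t lies between 2^30 and 2^31, so the float64 step at t is
# 2^(30 - 52) ~= 0.24 microseconds.
t = 1_700_000_000.0
step = math.nextafter(t, math.inf) - t
assert step == 2.0 ** -22  # ~2.4e-7 seconds
```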
Conclusion
32-bit floats are probably too coarse. But using 64-bit floats for measuring time might actually be a good idea. They can represent plenty of time in the past and future. And they provide sub-microsecond precision for the current time.
What I found surprising is how quickly the precision deteriorates. For 32-bit floats, it starts out at about 10^-45 seconds and already reaches a whole second barely three months later. So if you need extremely high precision, your best bet is probably to define a custom epoch.
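In Python, `time.perf_counter` is essentially this trick already: it returns float seconds measured from an unspecified, recent epoch (often system boot), so the values stay small and the float64 increment stays tiny. A minimal sketch:

```python
import time

# Measure a short operation against a recent epoch instead of 1970.
# Because perf_counter's values are small, their float64 increment is far
# finer than that of time.time(), whose epoch is decades in the past.
start = time.perf_counter()
total = sum(range(1_000_000))  # some work to measure
elapsed = time.perf_counter() - start
print(f"took {elapsed:.9f} s")
```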