Once upon a time, there was this Unix thing powering departmental mainframes. When it came to keeping track of system time, a virtual clock ticking once every second and counting seconds since the Epoch sounded like a perfect solution. Such a clock, stored in a 32-bit variable, had several advantages – it was simple, easy to store and manipulate, and guaranteed to be monotonically increasing for so long that the day the guarantee broke would be someone else's problem. Now switch context: you are working on a microcontroller within an embedded system. The micro has the standard nowadays System-on-Chip parade of peripherals, including a hardware timer that can be programmed to interrupt your code at whatever frequency is convenient to you.
Using 1 ms as a clock base may seem a little aggressive, but that timespan makes a lot of sense – it is unnoticeable for us sluggish humans, but it's sort of eons for a speedy electron. In other words, it is a good compromise on which to base your time engine. On top of it you can build some timer virtualization, so that your application may employ multiple timers, either periodic or one-shot.
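As a minimal sketch of that time base (all names here are illustrative, not from any particular SoC): the hardware timer fires every 1 ms and the interrupt handler increments a free-running millisecond counter.

```c
#include <stdint.h>

/* Free-running millisecond counter, incremented by the timer interrupt.
 * It wraps silently from 0xFFFFFFFF back to 0x00000000. */
static volatile uint32_t g_ticksMs = 0;

/* Hypothetical handler, assumed to be hooked to a hardware timer
 * configured for a 1 ms period. */
void timer_isr_1ms(void)
{
    g_ticksMs++;
}

/* Current time in ms since power-up; assumes a 32-bit read is atomic
 * on this micro, which is typical but worth checking on yours. */
uint32_t now_ms(void)
{
    return g_ticksMs;
}
```

Virtual timers (periodic or one-shot) can then be layered on top by comparing their deadlines against `now_ms()`.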
So far so good, but there’s a catch. Consider using a 32-bit variable to keep the time (like the glorious Unix time_t). Starting from 0 on every power-up (after all, it is not mandatory to keep the full calendar information), it will take one-thousandth of the time it takes Unix to run out of seconds. Is that enough to be someone else’s problem? No, that’s much closer to your paycheck: if the variable is unsigned, it takes about 49.7 days to roll over.
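The 49.7-day figure is just 2^32 milliseconds converted to days:

```c
/* 2^32 milliseconds expressed in days: how long a 32-bit counter
 * ticking at 1 ms runs before it rolls over. */
double ms_counter_rollover_days(void)
{
    /* 4294967296 ms / (1000 ms/s * 60 s/min * 60 min/h * 24 h/day) */
    return 4294967296.0 / (1000.0 * 60.0 * 60.0 * 24.0);  /* ~49.7 */
}
```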
At the rollover, your clock loses one important property – increasing monotonicity. One ms you are at 0xFFFFFFFF and the next you are at 0x00000000. Too bad: every comparison on time in your code is going to fail.
Is there anything that we could do? Let’s say we have code like this –

if( now > startTime + workingSpan ) { /* timeout */ }
It seems perfectly legit, but what happens when startTime approaches the rollover boundary? Let’s say that startTime is 0xFFFFFFF0, workingSpan is 0x10, and now is 0xFFFFFFF8. now is within the working span, so we expect the test to fail – no timeout yet. Instead the test succeeds, because 0xFFFFFFF0 + 0x10 rolls over into 0x00000000, which IS less than now (0xFFFFFFF8), and a timeout is reported prematurely.
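Wrapping the naive test in a function (a hypothetical helper, just for experimenting) makes the failure easy to reproduce with exactly those numbers:

```c
#include <stdint.h>

/* Naive timeout test: compares now against the absolute deadline.
 * Returns 1 when it believes the working span has expired. */
int naive_timed_out(uint32_t now, uint32_t startTime, uint32_t workingSpan)
{
    /* startTime + workingSpan is computed modulo 2^32, so near the
     * rollover boundary the deadline wraps to a small value. */
    return now > (uint32_t)(startTime + workingSpan);
}
```

With startTime = 0xFFFFFFF0, workingSpan = 0x10, and now = 0xFFFFFFF8, the deadline wraps to 0x00000000 and the function returns 1 (timeout) even though we are still inside the span.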
Is there any way around this before resorting to 64 bits?
You may try to check if the operation is going to overflow and act accordingly, e.g.
uint32_t LIMIT = UINT32_MAX - workingSpan; if( startTime > LIMIT ) { /* handle overflow... */ }
Or you may check if the operation overflowed –
uint32_t timeLimit = startTime + workingSpan; if( now < startTime || now > timeLimit ) { /* handle overflow */ }
You may have to trim the code a bit for corner cases, but I wouldn’t bother, because there is a simpler way. An interesting property of modulo arithmetic is that

(a - b) mod n = a - b

as long as 0 <= a - b < n. That means that if we manage to transform the comparison from absolute to relative, we can forget about the overflow and rollover problem. So the following code:
if( workingSpan < now - startTime ) { /* timeout */ }
is correct from an overflow/rollover point of view, requires no extra computing time or space, and is as readable as the original code.
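Trying the relative version on the same numbers that broke the naive test (again packaged in an illustrative helper) shows it behaving correctly on both sides of the boundary:

```c
#include <stdint.h>

/* Rollover-safe timeout test: works on the elapsed time rather than
 * on an absolute deadline. */
int timed_out(uint32_t now, uint32_t startTime, uint32_t workingSpan)
{
    /* Unsigned subtraction is performed modulo 2^32, so now - startTime
     * yields the true elapsed time even after now has wrapped past
     * 0xFFFFFFFF, provided the real elapsed time is below 2^32 ms. */
    return workingSpan < (uint32_t)(now - startTime);
}
```

With startTime = 0xFFFFFFF0 and workingSpan = 0x10: at now = 0xFFFFFFF8 the elapsed time is 8, so no timeout; at now = 0x00000008 (past the rollover) the elapsed time is 0x18, so the timeout fires, as it should.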