- nanoseconds
- microseconds
- milliseconds
- seconds
- minutes
- hours
(You can make your own, but that's not the point here.)
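Still, for the curious, making your own is just a matter of picking a tick ratio. Here's a minimal sketch; the names centiseconds and frames are mine, purely for illustration:

#include <chrono>
#include <ratio>

// One tick = 1/100 of a second.
using centiseconds = std::chrono::duration<long long, std::centi>;

// One tick = 1/60 of a second, e.g. one frame at 60 fps.
using frames = std::chrono::duration<long long, std::ratio<1, 60>>;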
If you take nothing else away from this, remember these two points:
- All of these are just different specializations of the "std::chrono::duration" template.
- All of them can be converted to and from one another (within the limits of their precision).
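Here's a minimal sketch of both points (the variable names are mine). Converting to a finer unit is exact, so it happens implicitly; going the other way can lose precision, so it has to be spelled out with duration_cast:

#include <chrono>

std::chrono::seconds      s( 2 );    // 2 seconds
std::chrono::milliseconds ms = s;    // implicit: 2,000 ms, no information lost
std::chrono::seconds      back =
    std::chrono::duration_cast<std::chrono::seconds>( ms );  // explicit: may truncate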
Real-world example
How often have you seen something like this?
void SomeClient::setTimeout( int timeout );
I've seen this over and over, and it's never easy to figure out what's going on. Is the timeout in seconds? Milliseconds? Microseconds? None of the above? There's no easy way to tell; you have to rely on the documentation (if there even is any).
Then you end up with a call like this:
myClient.setTimeout( 500000 );
That doesn't really help. Whatever unit the function expects, this call is clearly sending a lot of it. Now it's (1) still dubious as to the units, and (2) hard to read, since it's just a long run of digits. We can't easily tell what's going on.
Enter chrono.
Imagine that the function instead looked like this:
void SomeClient::setTimeout( std::chrono::microseconds timeout );
This immediately tells us that the function works in microseconds. It also suggests that anything finer than a microsecond is below the resolution the function cares about.
So then my call ends up looking like this:
myClient.setTimeout( std::chrono::milliseconds( 500 ) );
Now it's clear that the timeout is being set to 500 milliseconds, and it doesn't matter what unit the function actually wants: whatever we pass in will be converted properly. In this case, the count is multiplied by 1,000, giving 500,000 microseconds. Done deal.
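Putting the pieces together, here's a minimal, compilable sketch of what such a class might look like. SomeClient's internals are my own assumption, invented just so the example builds:

#include <chrono>

class SomeClient
{
public:
    // The signature itself documents the unit; whatever the caller
    // passes in arrives here already converted to microseconds.
    void setTimeout( std::chrono::microseconds timeout ) { m_timeout = timeout; }

private:
    std::chrono::microseconds m_timeout{};
};

int main()
{
    SomeClient myClient;

    // 500 ms arrives as 500,000 us; the conversion is implicit and exact.
    myClient.setTimeout( std::chrono::milliseconds( 500 ) );
}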
What if...?
Finally, what happens if we try to do it the old way?
myClient.setTimeout( 500000 );
This now results in a compile-time error: a duration's constructor is explicit, so a bare integer won't silently convert into one. Unreadability, you're doin' it wrong.
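To spell that out (the error wording below is my paraphrase, not an actual compiler message):

myClient.setTimeout( 500000 );                                // error: no conversion from int
                                                              // to std::chrono::microseconds
myClient.setTimeout( std::chrono::microseconds( 500000 ) );   // OK, and the unit is now explicit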