Background:
The type, precision, and range of both time_t and clock_t are implementation-defined. The local time zone and daylight saving time handling are also implementation-defined, and even the Unix time standard can vary slightly between systems. In effect, all the standard guarantees about time_t and clock_t is "they're numbers." It is therefore important to be careful when using time_t and clock_t in C, because unwarranted assumptions can lead to problems ranging from errors in program timing to overflow from invalid type conversions. What follows are some recommendations that help avoid common pitfalls that cause security vulnerabilities.
Recommendation #1:
When comparing against a time_t, first cast the value you are comparing to time_t.
Traditionally, time_t has been a signed 32-bit integer type on Unix systems, but the C99 standard only requires that time_t be an arithmetic type. It is a common error (and temptation) to use plain integers interchangeably with time_t; doing so can lead to invalid comparisons in your code.
Non-Compliant Code
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    if (now != -1) {
        fputs(ctime(&now), stdout);
    }
    return 0;
}
The C standard mandates that time() return (time_t)(-1) on failure. Some systems may represent (time_t)(-1) as something completely different from the integer -1. This could lead to an invalid comparison and therefore invalid output (or worse, depending on what else is put in the if statement). The correct code is as follows:
Compliant Code
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    if (now != (time_t)-1) {
        fputs(ctime(&now), stdout);
    }
    return 0;
}
In the code above the comparison will function as expected.
Recommendation #2:
Use difftime() when subtracting times, and avoid other arithmetic operations when possible.
The result of performing arithmetic operations directly on time_t values is not defined by the standard. Even if your system's time_t is an integer type, adding and subtracting raw time values may or may not produce a meaningful result.
However, situations do arise in which it is necessary to add or subtract times. In general, this should be avoided; if you must do so, the "best guess" approaches are described below.
Subtraction:
C99 defines difftime(), which subtracts two time values and returns a double representing the number of seconds between them. Always use difftime() rather than arithmetic subtraction.
Addition:
C99 does not define any addition functions for time; the best thing to do when you need to add time is not to. However, if you must, it is best to write a custom function for the platform you are on (your platform's difftime() implementation is a good place to look for ideas on structure).
Recommendation #3:
Use proper form when subtracting clock_t values to measure processing time.
Often you will wish to measure the amount of processor time it takes to run a given piece of work. The C99 standard specifies the clock() function for this particular purpose: "In order to measure the time spent in a program, the clock function should be called at the start of the program and its return value subtracted from the value returned by subsequent calls..." It further specifies: "To determine the time in seconds, the value returned by the clock function should be divided by the value of the macro CLOCKS_PER_SEC. If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t)(-1)."
Two common errors are made when performing this operation. They are illustrated below.
Non-compliant Code Example
#include <time.h>

int run_big_program(void) {
    clock_t start, finish;
    int seconds;
    start = clock();
    run_long_program();
    finish = clock();
    seconds = ((finish - start) / CLOCKS_PER_SEC);
    return seconds;
}
The problems here are: 1) the return values of clock() are not checked (a common omission in implementations of this pattern) and 2) the type of the return value seconds is inappropriate.
The remedy is to check the return values properly and to use an appropriate type for the result of the subtraction. The C standard implies that finish - start yields the number of clock ticks consumed by run_long_program(), so you can be fairly confident of that.
C99 Section 7.23.1 states that CLOCKS_PER_SEC "expands to a constant expression with type clock_t that is the number per second of the value returned by the clock function", so dividing by CLOCKS_PER_SEC yields the number of seconds. What is not specified, however, is the type produced by that division. The best recommendation is to look at the types defined in time.h and use a compatible (and large, to prevent overflow) type. The convention on Linux and most x86 architectures seems to be to use a double.
Compliant Solution
#include <time.h>

double run_big_program(void) {
    clock_t start, finish;
    double seconds;
    start = clock();
    if (start == (clock_t)(-1)) {
        return -1.0;
    }
    run_long_program();
    finish = clock();
    if (finish == (clock_t)(-1)) {
        return -1.0;
    }
    seconds = (double)(finish - start) / CLOCKS_PER_SEC;
    return seconds;
}
When appropriate, one should also check for overflow in seconds.
Credits/Interesting Links:
- The original idea for this came from the C Language Gotchas site, accessible here
- The Wikipedia article on Unix time is quite enlightening. Read it here
- An article about a denial of service in 64-bit Microsoft time code. Read it here
- Interesting time_t discussion from which I pulled my example code. Read it here