
The type, precision, and range of both time_t and clock_t are implementation-defined, as are the local time zone and Daylight Saving Time. Even the Unix convention of counting seconds since the epoch can vary slightly between systems; the C standard says only that time_t and clock_t are arithmetic types capable of representing times. It is therefore important to be careful when using time_t and clock_t in C, because unwarranted assumptions can lead to problems ranging from incorrect program timing to overflow from invalid type conversions. What follows are some recommendations that help avoid common pitfalls that cause security vulnerabilities.

The difftime() function   

Use difftime() when subtracting times, and avoid other arithmetic operations on time values when possible.

The result of performing arithmetic operations directly on time_t values is not guaranteed to be meaningful. Even if your system represents time_t as an integer, adding or subtracting two times may or may not produce a sensible result.
Situations do arise in which it is necessary to add or subtract times. In general this should be avoided, but if you must do so, a "best guess" at how to do it is described below.
C99 defines difftime(), which subtracts one time value from another and returns the number of seconds between them as a double. Always use difftime() for subtraction rather than the arithmetic subtraction operator.
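
As an illustration, the following minimal sketch (not taken from the original discussion; the work being timed is left as a placeholder) measures elapsed calendar time with difftime() rather than by subtracting time_t values directly:

#include <stdio.h>
#include <time.h>

int main(void) {
    time_t start = time(NULL);
    if (start == (time_t)(-1)) {
        return 1;                    /* time() reported failure */
    }

    /* ... the work being timed would go here ... */

    time_t finish = time(NULL);
    if (finish == (time_t)(-1)) {
        return 1;
    }

    /* difftime() returns the difference in seconds as a double,
       regardless of how the implementation encodes time_t */
    double elapsed = difftime(finish, start);
    printf("elapsed: %f seconds\n", elapsed);
    return 0;
}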

Other Arithmetic

Addition

C99 does not define any functions for adding times; the best thing to do when you need to add time is not to. If you must, it is best to write a custom function for the platform you are on (the implementation of difftime() for your platform is a good place to look for ideas about structure). One possible approach is sketched below.
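
The sketch below is one way this might be done, under the assumption that the offset fits in an int and that mktime() can represent the result; it goes through struct tm and lets mktime() renormalize the fields rather than doing raw arithmetic on time_t:

#include <time.h>

/* Sketch: add a number of seconds to a calendar time without raw
   arithmetic on time_t. Returns (time_t)(-1) on failure. */
time_t add_seconds(time_t base, int seconds) {
    struct tm *tm_ptr = localtime(&base);
    if (tm_ptr == NULL) {
        return (time_t)(-1);
    }
    struct tm copy = *tm_ptr;   /* localtime() returns a shared static buffer */
    copy.tm_sec += seconds;     /* mktime() normalizes out-of-range fields */
    return mktime(&copy);       /* (time_t)(-1) if the result cannot be represented */
}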

Subtraction

Often you will want to measure how long your processor takes to run a given piece of code. The C99 standard specifies the clock() function for this purpose: "In order to measure the time spent in a program, the clock function should be called at the start of the program and its return value subtracted from the value returned by subsequent calls..." It is further specified: "To determine the time in seconds, the value returned by the clock function should be divided by the value of the macro CLOCKS_PER_SEC. If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t)(-1)."

    Two common errors are made when performing this operation. They are illustrated below.

Non-Compliant Code Example

int run_big_program() {
    clock_t start, finish;
    int seconds;

    start = clock();              /* return value is not checked */
    run_long_program();
    finish = clock();             /* return value is not checked */
    seconds = ((finish-start)/CLOCKS_PER_SEC);  /* integer division */
    return seconds;
}

The problems here are that (1) the return values of clock() are never checked for failure (a common mistake in implementations of this idiom), and (2) the type used for the return value seconds is inappropriate.

The remedy is to check the return values of clock() properly and to use an appropriate type for the result of the subtraction. The C standard implies that finish - start yields the number of clock ticks consumed by run_long_program(), so that much can be relied upon.

C99 Section 7.23.1 states that CLOCKS_PER_SEC expands to a constant expression with type clock_t that is the number per second of the value returned by the clock() function, so dividing the difference by CLOCKS_PER_SEC yields the number of seconds. What is not specified, however, is the type that results from that division. The best recommendation is to look at the types defined in time.h and use a compatible (and large, to prevent overflow) type; the convention on Linux and most x86 platforms seems to be to use a double.

Compliant Code Example

double run_big_program() {
    clock_t start, finish;
    double seconds;

    start = clock();
    if (start == (clock_t)(-1)) {    /* processor time unavailable */
        return -1.0;
    }
    run_long_program();
    finish = clock();
    if (finish == (clock_t)(-1)) {   /* processor time unavailable */
        return -1.0;
    }
    /* cast to double before dividing so no precision is lost */
    seconds = (double)(finish - start) / CLOCKS_PER_SEC;
    return seconds;
}

When appropriate, one should also check for overflow in seconds. The key point is that because clock() is called correctly and its return values are checked, a reasonable value results, and this function will correctly return the elapsed time in seconds.
 

Rule    | Severity | Likelihood | Remediation Cost | Priority | Level
MSC05-A | 5        | 2          | 2                | P6       | L2


References

- The original idea for this recommendation came from the C Language Gotchas site.

- The Wikipedia article on Unix time is quite enlightening.

- An article about a denial of service in 64-bit Microsoft time code.

- An interesting time_t discussion, from which the example code was drawn.
