
The precision and range of clock_t and time_t are implementation defined. C99 specifies only that they are "arithmetic types capable of representing times"; the encoding of time values is left unspecified by the standard.

Non-Compliant Code Example

This code attempts to execute do_some_work() repeatedly until at least seconds_to_work seconds have passed. However, because the encoding of time_t is not defined, there is no guarantee that adding seconds_to_work to start advances the deadline by seconds_to_work seconds.

#include <time.h>

void do_some_work(void); /* defined elsewhere */

int do_work(int seconds_to_work) {
  time_t start;
  start = time(NULL);
  if (start == (time_t)(-1)) {
    /* Handle error */
  }
  /* Noncompliant: assumes time_t is encoded in seconds */
  while (time(NULL) < start + seconds_to_work) {
    do_some_work();
  }
  return 0;
}

Compliant Code Example

The compliant code example uses difftime() to determine the difference between two time_t values. difftime() returns the number of seconds elapsed from its second argument to its first argument, expressed as a double.

#include <time.h>

void do_some_work(void); /* defined elsewhere */

int do_work(int seconds_to_work) {
  time_t start, current;
  start = time(NULL);
  if (start == (time_t)(-1)) {
    /* Handle error */
  }
  for (;;) {
    current = time(NULL);
    if (current == (time_t)(-1)) {
      /* Handle error */
    }
    if (difftime(current, start) >= seconds_to_work) {
      break;
    }
    do_some_work();
  }
  return 0;
}

Note that this loop may still fail to terminate, because the range of time_t may not be able to represent two times that are seconds_to_work seconds apart.

Other Arithmetic

Addition

C99 does not define any function for adding to a time value; the best course when you need to add time is to avoid the arithmetic altogether. If it cannot be avoided, write a function tailored to the platform's time encoding (the implementation of difftime() on your platform is a good model for the structure).
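One approach that stays within the standard library is to convert the time to a broken-down struct tm, adjust the desired field, and let mktime() renormalize the result. The following is a minimal sketch of that idea; the helper name add_seconds is illustrative, and daylight-saving or time-zone effects around the adjusted time are ignored.

#include <time.h>

/* Illustrative sketch: add a number of seconds to a calendar time by
   going through a broken-down time and letting mktime() renormalize
   the out-of-range tm_sec field.  Returns (time_t)(-1) on failure. */
time_t add_seconds(time_t base, int seconds) {
  struct tm *p = localtime(&base);  /* points to static storage */
  if (p == NULL) {
    return (time_t)(-1);            /* conversion to local time failed */
  }
  struct tm when = *p;              /* copy before modifying */
  when.tm_sec += seconds;           /* mktime() normalizes this field */
  return mktime(&when);             /* (time_t)(-1) if not representable */
}

Because mktime() reports an unrepresentable result as (time_t)(-1), the caller can detect failure instead of silently relying on the platform's encoding.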

Subtraction

Often you will want to measure the amount of processor time a given piece of code consumes. The C99 standard specifies the clock() function for this purpose: "In order to measure the time spent in a program, the clock function should be called at the start of the program and its return value subtracted from the value returned by subsequent calls..." It further specifies: "To determine the time in seconds, the value returned by the clock function should be divided by the value of the macro CLOCKS_PER_SEC. If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t)(-1)."

Two common errors are made when performing this operation; they are illustrated below.

Non-Compliant Code Example

#include <time.h>

void run_long_program(void); /* defined elsewhere */

int run_big_program(void) {
    clock_t start, finish;
    int seconds;
    start = clock();                 /* return value not checked */
    run_long_program();
    finish = clock();                /* return value not checked */
    seconds = (finish - start) / CLOCKS_PER_SEC;  /* integer result */
    return seconds;
}

The problems here are that (1) the return values of clock() are not checked (a common omission in implementations of this idiom), and (2) the type of seconds, int, is inappropriate for the result of the division.

The remedy is to check the return values of clock() and to use an appropriate type for the result of the subtraction and division. The C standard implies that finish - start yields the processor time consumed by run_long_program(), expressed in implementation-defined clock ticks, so that much can be relied upon.

C99 Section 7.23.1 states that CLOCKS_PER_SEC expands to a constant expression with type clock_t that is the number per second of the value returned by the clock() function, so dividing by CLOCKS_PER_SEC yields the elapsed time in seconds. What is not specified, however, is the type that results from that division. The best recommendation is to look at the types defined in time.h and use a compatible (and large, to prevent overflow) type; the convention on Linux and most x86 platforms appears to be double.

Compliant Code Example

#include <time.h>

void run_long_program(void); /* defined elsewhere */

double run_big_program(void) {
    clock_t start, finish;
    double seconds;
    start = clock();
    if (start == (clock_t)(-1)) {
        return -1.0;                 /* processor time not available */
    }
    run_long_program();
    finish = clock();
    if (finish == (clock_t)(-1)) {
        return -1.0;                 /* processor time not available */
    }
    seconds = (double)(finish - start) / CLOCKS_PER_SEC;
    return seconds;
}

When appropriate, one should also check for overflow in seconds. The key point, though, is that because the return values of clock() are checked and the division is performed in double, this function correctly returns the elapsed processor time in seconds.
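As a usage sketch (assuming run_big_program() and run_long_program() from the example above are in scope), a caller should treat a negative return value as a failed measurement:

#include <stdio.h>

double run_big_program(void);  /* from the compliant example above */

int main(void) {
  double seconds = run_big_program();
  if (seconds < 0.0) {
    /* Handle error: processor time was not available */
    return 1;
  }
  printf("Processor time used: %f seconds\n", seconds);
  return 0;
}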
 

Rule      Severity  Likelihood  Remediation Cost  Priority  Level
MSC05-A   5         2           2                 P6        L2


References

- The original idea for this rule came from the C Language Gotchas site.

- The Wikipedia article on Unix time is quite enlightening.

- An article about a denial of service in 64-bit Microsoft time code.

- An interesting time_t discussion, from which the example code was drawn.
