
Null-terminated byte strings must, as the name implies, be properly null-terminated. String-handling functions cannot determine the length or end of a string that lacks a terminating null character, which can consequently result in buffer overflows and other undefined behavior.

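The code examples included on the original page did not render here. As an illustrative sketch only (not the page's original examples), the following shows how strncpy() can silently drop the null terminator, and how to terminate the result explicitly:

    #include <stdio.h>
    #include <string.h>

    enum { BUF_SIZE = 8 };

    /* Noncompliant: strncpy() does not null-terminate buf when source
       contains BUF_SIZE or more characters, so the subsequent strlen()
       reads past the end of buf (undefined behavior). */
    void noncompliant(const char *source) {
        char buf[BUF_SIZE];
        strncpy(buf, source, sizeof(buf));
        printf("%zu\n", strlen(buf));
    }

    /* Compliant: copy at most BUF_SIZE - 1 characters and terminate
       the result explicitly. */
    void compliant(const char *source) {
        char buf[BUF_SIZE];
        strncpy(buf, source, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
        printf("%zu\n", strlen(buf));
    }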

Exception

An exception to this rule applies when the programmer's intent is to convert a null-terminated byte string to a character array. To be compliant with this standard, that intent must be clearly stated in comments.
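
A minimal sketch of this exception, assuming the goal is to copy the characters of a null-terminated byte string into a plain character array, with the intent stated in a comment as the rule requires:

    #include <string.h>

    void make_char_array(void) {
        char ntbs[] = "0123456789abcdef";
        char arr[16];
        /* Intentionally not null-terminated: arr is a plain character
           array, not a string, and is never passed to string-handling
           functions. */
        memcpy(arr, ntbs, sizeof(arr));
    }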

Risk Assessment

Failure to properly null-terminate byte strings can result in buffer overflows, allowing an attacker to execute arbitrary code with the permissions of the vulnerable process.

Rule     | Severity | Likelihood   | Remediation Cost | Priority | Level
STR32-C  | 3 (high) | 2 (probable) | 2 (medium)       | P12      | L1

Related Vulnerabilities

Search for vulnerabilities resulting from the violation of this rule on the CERT website.

Mitigation Strategies

Static Analysis

We can catch violations of this rule with a local flow analysis. We will assume an integer range analysis is available to track the lengths of strings. (Note: I am not entirely familiar with the literature on buffer-overflow analysis, so we should check whether any existing analyses already handle this scenario.)

  • Presume that all char* parameters are NT (null-terminated). We must check that they are still NT at the end of the function and that the return value is NT. We also check that they are NT before being passed to another function.
  • Any exceptions to the NT rule (functions that accept or return open strings) are specified separately. Since this is C, the most practical option is probably a pair of hardcoded handling routines in the analysis: if a function accepts an open (not null-terminated) string or can return one, we write code specifying this, and the analysis calls these handling routines to retrieve the specifications. Another option would be to use the preprocessor to write in-code specifications, but this is not in the style of C programmers, and we cannot add such specifications to libraries that way. Given the environment, a separate specification, written in C, is probably the best option.
  • The integer range analysis tracks the lengths of char* strings.
  • We use a tuple lattice for the analysis with four elements: bottom, NT (null-terminated), O (open), and top (unknown); see the sketch after this list.
  • Use the specifications (or the default of NT) to set the initial lattice element for each char*.
  • If we index into the string and set a character to '\0', move the string to NT. This applies only if the index is less than the minimum size of the string. (The integer analysis must be aware of strlen() and that it works correctly only on NT strings.)
  • Check that the arguments to every function call match that function's specification; if not, report an error.
  • At the end of the function, check that the return value and the parameters match the specification for the function; if not, report an error.
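
A hypothetical sketch of the four-element lattice described above; the names and the join function are illustrative, not part of any existing tool:

    /* Lattice element tracked for each string by the dataflow analysis. */
    typedef enum { LAT_BOTTOM, LAT_NT, LAT_OPEN, LAT_TOP } StrState;

    /* Join (least upper bound), used to merge facts at control-flow joins:
       bottom is the identity, equal elements stay put, and NT joined with
       OPEN goes to top (unknown). */
    StrState join(StrState a, StrState b) {
        if (a == LAT_BOTTOM) return b;
        if (b == LAT_BOTTOM) return a;
        if (a == b) return a;
        return LAT_TOP;
    }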

There is a question of what to do about character arrays. One option is to assume that char[] is open, so that using it as a char* means we must first make it null-terminated. This could get annoying for developers very quickly. I think it is better to treat char[] as char*; that is, we assume NT and check for it. If the exception case does occur, it will have to be specified.
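
As a small, hypothetical illustration of the exception case, an initializer that exactly fills a character array produces an open character array:

    char a[3] = "abc";   /* legal C: the terminating '\0' is dropped, so a is an
                            open character array */
    char *p = a;         /* used as a char*: the analysis assumes NT by default,
                            so this case would need a separate specification */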

This analysis also impacts STR03-A, STR07-A, and STR31-C.

Rejected Strategies

Testing

It would probably be prohibitively expensive to come up with the test cases by hand. Another option is to use a static analysis to generate the test inputs for char* parameters, but it would still have to generate inputs for the other parameters as well. We would also still have to specify whether a function accepts or returns open strings so that the dynamic analysis knows whether to report a defect. Since we still have to write the specifications, this technique would not save developer time there.

Dynamic Analysis

It seems the analysis would not be very different from the static analysis, in which case we should just do this statically.

Inspection

An inspection would essentially grep for known problem functions and inspect their usage. This is extremely costly: there would be many false positives, and the approach does not scale well. There may also be many false negatives. Say Dev A inspects a function that returns an open string, considers it acceptable (perhaps as one of the exception cases), and documents it as such. Dev B, inspecting another part of the code, might not realize that Dev A allowed an open string. The allowance might be documented, but documentation alone is not very reliable. This can lead to a false sense of confidence that, because developers hand-inspected every case, the code is fine, when in fact a miscommunication can cause a defect.

References

[[ISO/IEC 9899-1999]] Section 7.1.1, "Definitions of terms," and Section 7.21, "String handling <string.h>"
[[Seacord 05]] Chapter 2, "Strings"
[[ISO/IEC TR 24731-2006]] Section 6.7.1.4, "The strncpy_s function"
[[Viega 05]] Section 5.2.14, "Miscalculated null termination"
