...
An understanding of the capabilities and limits of automated detection will help readers of this standard make better use of the coding rules and guidelines.
...
For some code flaws, automated detection methods are too costly (in analysis time, memory, or disk space) to be practical. Makers of automated detection tools (both proprietary and free, open-source code analysis tools) must weigh the value of checking for a particular code flaw against the cost to the average user, users' interest in finding that flaw, and the false-positive rate of that particular checker. Checkers with high false-positive rates tend to displease tool users. For a detailed discussion of these issues, see the article A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World.
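As a hypothetical sketch of how false positives arise (the function names and checker behavior here are assumptions for illustration, not drawn from any particular tool), consider a checker that analyzes each function in isolation and therefore cannot see a guarantee established at the call site:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical false positive: a checker that analyzes each function
 * in isolation may warn that p could be NULL at the strlen() call,
 * even though the only call site in this program guarantees it is not. */
static void print_length(const char *p) {
    printf("%zu\n", strlen(p)); /* possible warning: "p may be NULL" */
}

int main(int argc, char *argv[]) {
    if (argc > 1 && argv[1] != NULL) { /* argument is provably non-null here */
        print_length(argv[1]);
    }
    return 0;
}
```

Suppressing such a checker eliminates the noise but also any true positives it would have found; this is the trade-off tool makers must balance.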
Widely used automated code-flaw detection tools often find somewhat overlapping but quite different sets of code flaws, even among automated static analysis tools alone (see, for example, the SEI technical note Improving the Automated Detection and Analysis of Secure Coding Violations). Some code analysis frameworks run multiple analysis tools to detect a wider variety of code flaws; however, the number of code warnings (many of which are false positives) that must be manually inspected increases accordingly (for more information on this topic, see the SEI blog post Prioritizing Alerts from Static Analysis to Find and Fix Code Flaws).
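The following contrived C sketch illustrates why coverage differs between tool classes (the pairing of flaw to tool class is an illustrative assumption, not a claim about any specific product): one flaw is a classic static-analysis finding, while the other typically surfaces only when a dynamic tool observes a run that reaches it:

```c
#include <stdio.h>
#include <stdlib.h>

/* Contrived example with two distinct flaws, illustrating why different
 * tools find only partially overlapping sets of defects.
 *
 * Flaw 1: a format-string bug, a classic static-analysis finding that
 * can be spotted without running the code.
 * Flaw 2: an off-by-one heap write that a dynamic tool such as
 * AddressSanitizer reports on any run that reaches the loop, while some
 * static checkers miss it. */
void process(const char *user_input) {
    printf(user_input);                 /* Flaw 1: format string */

    char *buf = malloc(16);
    if (buf == NULL) {
        return;
    }
    for (size_t i = 0; i <= 16; ++i) {  /* Flaw 2: writes buf[16] */
        buf[i] = 0;
    }
    free(buf);
}
```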
Human code review is inherently manual (although automation can help document findings and schedule reviews), but it can detect some errors that widely used automated static and dynamic analysis tools do not check for.
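For example, the following hypothetical snippet (the business rule and function name are invented for illustration) compiles cleanly and exhibits no undefined behavior, so typical automated tools stay silent, yet a human reviewer who knows the requirement can spot the defect:

```c
#include <stdio.h>

/* Hypothetical logic error invisible to typical automated tools: the
 * code is well defined and type-correct, but it computes the wrong
 * result. A reviewer who knows the requirement ("apply the discount
 * only to orders of $100 or more") can see the comparison is inverted. */
double discounted_total(double order_total) {
    if (order_total <= 100.0) {   /* bug: should be >= 100.0 */
        return order_total * 0.9; /* 10% discount */
    }
    return order_total;
}

int main(void) {
    printf("%.2f\n", discounted_total(250.0)); /* prints 250.00, not 225.00 */
    return 0;
}
```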
...