First this was going to be a blog post, then a DD post, then a blog post again...
A while ago I read an article that has absolutely nothing to do with security, but with how great it is to work in small, friendly teams - http://pragprog.com/magazines/2012-12/agile-in-the-small
It contains an awesome quote:
"...most best practices are just crutches for having a heterogeneous skill mix in one’s team."
Please hold that quote in mind while I turn to the figures recently released by WhiteHat Security.
They say that 39% of their clients use some sort of source code analysis on their webapps. These customers experience (probably meaning 'discover') more vulnerabilities, resolve them *more slowly* and have a *worse* remediation rate.
Why is this? If you have ever been a customer of an SCA salesman, then you know. The pitch goes like this:
"All you need to do is run our magic tool with this 'best practice' configuration and fix all the results. The tool does not require a person who understands app security to be involved. It's like a tester in a box. Even better, just use the 'OWASP top 20' configuration (I call it the 'fake silver bullet'); it's what everyone else does."
The typical outcome: the tool finds a large amount of rather unimportant noise and rates the issues overly high 'just in case'. Developers get tired of fixing these often nonsensical results. You'd be amazed how many people run SCA (or web scanners) with the default config and then forward the results to developer teams, their own or a third party's. Eventually, the person running the magical scanner starts being treated as the boy who cried wolf once too often.
Now, this is *not* a post against static analysis. Static analysis can be an awesome tool for vulnerability research, especially for C/C++ code (although everyone seems to be 'fuzzing kernels' instead) and maybe even for web apps. That is, if the tool you've got is capable of being used as a research helper, not a checkbox filler.
Unfortunately, the reaction of SCA salesmen to such a request (not all of them, but many) is usually: "You want what? Write your own rules? And drill down on results? And specify sanitisers and stuff? Crazy! Let me find someone back at headquarters who knows what you're talking about…"
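To be clear about what I mean by a 'research helper': sometimes it is nothing more exotic than a custom query over your own codebase. Here is a toy sketch using Python's ast module; the shell=True check is just an example I picked for illustration, not a rule from any product:

```python
#!/usr/bin/env python3
# A toy "research helper": a hand-written query over a Python codebase.
# The shell=True check is only an illustration of writing your own rule.
import ast
import sys

class ShellTrueFinder(ast.NodeVisitor):
    def __init__(self, path):
        self.path = path

    def visit_Call(self, node):
        # Flag any call that passes shell=True, e.g. subprocess.run(cmd, shell=True)
        for kw in node.keywords:
            if kw.arg == "shell" and isinstance(kw.value, ast.Constant) and kw.value.value is True:
                print(f"{self.path}:{node.lineno}: call with shell=True")
        self.generic_visit(node)

for path in sys.argv[1:]:
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    ShellTrueFinder(path).visit(tree)
```

Save it as, say, find_shell_true.py, point it at the files you care about, and you get a short, readable list produced by logic you wrote yourself and fully understand - which is the whole point.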
Very often, a few simple scripts involving minimal lexing/parsing, written for *your* specific web app (even without getting into ASTs, solvers and data taint analysis), can be far more useful for finding old issues and preventing new ones. Especially if they run as commit hooks in your git repo - something along the lines of the sketch below.
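Here is a minimal sketch of the kind of pre-commit hook I mean. The rules, file filter and messages are made-up placeholders for whatever *your* app actually needs to watch for, not recommendations:

```python
#!/usr/bin/env python3
# A sketch of an app-specific pre-commit check. The rules below are made-up
# examples; real ones would encode the sinks and past mistakes of *your* app.
import re
import subprocess
import sys

# Hypothetical rules: pattern -> why a match should block the commit
RULES = {
    re.compile(r"\beval\s*\("): "eval() on request data has bitten us before",
    re.compile(r"\bexecute\(\s*f[\"']"): "f-string SQL - use bound parameters",
    re.compile(r"\|\s*safe\b"): "template autoescaping disabled, needs review",
}

def staged_files():
    """Paths staged for this commit (added/copied/modified)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [p for p in out.stdout.splitlines() if p.endswith((".py", ".html"))]

def main():
    findings = []
    for path in staged_files():
        try:
            # For simplicity this scans the working-tree copy, not the index.
            lines = open(path, encoding="utf-8", errors="replace").read().splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, 1):
            for pattern, reason in RULES.items():
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: {reason}\n    {line.strip()}")
    if findings:
        print("Commit blocked by app-specific checks:")
        print("\n".join(findings))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Drop it into .git/hooks/pre-commit (or wire it into whatever hook manager you use) and the commit is refused whenever a staged file matches one of *your* rules.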
Back to the 'best practices' quote - if you are a software vendor and you want to get real benefits from commercial SCA tools (I do not count compliance among the benefits), do two things: hire someone with a clue (about app sec, about development, about SCA) and get a tool which has configurable rules.
Otherwise don't even bother. It will be about as effective as, and much more expensive than, running an IDS or AV.