Monday, 26 November 2012

Modern Web application security: Facebook, Twitter, Etsy

In the past six months, at least three big modern webapp companies have published details of how they do application security. Etsy was the first, with Twitter and Facebook following close behind. The presos are at:

Etsy: http://www.slideshare.net/zanelackey/effective-approaches-to-web-application-security
Twitter: http://www.slideshare.net/xplodersuv/putting-your-robots-to-work-14901538
Facebook: http://www.slideshare.net/mimeframe/ruxcon-2012-15195589

Despite small differences caused by the frameworks and technologies these companies use, they all do the same set of things:

Code reviews

The security team does regular code reviews, and does them in a smart way. They have set up triggers for reviews, automated by unit tests or simple grep scripts that watch (D)VCS commits, e.g. in git. These scripts monitor for two kinds of changes:
  1. Any change to "important" files. These are usually the parts of the app source code that deal with CSRF protection, encryption, login, session management, or XSS encoding.
  2. Any new instances of "potentially nasty" snippets of code anywhere in the code base. These include the introduction of file system operations, process execution, or HTML decoding calls.
The above can be mixed and matched in a number of ways. For example, one can also monitor for any new URI endpoints (this can also be done via dynamic scanning, see below), or for people explicitly disabling the automatic CSRF or XSS protections, if you have such protections in place.
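As a rough illustration, here is a minimal sketch of such a commit-watching check. The list of "important" files, the "nasty" patterns and the notification step are all made up for the example; they are not taken from any of the presos.

```python
#!/usr/bin/env python
"""Minimal sketch of a commit-watching review trigger (illustrative only)."""
import re
import subprocess

# Hypothetical lists -- tune these to your own code base.
IMPORTANT_FILES = ["lib/csrf.py", "lib/session.py", "lib/encoding.py"]
NASTY_PATTERNS = [
    r"\bos\.system\(",     # process execution
    r"\bsubprocess\.",     # process execution
    r"\bopen\(",           # new file system access
    r"html_unescape\(",    # HTML decoding
    r"csrf_exempt",        # opting out of CSRF protection
]

def changed_files(commit):
    out = subprocess.check_output(
        ["git", "show", "--name-only", "--pretty=format:", commit])
    return [f for f in out.decode().splitlines() if f]

def added_lines(commit):
    out = subprocess.check_output(["git", "show", "--unified=0", commit])
    return [l[1:] for l in out.decode().splitlines()
            if l.startswith("+") and not l.startswith("+++")]

def review_reasons(commit):
    reasons = []
    for f in changed_files(commit):
        if f in IMPORTANT_FILES:
            reasons.append("touches important file: %s" % f)
    for line in added_lines(commit):
        for pat in NASTY_PATTERNS:
            if re.search(pat, line):
                reasons.append("introduces %s: %s" % (pat, line.strip()))
    return reasons

if __name__ == "__main__":
    for reason in review_reasons("HEAD"):
        print("flag for security review:", reason)  # or mail the security team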

Dynamic scans

The security teams have set up a number of web bots that periodically scan their apps for "simple" security issues.

NB: Do not use commercial scanner monsters: they are geared to produce as many results as (inhumanly) possible and are much more willing to accept false positives in order to reduce false negatives. In other words, they would rather alert on 10 "possible" issues that turn out to be non-issues than miss one. The sad part is that they still miss a lot anyway.

What you (and everyone, unless they are paid by the weight of the report) need is minimal false positives, even at the cost of missing a number of things. Some mathematical reasoning behind the idea can be gathered from the 1999 paper The Base-Rate Fallacy and its Implications for the Difficulty of Intrusion Detection by Axelsson, who calculated that the IDS's false positive rate should be 10^-5 (yes, 1/100 000) in order for the alerts to be actionable in a high-traffic environment.
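To see why, here is a back-of-the-envelope Bayes calculation. The numbers (one genuinely malicious request in 100 000, a 70% detection rate) are made-up illustrations, not figures from Axelsson's paper.

```python
# Base-rate fallacy, back of the envelope (illustrative numbers).
base_rate = 1e-5      # fraction of requests that are actually malicious
detection = 0.7       # P(alert | malicious)

for fp_rate in (1e-3, 1e-5):   # P(alert | benign)
    p_alert = detection * base_rate + fp_rate * (1 - base_rate)
    precision = detection * base_rate / p_alert
    print("FP rate %g -> %.1f%% of alerts are real" % (fp_rate, 100 * precision))

# FP rate 0.001 -> ~0.7% of alerts are real (analysts drown in noise)
# FP rate 1e-05 -> ~41% of alerts are real (worth looking at)
```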

All three companies use scanner bots to monitor for regressions ("hey, we fixed an XSS here, let's make sure it does not reappear"), to detect new URIs (if they do not detect them in the source code), and for other similar tasks; check their presos for details.
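A regression check of this kind can be as simple as replaying the original attack and asserting that the payload now comes back encoded. The sketch below uses only the standard library and a hypothetical URL and parameter; it is not how any of the three companies actually implement their bots.

```python
# Sketch of an XSS regression check (hypothetical endpoint and parameter).
import urllib.parse
import urllib.request

PAYLOAD = '"><script>alert(1)</script>'
URL = "https://example.com/search?q=" + urllib.parse.quote(PAYLOAD)

def xss_regressed(url, payload=PAYLOAD):
    body = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    # If the payload is reflected verbatim, the earlier fix has regressed.
    return payload in body

if __name__ == "__main__":
    if xss_regressed(URL):
        print("ALERT: previously fixed XSS is back at", URL)
```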

Secure by default

They developed their own, or adopted existing, "secure by default" frameworks (and it is a good idea for everyone else to do the same). These frameworks are nothing grand - they achieve simple but important outcomes: automatic output encoding against XSS, automatic assignment of CSRF tokens, and so on. Remember the code monitoring scripts from earlier? They trigger a security review if any of these protections is disabled or opted out of on a specific page.
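For flavour, a minimal sketch of what "secure by default" can mean in practice: a template helper that escapes everything unless the developer explicitly asks for raw output, so that the explicit opt-out is exactly what the commit-watching scripts grep for. The helper names are invented for the example.

```python
# Minimal "secure by default" output helper (illustrative, not from the presos).
import html

def render(value):
    """Default path: everything written to a page is HTML-escaped."""
    return html.escape(str(value), quote=True)

def render_unsafe_raw(value):
    """Explicit opt-out; its very name is what the commit monitor greps for."""
    return str(value)

print(render('<script>alert(1)</script>'))  # &lt;script&gt;alert(1)&lt;/script&gt;
```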

Security headers

Headers such as
  • X-Content-Type-Options
  • X-XSS-Protection
  • CSP headers 
have gained popularity. They require care to implement, and here the approaches differ; see the original presos.
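As an example of how little code the first two headers need, here is a sketch of a WSGI middleware that adds them to every response; the middleware itself is made up for illustration and is not from any of the presos.

```python
# Sketch: adding security headers to every response via WSGI middleware.
SECURITY_HEADERS = [
    ("X-Content-Type-Options", "nosniff"),
    ("X-XSS-Protection", "1; mode=block"),
]

class SecurityHeaders(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def start(status, headers, exc_info=None):
            return start_response(status, headers + SECURITY_HEADERS, exc_info)
        return self.app(environ, start)

# Usage: application = SecurityHeaders(application)
```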

A nice touch is deploying CSP policies in monitoring mode, without blocking anything, and analysing the resulting alerts (slide 45 in the Facebook deck). Applying CSP in blocking mode to a large existing app is a huge task and is unlikely to gain traction with your developers. The CSP candidate spec says:
Content Security Policy (CSP) is not intended as a first line of defense against content injection vulnerabilities. Instead, CSP is best used as defense-in-depth, to reduce the harm caused by content injection attacks.
There is often a non-trivial amount of work required to apply CSP to an existing web application. To reap the greatest benefit, authors will need to move all inline script and style out-of-line, for example into external scripts, because the user agent cannot determine whether an inline script was injected by an attacker.
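The report-only deployment boils down to sending the Content-Security-Policy-Report-Only header with a report-uri directive and collecting the JSON violation reports browsers post back. A rough sketch, with a made-up policy and endpoint (and ignoring the vendor-prefixed header variants browsers used at the time):

```python
# Sketch: CSP in report-only mode (policy and endpoint are illustrative).
POLICY = "default-src 'self'; report-uri /csp-report"

def add_csp_report_only(headers):
    # Prefixed variants (X-Content-Security-Policy, X-WebKit-CSP) omitted here.
    headers.append(("Content-Security-Policy-Report-Only", POLICY))
    return headers

# The browser POSTs JSON violation reports to /csp-report, e.g.
# {"csp-report": {"blocked-uri": "http://evil.example",
#                 "violated-directive": "default-src 'self'"}}
# Log them, graph them (see below), and fix or whitelist the offenders
# before switching to blocking mode.
```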

Graphing stuff

There are many things that can be graphed, or at least thresholded, with security benefits - CSP alerts, an increase in traffic containing HTML, the number of failed login attempts,...
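Counters of this kind are nearly free to instrument with a statsd-style metric pipeline, whose wire format is just a UDP datagram of the form name:value|c. A sketch, with made-up metric names:

```python
# Sketch: firing a statsd-style counter when something security-relevant happens.
import socket

_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def incr(metric, value=1, host="127.0.0.1", port=8125):
    # statsd wire format for a counter: "<name>:<value>|c"
    _sock.sendto(("%s:%d|c" % (metric, value)).encode(), (host, port))

# Things worth counting and alerting on (metric names are made up):
# incr("security.login.failed")
# incr("security.csp.report")
# incr("security.requests.html_in_params")
```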

Summary

All of the measures in this post help the security team and let it deliver the most bang for the buck. My next post will be on how to use similar tools for security "evangelism" (I try to avoid this word, it is misleading), or "getting those developers to not release vulnerable software".