Saturday, 16 November 2013

Implicit security and waste-cutting

I have a nagging feeling that I've read some of this somewhere a long time ago. Probably in @mjranum's papers.

Imagine you have a product or a process. It kind of works, although you can already see the ways you can improve it by cutting some stuff here and there and changing the way it operates.

A random example: an old paper-based process in a company (say, a telco) is replaced with a nice workflow done entirely on a web site. All information is available there to anyone immediately, all staff are happy, costs decrease, productivity increases 10-fold and so on. The process is concerned with customer contracts, so the information is kind of sensitive.

Bad hackers discover the web site, dump all the data, identity theft sky-rockets, the telco is massively fined and everyone is sad. Downsizing occurs, and so on.

Now what happened here? The old process was slow, cumbersome, and unproductive. At the same time it had a number of implicit security measures built in, simply by the nature of its being paper-based. In order to steal a million paper contracts, one has to break into the telco's facility, plus have a big van to haul all this stuff out. The loss would be immediately discovered (photocopying is not an option, given the time constraints of a heist).

Designers of the new process did not identify these implicit measures or requirements because nobody thought about them. After all, the measures were implicit.
Some of the cost savings of the redesign came from (unintentionally) dropping these implicit requirements or measures.

Why am I writing about this? As a reminder: when you are putting together a new project or re-engineering a process, check whether you forgot to implement security that you did not even know was there. Stop for a moment and think about what weaknesses your product or process has, what or who may exploit these weaknesses, and what the results will be. The "what" could be a random event, not necessarily a malicious person. In some places they call this risk management.

The funniest example is OpenSSL in Debian - http://research.swtch.com/openssl.

A less funny example is the Vodafone AU customer database project, which is more or less described above. It did have one password for all employees: http://www.smh.com.au/technology/security/mobile-security-outrage-private-details-accessible-on-net-20110108-19j9j.html

How I installed pandoc

Probably funny

Pandoc is a "swiss-army knife" utility for converting documents between various markup languages. I needed it for something, and this post describes how I managed to get it installed. In retrospect, I could have used the installer from http://code.google.com/p/pandoc/downloads/list
$ brew install pandoc
Error: No available formula for pandoc
Searching taps...
#...Googling... Turns out it's a Haskell thing?

$ brew install haskell-platform
...
==> Installing haskell-platform dependency: apple-gcc42
# ZOMG?!
...
...A GFortran compiler is also included
# ooohkay...
...

$ cabal update
...
Note: there is a new version of cabal-install available.
To upgrade, run: cabal install cabal-install

$ cabal install cabal-install
# Those strange Haskell people ^_^...
#...downloading the Internet...

$ cabal install pandoc
# FINALLY WE ARE GETTING SOMEWHERE!
#...downloading the Internet the second time...

$ pandoc
-bash: pandoc: command not found
# WTF?!

$ export PATH=$PATH:~/.cabal/bin
# DONE...
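
To make that last step survive new shells, append it to your profile (assuming bash; adjust for your shell):

$ echo 'export PATH=$PATH:~/.cabal/bin' >> ~/.bash_profile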

Thursday, 7 November 2013

How to buy code audits

In the past couple of years I've commissioned quite a few external source code audits. It turns out this task is far from simple, despite what the friendly salesmen tell you. I've collected a few thoughts and learnings based on my experience. YMMV.

First of all, all the advice you need is in @mdowd's (et al.) interview http://www.sans.edu/research/security-laboratory/article/192 (question 3). Quote:
Word-of-mouth recommendations often convey the best real-world measure of experience. To cast a wider net though, you can use publications and industry recognition as a good measure of reputation. When approaching a company, you may also want to ask for bios on the auditors likely to perform your assessment. Next, you'll want to ask for a sample report from any auditors you're considering. The quality of this report is extremely important, because it's a large part of what you're paying for. The report should be comprehensive and include sections targeted at the developer, management, and executive levels. The technical content should be clear enough that any developer familiar with the language and platform can follow both the vulnerability details and the recommendations for addressing them. You also need to get some understanding of the audit process itself. Ask if they lean toward manual analysis or if it's more tool-driven. Ask for names and versions of any commercial tools. For proprietary tools, ask for some explanation of the capabilities, and what advantages their tools have over those on the market. You also want to be wary of any process that's overly tool driven. After all, you're paying a consultant for their expertise, not to simply click a button and hand you a report. If a good assessment was that easy, all software would be a lot more secure.
Word-of-mouth is indeed the best indicator of quality. All other criteria are substitutes and make the selection inherently more risky. As with many risks, there are some things you can do to control them.

1. Beware of "bait and switch". This is a technique common in outsourcing: you are promised the best and most famous people during negotiations, but after the contract is signed, you get monkeys.

When engaging any company larger than a boutique, insist on interviewing and approving every person who is going to work on your code. Ask for hard numbers on their experience in code auditing: "How many days in the past 3 years has this person worked on code audits in <language X>?" This metric is good because there is no reason for the vendor not to share it.

Try not to deal with a company that has just acquired another entity or has just been acquired itself. These people have more important issues to sort out than your code, even if they say otherwise.

2. Your boss will probably ask you for an objective metric of progress. We all know an audit is more research than assembly-line work, but it is still possible to produce some useful metrics. Insist on specifying your own report format, one that works for you. I use a variation of the following:
  • Description of work done during the week, 1-2 pages.
    • Which sections of code or which application modules have been audited.
    • A summary of any results - positive findings (issues found and filed in our bug tracker) or negative (code looked at is ok security-wise).
    • Other work done, e.g. documentation
  • Details of each confirmed positive finding (security vulnerabilities found)
    • Description
    • Relevant places in code
    • Screenshot of successful exploitation where applicable
    • Recommendations on how to fix
    • Proposed severity rating
    • Issue number in our bug tracker if already filed.
This way you see not only findings but also coverage and the general direction of the review. After all, if nothing is found, you want some reassurance that the vendor looked in the right places.

3. Run some static/dynamic tools yourself before handing the contract to vendors. Veracode is cheap. FindBugs is free.
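
For instance, a first pass over a Java code base with FindBugs might look like this (the jar path here is hypothetical; the flags are from the FindBugs text UI):

$ findbugs -textui -effort:max -html -output findbugs-baseline.html build/myapp.jar
# keep the report as a baseline to compare against what the vendor later finds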

A very popular "manual" code review methodology seems to be "grep and gripe": running an in-house pattern matcher or a commercial tool, tracing the results to inputs, and filing bugs. This way you get tons of issues like "insufficient entropy", "DOM XSS" and other simple cruft, yet business logic issues will never be uncovered. Do reiterate to the vendor (several times) that you value logic issues, not grep results.
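
In its purest form, "grep and gripe" amounts to something like this (a caricature, but not by much; the patterns are invented for illustration):

$ grep -rnE 'Math\.random|innerHTML|printStackTrace' src/ > findings.txt
# every hit becomes an "insufficient entropy" or "DOM XSS" ticket, context unseen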

A couple of posts on related topics: One way of grepping, Some IntelliJ magic, and On automated tools.

4. Require the vendor to work with your developer team. In my experience, the very first thing the best auditors ask for is code access, and the very second - a meeting with developers, in order to verify the auditor's understanding of the code base. NB: ask developers after the meeting what they think of the auditor. If developers are not impressed, you'll probably be wasting your money. Oh, and if you're not in a good relationship with your developers, maybe look for a new job :)

Do weekly reviews of results in person or on the phone, where you and developers review and validate each finding - sometimes things do not really make sense or, on the contrary, may give rise to further exploration ideas. If you leave everything until the review is completed, more likely than not it won't be a satisfactory one.

5. All of the above should ideally be specified in a contract. As a bare minimum, get an acceptance clause in: if you do not accept the final report from the auditor, they do not get paid and have to fix it up for you for free.

Good luck :)

P.S. Someone asked for timeframes for a good audit. It's difficult to say, because no "coverage" metrics really apply. At the same time, there is anecdotal evidence that you can expect up to 8 kLoC a day to be eyeballed by an auditor (on average, on large code bases). Properly "read" is more like 800 LoC a day. On the other hand, hopefully not more than 1/10 of your code base is relevant to security, especially if it's Java with its verbosity.
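
A back-of-the-envelope example under those assumptions (the numbers are hypothetical):

400 kLoC code base, ~1/10 security-relevant   = ~40 kLoC to read properly
40 kLoC / 800 LoC "properly read" per day     = ~50 auditor-days
# sanity check at the eyeball rate: 400 kLoC / 8 kLoC per day = the same ~50 days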

Friday, 1 November 2013

Fixing VMware's pain-in-the-ass performance

A tech-ish post for a change. Just for the Googles of the world.

If you are running more than one VM on the same host, and performance is hell, you need to:
$ mkdir ~/.vmware
$ echo "sched.mem.pshare.enable = FALSE" >>~/.vmware/config
and launch VMware. This stops VMware from constantly comparing memory pages across VMs in the hope of finding identical ones it can share to fake more memory. It will use real RAM instead. Massive improvement, in my experience. You will, obviously, need more physical RAM.

Other good settings are:
MemTrimRate = 0
prefvmx.useRecommendedLockedMemSize = "TRUE"
prefvmx.minVmMemPct = "100"
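
Where these go varies by product and platform; my understanding (an assumption, check your version's docs) is that the prefvmx.* settings belong in the same per-user config as above, while MemTrimRate is a per-VM setting in the .vmx file:

$ echo 'prefvmx.useRecommendedLockedMemSize = "TRUE"' >> ~/.vmware/config
$ echo 'prefvmx.minVmMemPct = "100"' >> ~/.vmware/config
$ echo 'MemTrimRate = "0"' >> /path/to/yourvm.vmx  # with the VM powered off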


Saturday, 12 October 2013

12 steps to saner infosec

Actually, after kicking all references to $deity out of the original list, there are about six points left.

1. Admit that you cannot be in full control of your systems and networks

There will always be an NSA to break your elliptic curves, or a new zero-day in a library inside a library that you forked, modified, and then used in your code. And if you say "defence in depth", I'll ask you to show me your "perimeter".


2. Recognise that this is not a defeat

Attackers are people too, and are driven by economic motives. If it is too hard and not worth the effort, they will not go after you. Unless they want to make a point, of course.

Make breaking into your stuff not worth the effort. That is, ensure the required effort is high enough that "the bad guys" will give up.

3. Examine, with the help of others, your past efforts to "secure", "risk manage", "protect" everything to the level of "best practice" 

"Best practice" is partly management speak for "I have no idea how to deal with specifics of my business environment" and partly vendor sales pitch. Risk management is good in theory but does not work in practice for infosec, beyond very basic qualitative judgements.

Talk to others, inside your business sector and outside it. Etsy, Facebook, Twitter, and even Salesforce are doing awesome things. Talk to me, I'll buy you a beer! :)

4. Make amends for these errors (or efforts)

Don't be a business prevention specialist. Be nice to your developers, they are generally smarter than you - learn from them. Listen to your network admins, they are often more protective of their hosts than you think.

5. Learn to live a new life

Give people what they need to do their jobs and get out of the way: figure out a "secure enough" method of doing what people need without disrupting their jobs. Set yourself specific, time-limited goals and don't fall into the trap of "best practices" again (see point 3).

Make your own informed decisions. You cannot outsource understanding to consultants, whitepapers and Google.

6. Help others who suffer from the same addiction to total control

Run an exploit or two for them... Teach them about the halting problem, just because it's fun to see people realising what it entails, at least in theory. Send them a few links:
And maybe http://www.phrack.org/!

PS A vaguely related preso I gave is at http://www.slideshare.net/agelastic/security-vulnerabilities-for-grown-ups-gotocon-2012-15479294

Tuesday, 8 October 2013

What is Security Anonymous?

First of all, nothing to do with the evil Anonymous, and quite a bit to do with AA's "twelve step" program.

The awesome Spaf recently reminded everyone (excluding people who work for one of the few very awesome companies that actually have a grip on their infosec) that no-one on the "defence" side cares about security enough to seriously change the situation.

Step one in the yet-to-be-written 12-step program: admit that the "defence" side is not doing well (be honest with yourself):

Breaking things is thought to be sexier.

"User awareness" does not work.

Blinkenlights on product consoles don't give much reassurance beyond the psychological or theatrical level.

Companies that thought their security programs were running well find their source code dumped by attackers on a random web server, after being compromised for who knows how long.

Governments care mainly about how to break into their own (or other nations') citizens' computers and about backdooring crypto standards and implementations.

What's more, there is no "higher power" (see the original 12 steps) to appeal to. It's up to humble engineers who quietly do awesome stuff. I'll be posting about how others deal with their infosec challenges. No fluffy stuff, and probably no mention of "risk management," but you're welcome to convince me it works :)

What's even better, there will be drinkups! Because:

Rules–particularly the dogmatic variety–are most useful for those who aren’t confident enough to make their own damn decisions.
For the rest of us, there’s vodka–so we can cope with the decisions we were foolishly wise enough to make.
So help us, Grey Goose.
Amen.

Sunday, 22 September 2013

Words and works

A short while ago I mentioned this blog to someone, who read through the posts and then came back, saying: "Nice ideas, but did you actually implement any of this?"

Here's what we've managed to implement at work: all or most of the ideas in the following posts.

Code review tools and techniques

http://www.surrendercontrol.com/2013/05/crutches-and-static-code-analysis.html
http://www.surrendercontrol.com/2012/12/focused-code-reviews-followup.html

Application security for big web apps

http://www.surrendercontrol.com/2012/11/modern-web-application-security.html

Changing security culture

http://www.surrendercontrol.com/2012/12/changing-things-when-change-is-hard.html

Saturday, 14 September 2013

Wheels inside wheels



Reblogging from http://seclists.org/dailydave/2013/q2/38

… or, Ptolemaic model of the solar system of infosec.

Required reading: https://en.wikipedia.org/wiki/Deferent_and_epicycle

In all enterprise-y security courses they will teach you that there
are several components to defence processes:

10. If you can, try to prevent bad guys getting to you
20. If you cannot, try to detect an attempt to get in before it succeeds
30. If you cannot detect attempts, aim to detect whether you've been compromised
40. If you've been compromised, do incident response and clean up

(Imagine your enterprise assets are the Earth, and those four items are other planets orbiting it.)

When reality demonstrates that the current approach to any of these components is inadequate, the component gets updated with "smarter" technology.

What this "smarter" technology comprises changes with time, but it
always goes through stages of

1. Add more signatures, then
2. Do some sort of local behaviour analysis, then
3. "Big data" / "data mining" or similar magical words, then
4. Whatever else the market fancies

(These are equivalents of "wheels within wheels", or epicycles in Ptolemy's astronomy)

Examples:
- AV is permanently stuck on line 20 with a few epicycles, from signatures to big data, under its belt already;
- IoC (Indicators of Compromise) is line 30, only just at the beginning of its spiral.

The main takeaway here is that the defending side is, unfortunately, retreating. Those "let's clean up compromises quicker" contests Spafford was lamenting recently only illustrate this tendency further.

The other takeaway is that I love lists…

Oh and if someone comes up with a true Copernican concept of security,
please tell me. I have to be part of that!

Monday, 17 June 2013

How to feel cool, whether you're an "attacker" or a "defender"

...in which it is "scientifically" (if psychology is considered science) proven why attacking usually feels "cooler" and unqualified self-help advice is provided. No less.

A little on terms: "attacking" includes both finding vulnerabilities and creating exploits; "defending" means ensuring that attackers don't succeed.

"Cooler" here does not mean the media angle, or the pecking order inside the industry. Here I talk only about self-perception based on ability to achieve own goals.

I stumbled upon an article last year, while researching what motivation developers would have to write secure code (short summary of research results: very little). A Twitter conversation this morning gave me an idea that the same approach can be applied inside the infosec industry as well.

Here are some quotes from the article with their "infosec" interpretation:
...it matters how people frame their good intentions or goals.  
For instance, better performances are observed when people set themselves challenging, specific goals as compared with challenging but vague goals (so-called "do your best" goals). 
Let's see: "attackers" do have specific goals; "defenders" - maybe, it depends. In many organisations it is indeed "do your best".
This goal-specificity effect is based on feedback and self-monitoring advantages, as is also true for the goal-proximity effect (proximal goals lead to better performances than distal goals). 
Attackers' goals are usually closer in time: find a vulnerability in this software, get this exploit running. Defenders... their goals last as long as the company exists.
Goal attainment is also more likely when people frame their good intentions as learning goals (to learn how to perform a given task) rather than performance goals (to find out through task performance how capable one is) 
Not sure about this one; it looks the same for both "sides" to me - it depends on how you actually do your work. If your goal is "figure out how quickly I can run Nessus (or Metasploit)", then it kind of sucks either way.
...or when they frame their intentions as promotion goals (focusing on the presence or absence of positive outcomes) rather than prevention goals (focusing on the presence or absence of negative outcomes)
This one is obvious - attackers' outcomes are usually positive, while defenders are stuck with "prevention".

Now, what can you do if you think "defenders suck" and you're one of them?

Reframe your goals:

  1. Make them more specific.
  2. Make them closer in time (e.g. "we need to do X threat models in 3 months").
  3. Learn new stuff.
  4. Stop being a "prevention specialist"; find positive outcomes to pursue.
I bet this can even get you a promotion.

Sunday, 9 June 2013

How to set up an ARM VM for Open Security Training's "Intro to ARM" course

Don't you hate it when you can't follow the video and end up just staring at the slides?

Folks at http://opensecuritytraining.info/IntroARM.html published a great course about ARM assembly. Chances are, if you are watching the course, you'll want to follow along with the class' labs and examples. The speaker uses a QEMU VM, which, alas, is not provided with the course. In fact, he uses a QEMU VM inside an Ubuntu VM inside, umm, Windows something? (If you read Russian, check this out :)

Not to worry: thanks to Linaro, it's quite easy to set one up without getting involved in a complex cross-compiling project.

Here's a compressed version of how to do it. It works on Macs and Linux-es (or is it Linuces?). None of this is original; I took the info off a bunch of web pages, links below.

ARM VM setup

1. Install QEMU.

sudo apt-get install qemu qemu-system (on Linux)
brew install qemu (on Mac)

You may need to install other packages, depending on what OS you're on; different Linux flavours package QEMU differently. Check that you have the qemu-system-arm command on your system. If not, run:

apt-cache search qemu-system-arm

to see which other packages you're missing. Another utility, qemu-img, will come in handy later; check that it is present as well. Also check that your QEMU is at least v1.0, since Debian, for example, ships very old builds by default.
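
A quick check (QEMU understands the -version flag):

$ qemu-system-arm -version
# should report version 1.0 or later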

2. Now you're ready to get a Linaro image running. Here are slightly modified instructions from https://wiki.linaro.org/PeterMaydell/QemuVersatileExpress. Versatile Express is a good choice of modern ARM system to emulate, since QEMU on both Mac and Linux can emulate it.

Get image:
wget http://releases.linaro.org/images/12.03/oneiric/nano/vexpress-a9-nano.img.gz
gunzip vexpress-a9-nano.img.gz
mv vexpress-a9-nano.img vexpress.img   # the commands below expect this name

Extract kernel and initrd:

(IMG=vexpress.img ; if [ -e "$IMG" ] ; then sudo mkdir -p /mnt/mnt ; sudo mount -o loop,offset="$(file "$IMG" | awk 'BEGIN { RS=";"; } /partition 2/ { print $7*512; }')" -t auto "$IMG" /mnt/mnt; else echo "$IMG not found"; fi )

sudo cp -vr /mnt/mnt/boot .
sudo chown -R youruser:youruser boot
sudo umount /mnt/mnt

You end up with a number of files in ./boot:

abi-3.3.0-1800-linaro-lt-vexpress-a9         
config-3.3.0-1800-linaro-lt-vexpress-a9      
initrd.img-3.3.0-1800-linaro-lt-vexpress-a9
System.map-3.3.0-1800-linaro-lt-vexpress-a9
vmlinuz-3.3.0-1800-linaro-lt-vexpress-a9

Copy the original vexpress image, vmlinuz.* and initrd.* somewhere where you want to keep your VM.

Run image:

qemu-system-arm -kernel vmlinuz-3.3.0-1800-linaro-lt-vexpress-a9 -M vexpress-a9 -cpu cortex-a9 -serial stdio -m 1024 -initrd initrd.img-3.3.0-1800-linaro-lt-vexpress-a9 -append 'root=/dev/mmcblk0p2 rw mem=1024M raid=noautodetect console=tty1 rootwait vmalloc=256MB devtmpfs.mount=0' -redir tcp:5022::22 -sd vexpress.img

This should bring up a black window with a Linux penguin in the top left corner and a usual boot sequence.

3. The image is configured with root autologin; there is also a "linaro" user with password "linaro". If you wish to disable autologin, remove "/etc/init/openvt.conf" and edit "/etc/default/autogetty".

The qemu command above also maps the VM's port 22 to localhost:5022 so that, once sshd is installed, you can use scp to copy files like this:

scp -P 5022 projects.tar.gz linaro@localhost:~

4. Install tools inside the image:

apt-get install aptitude
aptitude install binutils gcc gdb build-essential vim openssh-server

...and whatever else looks missing. Beware that the linaro image comes with little free disk space.

apt-get clean

might save you a few megabytes, alternatively see next point.

5. Optional - upsize the image (make a copy first just in case):

qemu-img resize vexpress.img +512M

Then use parted or gparted (not available on Mac, unfortunately) on the image to actually resize the file system. It looks like gparted gets confused (those GUI tools!) - check out http://www.ros.org/wiki/groovy/Installation/Linaro/Source for how to deal with it.

Mac users might want to copy the .img file to a Linux box/VM and run parted there. There is probably a contrived way using fdisk and resize2fs; a sketch follows.
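
For the brave, a rough sketch of that contrived way on a Linux box (assumptions: you made a backup, and the root filesystem is partition 2 as in this image; START stands for the partition's start sector as reported by fdisk):

$ fdisk vexpress.img
# in fdisk: delete partition 2, recreate it with the same start sector but a
# larger end, then write and quit; note the start sector (START)
$ sudo losetup -o $((START * 512)) /dev/loop0 vexpress.img
$ sudo e2fsck -f /dev/loop0
$ sudo resize2fs /dev/loop0
$ sudo losetup -d /dev/loop0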

6. Even more optional and probably does not work:

apt-get install cpufrequtils

- might speed the VM up a little. Or it might not; I am not sure this utility even works in a VM environment.

Tips and errata for the course

  1. Lab 3 Interrupts is not in the projects.tar.gz :( Day 2 Part 2 video talks about it at length, though.
  2. There are answers inside the ".solutions" directory
  3. Day 2 Part 1 around 41:30 (slide 159) is incorrect (I think) about the meaning of the caret in
    LDMFD sp!, {r0-r12,lr}^
    - it means that the user-mode registers are loaded, not the banked registers

Wednesday, 22 May 2013

Vim for code reading

GUI-based IDEs, when properly set up, are nice tools for reading code. SourceInsight is probably the best, based on its combination of efficiency, quality, and price.

What if you find yourself with only a (colour) text console? I've put together a small .vimrc to make reading code a nicer experience in those situations.

If you are new to Vim, check out the README in https://github.com/agelastic/vim-reading for basic commands. Those are all you need to read and navigate any code in Vim.

.vimrc in that repo does the following:
  • Ensures we are running a vim (and not vi) in a colour console
  • Turns on search highlighting and 'search as you type'
  • Sets detailed status lines
  • Turns on code folding and makes initial state 'unfold all', which I like more than the default 'fold all on opening file'
  • Turns on saving some info (viminfo) when exiting a file
  • Lets you use Tab for vim command completion
  • Maps Shift-Up and Shift-Down to move between Vim windows
  • Maps Ctrl-X to "exit vim"
There are several plugins that look useful for this task, but I'm not adding them to keep the config lightweight.
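
If you'd rather cherry-pick than clone, the flavour of the settings is roughly this (a hand-written sketch, not the repo's exact file):

set nocompatible                " vim, not vi
syntax on
set hlsearch incsearch          " highlight searches, search as you type
set laststatus=2                " always show a detailed status line
set foldmethod=syntax           " fold on syntax...
set foldlevelstart=99           " ...but start with everything unfolded
set viminfo='20,<50             " save some info on exit
set wildmenu                    " Tab completion for : commands
" Shift-Up/Shift-Down move between windows; Ctrl-X exits
nnoremap <S-Up> <C-w>k
nnoremap <S-Down> <C-w>j
nnoremap <C-x> :qa<CR>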

If you want to explore those plugins, here's a bunch of links:

Sunday, 12 May 2013

Crutches and static code analysis


First this was going to be a blog post, then a DD (Daily Dave) post, then a blog post again...

A while ago I read an article, absolutely not about security, about how great it is to work in small friendly teams: http://pragprog.com/magazines/2012-12/agile-in-the-small

It contains an awesome quote:
"...most best practices are just crutches for having a heterogeneous skill mix in one’s team."
Please hold that quote in mind while I turn to the figures recently released by WhiteHat Security. They say that 39% of their clients use some sort of source code analysis on their webapps. These customers experience (probably meaning 'discover') more vulnerabilities, resolve them *slower*, and have a *worse* remediation rate.

Why is this? If you have ever been a customer of an SCA salesman, then you know. Their pitch goes like this:

"All you need to do is to run our magic tool with this 'best practice' configuration and fix all results. The tool does not require a person who understands app security to be involved. It's like a tester in a box. Even better, just use "OWASP top 20" (I call it "fake silver bullet") configuration, this is what everyone else does."

Typical outcomes: the tool finds a large amount of rather unimportant noise and rates the issues overly high, just in case. Developers get tired of fixing these often nonsensical results. You'd be amazed how many people run SCA (or web scanners) with the default config and then forward the results to developer teams, their own or a third party's. Eventually, the person running the magical scanner starts being treated as the boy who cried wolf too often.

Now, this is *not* a post against static analysis. Static analysis can be an awesome tool for vulnerability research, especially for C/C++ code (although everyone seems to be 'fuzzing kernels' instead) and maybe even in web apps. That is, if the tool you've got is capable of being used as a research helper, not a checkbox filler.

Unfortunately, the reaction of SCA salesmen (not all, but many) to such a request is usually: "You want what? Write your own rules? And drill down on results? And specify sanitisers and stuff? Crazy! Let me find someone back at headquarters who knows what you're talking about…"

Very often, a few simple scripts involving minimal lexing/parsing, written for *your* specific web app (even without getting into ASTs, solvers and data taint analysis) can be way more useful in finding old and preventing new issues. Especially if they are run as commit hooks in your git repo.
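
For example, a hypothetical git pre-commit hook (saved as .git/hooks/pre-commit and made executable); the banned-API patterns here are invented, the point is that they are *yours*:

#!/bin/sh
# refuse commits that introduce calls this app must never make directly
files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.java$')
[ -z "$files" ] && exit 0
if echo "$files" | xargs grep -nE 'Runtime\.exec|createStatement'; then
    echo "Commit blocked: banned API found, use the approved wrappers" >&2
    exit 1
fi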

Back to the 'best practices' quote - if you are a software vendor and you want to get real benefits from commercial SCA tools (I do not count compliance among the benefits), do two things: hire someone with a clue (about app sec, about development, about SCA) and get a tool which has configurable rules.

Otherwise don't even bother. It will be about as effective as, and much more expensive than, running an IDS or AV.

Saturday, 30 March 2013

Scams in security testing

Dedicated to people who submit Web scanner results to their software vendors.

A while ago I stumbled upon a book on software testing. Not security testing, mind you, just plain normal software testing: Perfect Software and Other Illusions About Software Testing, by my favourite "techie" author Gerald Weinberg. It's a great read for app security folks, as long as you are capable of making basic domain substitutions.

My favourite chapter in the book is "Testing scams", where the author follows up his earlier discussion of fallacies in testing with a list of outright scams by vendors promising to sell magic testing tools. He says:
"Here's the secret about tools: Good tools amplify effectiveness. If your testing effectiveness is negative, adding tools will only amplify the negativity. Any other claim a tool vendor makes is, most likely, some kind of scam."
I made a short summary of this chapter, with examples from the security testing domain (mostly web, "dynamic", and source code, "static", scanners). Text in quote marks is from the book, apart from the obvious phrases.

1. "Tool demonstration is a scam" - where you are shown a perfect demo, of a scanner running on WebGoat. And when you try it on your own code, the scanner explodes with junk. Of course the demo was carefully designed and tuned to produce the best impression.

Subtype 1a: You are not allowed to do your own PoC without the vendor's helpful supervision. At the least, they will give you a spreadsheet with criteria to compare their product against others.

Note: If you are not capable of conducting your own PoC without asking vendors for help, you should not be buying any of those tools in the first place.

2. "With all these testimonials, it must be good" - where there isn't even a demo, but you get a pile of whitepapers with pseudo-test results (comparisons, certifications, endorsements). These docs usually "appear to contain information, but ... only identify charlatans who took a fee or some payment [in kind] for the used of their names".

As a test, try requesting personal testimonials from the customers involved, and ideally see how they use the tool. If the vendor cannot produce a single customer who is excited enough about the product to show you how wonderful it is, it's a crap tool.

3. "We scam you with our pricing" - where the vendor creates cognitive dissonance among previously scammed people. As a result, despite the expensive purchase being a failure on all levels, from purchasers to end users, they keep this fact to themselves.

Subtype 3a: "[is] discrediting competitive tools with disingenuousness, suggesting, 'With a price so low, how could those tools be any good?'"

4. "Our tool can read minds" - where the tool is presented as a complete replacement of a security specialist - testing even better than a human, and not requiring any human post-processing. In personal experience, this is one of the most common scams in security testing market, with scam #1 being the next most popular, and #3 and #4 reserved for very pricey tools (you know who you are).

The belief that a magic app-security silver bullet exists runs so deep that when the tool (or the "cloud" service) quickly fails to deliver on its promises, the customer concludes that this was his or her own mistake and that another magic mind-reading service exists elsewhere. Rinse and repeat.

Note: There is no silver bullet. Artificial Intelligence is hard. Turing was right.

5. "We promise that you don't have to do a thing" - where a "cloud" service promises that all the customer has to do is to point it to the web app, and the service will spit out actionable results with no false positives and no false negatives. Less experienced security teams or managers fall for this one quite often, since many of the services come with a promise of manual postprocessing of results by the most experienced analysts in the world (or close to that). Where this fails is lack of context for the testing. The vendor does not know your code, they do not know your context in terms of exploitability, mitigating controls, or impacts. They do not know what technologies and methodologies your developers use. What usually comes out of such services is slightly processed results of a scanner with generic settings.

Subtype 5a: when the service vendor does the old "bait and switch" between the personnel involved in the sales process (gurus) and those you get once you pay (button pushers with little to no experience, in a cheap outsourced location).

Still with me? Here's the summary:

If someone promises you something for nothing, it is a scam (or they are Mother Teresa, that is, not a business person). Even if you are promised a magic tool in exchange for a lot of money (scam 3 above), it is still a promise of something for nothing.

It is impossible to do good security testing other than by employing (in one way or another) people who know the context of your environment and code or who are willing to learn it.

Thursday, 14 March 2013

Medievalism in infosec

Dedicated to the last pope.

In my quest to understand the elusive American puritanist psyche, I've been reading up on the origins and history of Christianity recently.

As a side note: the original biblical languages are so much fun. Not only is nobody quite sure which tense in Biblical Hebrew is past and which is future, but even when the meaning is obvious, translations do so much moralising, sweeping all the blood, sex, and genocide of the Old Testament under the carpet.

Example: did you notice how many times a woman fiddles with (uncovers, kisses, touches, etc.) a man's feet in the OT? But never the other way around or, God forbid, a man with a man's? It turns out "feet" is a euphemism :)

Anyhow, this post is about striking parallels between some old religious metaphors and the modern "cybersecurity" ones.

Infosec thinking as Judaism of 1st century BCE

A quote from a very respected Biblical scholar:

"Apocalyptic eschatology" is ... centering in the belief that  
(1) the present world order, regarded as both evil and oppressive, is under the temporary control of Satan and his human accomplices, and
(2) that this present evil world order will shortly be destroyed by God and replaced by a new and perfect order corresponding to Eden before the fall.  
During the present evil age, the people of God are an oppressed minority who fervently expect God, or his specially chosen agent the Messiah, to rescue them. The transition between the old and the new ages will be introduced with a final series of battles fought by the people of God against the human allies of Satan. The outcome is never in question, however, for the enemies of God are predestined for defeat and destruction. The inauguration of the new age will begin with the arrival of God or his accredited agent to judge the wicked and reward the righteous, and will be concluded by the re-creation or transformation of the earth and the heavens. This theological narrative characterized segments of early Judaism from ea. 200 BCE to ea. 200 CE"
Let's change some words:

  • 'World order' => information systems
  • 'Satan and his human accomplices' => evil hackers, APT!!1!
  • 'God' => well, I guess, it stays?
  • 'The people of God' => various infosec consultants, from Mandiant to the forthcoming 13 tribes of the US cyberdefense.

You get my drift. There is also an awesome document called The War Scroll, describing the final fight of Sons of Light and Sons of Darkness... then there are Gnostics... There is a PhD in this somewhere.

Infosec industry as pre-reformation Catholic church

No need for extensive quotes here; Catholicism is a popular topic this week. Just one example: exorcisms.

A typical exorcism: a person was apparently possessed by an invisible evil spirit who caused all kinds of trouble (The Exorcist is a fine movie, watch it); a licensed exorcist priest was called, who did strange things, expelled the immaterial possessor from the victim's body, collected a fee, and told the victim to install a fountain of holy water, pray, and not sin any more.

People sinned, confessed, sinned again...

The priests themselves were idealistic young men or hypocritical old farts who did not practice what they preached.

Substitution table:

  • 'Satan', as before => 'bad hackers'
  • 'possessed human' => 'infiltrated company'
  • 'exorcists' => security consultants, especially DFIR type
  • 'sins' => "bad" security practices
  • 'confession' => audit or pentest, perhaps.

I could go on, but it's time to wrap up, since we ought to celebrate: according to Iranian sources, Habemus Papam has been elected the new pope.