Friday, July 24, 2009

+/- 40% accuracy and we think it's good?

I was caught by surprise while reading Matthew Rosenquist's post on the IT@Intel blog by this information about the OCTAVE methodology:

"I have observed the accuracy to be +/- 40% in complex organizations.  I believe this is largely due to multiple tiers of qualitative-to-quantitative analysis and the bias introduced at each level.  Credible sources have expressed a better +/- 20% accuracy for smaller implementations."
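A back-of-the-envelope way to see how this could happen (my own illustration, not from Matthew's post): if each tier of qualitative-to-quantitative translation adds a modest relative distortion, the worst case compounds multiplicatively with the number of tiers. The ~12% per-tier figure below is an assumed value chosen only to show the shape of the effect:

```python
# Hypothetical illustration: each tier of qualitative-to-quantitative
# analysis adds an independent relative distortion; chained tiers compound.
PER_TIER_ERROR = 0.12  # assumed ~12% distortion per analysis tier

def combined_error(tiers: int, per_tier: float = PER_TIER_ERROR) -> float:
    """Worst-case relative error after chaining `tiers` lossy translations."""
    return (1 + per_tier) ** tiers - 1

print(f"2 tiers: +/-{combined_error(2):.0%}")  # smaller implementation
print(f"3 tiers: +/-{combined_error(3):.0%}")  # complex organization
```

With that assumed per-tier figure, two tiers land near +/-25% and three tiers near +/-40%, roughly the range Matthew reports between small and complex implementations.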

Even though Matthew defends the use of the methodology, these numbers are striking to me. I cannot see how a methodology with this level of accuracy can be much better than some quick and dirty threat and impact assessment, at least as supporting input for defining a security strategy.

I have always been a big fan of risk-based methodologies and frameworks like ISO 27002. However, they all seem to suffer from a "first steps syndrome": they are extremely hard to put in motion, and it takes a long time before they start to be effective. Eventually, after a couple of years, you'll start to get some good results. But until you get there you're probably exposed, with serious gaps in your security posture.

This is not just a matter of fixing the urgent gaps first and then starting everything "the right way". The gap fixing will become never-ending firefighting and will suck up the time and resources needed for the big stuff. What we need is a way to reach a desirable end state through a series of actions that solve immediate issues while staying on the path toward that end state. How is that possible?

I'm still not sure, but I'm trying to put something together along those lines. It would include:

  • More prescriptive directions (like PCI-DSS)
  • Quick and dirty, fact-based threat assessment
  • Action prioritization based on immediate outcome, reach (in terms of threats and assets covered), and increasing value over time
  • Outcome-based metrics
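The prioritization bullet could be sketched as a simple weighted score. This is only a rough illustration of the idea; the field names, scales, and weights are my own assumptions, not part of any methodology:

```python
from dataclasses import dataclass

# Hypothetical sketch of the prioritization idea above; weights and
# field names are assumptions for illustration only.
@dataclass
class Action:
    name: str
    immediate_outcome: float  # 0-10: risk removed right away
    reach: float              # 0-10: breadth of threats/assets covered
    value_growth: float       # 0-10: how much its value grows over time

def priority(a: Action, w=(0.5, 0.3, 0.2)) -> float:
    """Weighted score; higher means do it sooner."""
    return w[0] * a.immediate_outcome + w[1] * a.reach + w[2] * a.value_growth

actions = [
    Action("Patch internet-facing servers", 9, 6, 3),
    Action("Deploy log aggregation", 4, 8, 9),
    Action("Build full asset inventory", 3, 9, 8),
]
for a in sorted(actions, key=priority, reverse=True):
    print(f"{priority(a):.1f}  {a.name}")
```

Weighting immediate outcome highest reflects the point above: each step must pay off now while still moving toward the end state.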
