That was a real Firestarter!
This week began with some real fun caused by Mike Rothman's Firestarter post, "Risk Metrics Are Crap". A lot of people jumped up and down on it, and there have been plenty of sharp complaints about his point of view. Well, I must say I completely agree with what he is saying in that post, and I also think that most of those criticizing it haven't really understood what he's trying to say.
But what's the real problem with risk metrics? IMHO, it's twofold. First, those numbers are too hard to obtain. Hey, before someone throws the DBIR at me: I'm talking about all the data required to build a reasonable risk metric. There's plenty of data out there for several aspects, but almost never everything you'll need for a good enough number (and I'm not even talking about laser precision). I'll even make Mr. Hutton mad by mentioning the Black Swan effect :-). This problem is often attributed to the challenge of estimating the likelihood of incidents, but I don't think that's where it lies. We can derive a lot of reasonable information from the Verizon report for that. We can see that zero-days are not the monster that some FUD-based vendors want you to believe, and it's quite obvious that custom malware and SQL injection attempts will knock on your door. What is really challenging in generating a good risk number is the impact assessment. I don't even need to talk about ALE; the key issue is predicting how a single incident will impact your business. Think Heartland, HBGary, the WikiLeaks cables: the impact of those breaches was probably far beyond anything those organizations had anticipated.
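To see why the impact side dominates, here's a back-of-the-envelope sketch of the classic ALE arithmetic (ALE = SLE x ARO, Single Loss Expectancy times Annualized Rate of Occurrence). All the numbers below are invented for illustration:

```python
# Classic ALE arithmetic: ALE = SLE * ARO.
# The numbers below are invented for illustration only.

aro = 0.3  # estimated incidents per year; halfway defensible from breach data

# The impact side is where the guesswork lives. A "reasonable" SLE range
# for the very same incident can easily span orders of magnitude:
sle_low = 50_000       # cleanup costs only
sle_high = 50_000_000  # Heartland-style fallout: fines, churn, lawsuits

print(f"ALE (low impact estimate):  ${aro * sle_low:,.0f}")
print(f"ALE (high impact estimate): ${aro * sle_high:,.0f}")
# Three orders of magnitude apart: the "risk number" is dominated by
# whichever impact guess you typed in, not by the likelihood data.
```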
Second, context and scope. Those aspects vary so much that it's almost impossible to apply the same data in two different assessments. Here's just a short list of things that will make likelihood and impact differ for the same issue across two organizations: size, demographics (different countries, for example), line of business, public image, technologies in use, internal organization and IT maturity, the adversaries involved. As for scope: with the level of interconnectivity and interdependency in organizations today, how can we clearly determine the scope of a risk number? How much can the numbers from "adjacent scopes" interfere with yours? That's extremely hard to tell.
Please note that I didn't say at any point that it's impossible to get those numbers. I said "almost impossible", and that's the catch. In theory the numbers are fine, but I believe the cost of getting good enough numbers will almost always far exceed the real benefit.
One of the most important points in Mike's post is that he's not saying that metrics are crap. He is saying that RISK metrics are crap. He even says that security metrics are important:
"BTW, I’m not taking a dump on all quantification. I have always been a big fan of security (as opposed to risk) metrics. From an operational standpoint, we need to measure our activity and work to improve it. I have been an outspoken proponent of benchmarking, which requires sharing data (h/t to New School), and I expect to be kicking off a research project to dig into security benchmarking within the next few weeks."
Mike also says that it's important to assess risk. He clearly states that in the post:
"That said, I do think it’s very important to assess risk, as opposed to trying to quantify it. No, I’m not talking out of both sides of my mouth. We need to be able to categorize every decision into a number of risk buckets that can be used to compare the relative risk of any decision we make against other choices we could make. For example, we should be able to evaluate the risk of firing our trusted admin (probably pretty risky, unless your de-provisioning processes kick ass) versus not upgrading your perimeter with a fancy application aware box (not as risky because you already block Facebook and do network layer DLP)."
That's the main point: let's focus on the decision process, not on generating risk metrics. Risk metrics are usually created to support decision making about security investments and prioritization, but it's quite limiting to assume those decisions can only be made from absolute numbers. Executives (in fact, almost all professionals) make decisions without them all the time. I'm not saying we shouldn't try to do evidence-based decision making; I'm saying that creating fantasy metrics disguised as evidence doesn't work.
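To make Mike's bucket idea concrete, here's a toy sketch of comparing relative risk without producing a single absolute number. The decisions and their bucket assignments are invented for illustration:

```python
from enum import IntEnum

# Toy sketch of bucket-based relative risk comparison.
# The decisions and bucket assignments below are invented for illustration.

class RiskBucket(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

decisions = {
    "fire the trusted admin (weak de-provisioning)": RiskBucket.HIGH,
    "postpone patching internet-facing servers":     RiskBucket.MEDIUM,
    "skip the app-aware perimeter upgrade":          RiskBucket.LOW,
}

# A relative ordering is enough to prioritize; no absolute number needed.
for decision, bucket in sorted(decisions.items(), key=lambda kv: -kv[1]):
    print(f"{bucket.name:>6}: {decision}")
```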
Decision making can use many factors besides risk metrics. It can draw on those good security metrics to prioritize threats and to gauge the efficiency and effectiveness of controls. The cost of controls and measures (obvious, obvious, obvious), how easily they integrate with the organization's IT environment, and resiliency (that's extremely important: how well does the control adapt to exceptions and changes?) are just some of the factors that can be considered.
Again, not all decisions are number-based! Look at how many business decisions are made with simple tools such as a SWOT analysis. Not to mention the more elaborate techniques you can find.
(We can even stick with a numeric approach, applying some number-crunching magic powder if we have enough data (VERIS will help us with that), but even that doesn't mean we'll generate the magic "RISK" number.)
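If you want a taste of what that number crunching looks like, here's a toy Monte Carlo sketch. The frequency and impact parameters are pure invention, not real VERIS data; the point is that what comes out is a spread, not a single magic number:

```python
import random

# Toy Monte Carlo sketch with invented parameters -- NOT real VERIS data.
# Even a fully "numeric approach" yields a loss distribution, not one number.

random.seed(42)
SIMULATIONS = 100_000

losses = []
for _ in range(SIMULATIONS):
    # ~1.2 incidents per simulated year (10% chance in each month).
    incidents = sum(1 for _ in range(12) if random.random() < 0.10)
    # Impact is heavy-tailed: most incidents are cheap, a few are ruinous.
    year_loss = sum(random.lognormvariate(11, 2) for _ in range(incidents))
    losses.append(year_loss)

losses.sort()
print(f"median annual loss: ${losses[len(losses) // 2]:,.0f}")
print(f"95th percentile:    ${losses[int(len(losses) * 0.95)]:,.0f}")
print(f"99th percentile:    ${losses[int(len(losses) * 0.99)]:,.0f}")
# The 99th percentile dwarfs the median -- collapsing this spread into
# a single "risk number" throws away exactly the part that matters.
```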
In short, security decisions must be evidence-based, but that evidence is most likely NOT a quantified risk metric.