I agree with Ben Tomhave
on this particular subject. He is basically saying that we still don't have a good solution for reliable and repeatable risk assessments. I must say that this is not true for smaller scopes, like a single application or a small network or system. However, when we start talking about a risk assessment for an entire organization, I really don't trust the results.
A lot of people will say that this is not true, as they've already successfully completed several assessments. For those I would ask: do you think that just by following a methodology you can ensure that the results would be the same for any other (competent) security professional? Until we can answer that with a resounding "YES", I don't think we've developed a good enough methodology for risk assessments. In short, I want to see a methodology that produces results that can be used to:
- Compare the risk from different organizations (benchmarking!)
- Compare the risk for the same organization in different points of time
- Identify a comfortable level of risk to be pursued through the implementation of security measures
- Identify the results of applying security measures (answering the basic question, "was it helpful/worth doing?")
- Compare the risk from two or more different business processes, components or approaches
- Protect against "black swans" (this one is extremely hard)
It should also:
- Factor the organization's "blind spots" into the risk calculation
- Consider the interdependency of different business and technology processes and components (how much risk are your production systems inheriting from your development systems?)
- Be resilient to the fact that almost all medium and large organizations have very high levels of uncertainty about the variables usually needed for a meaningful risk calculation
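To make that last point concrete, here is a minimal sketch (all distributions and numbers are hypothetical, not taken from any real assessment) of how uncertainty in the inputs propagates into a simple "risk = frequency x impact" estimate. Instead of point estimates, each input is modeled as a range and sampled many times, and the spread of the resulting annual loss figures shows why a single-number result can be misleading:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is repeatable

def simulate_annual_loss(trials=100_000):
    """Monte Carlo sketch: sample uncertain inputs and combine them."""
    losses = []
    for _ in range(trials):
        # Hypothetical uncertain inputs:
        frequency = random.uniform(0.5, 4.0)     # incidents per year
        impact = random.lognormvariate(11, 1.0)  # cost per incident
        losses.append(frequency * impact)
    return sorted(losses)

losses = simulate_annual_loss()
p10 = losses[int(0.10 * len(losses))]
p90 = losses[int(0.90 * len(losses))]
print(f"median annual loss: {statistics.median(losses):,.0f}")
print(f"10th-90th percentile spread: {p10:,.0f} to {p90:,.0f}")
```

The point of the sketch is that the 10th-90th percentile spread spans an order of magnitude or more; a methodology that reports only the median hides exactly the uncertainty that makes organization-wide results hard to trust or compare.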
That's not easy, and most of the current methodologies cannot address all these issues. That's the fun part of our job today: we need to figure out how to do it.