From my Gartner Blog - Is It Really Failing That Bad?
One of Gartner’s 2016 Predicts documents includes a very interesting finding about vulnerabilities being exploited:
Existing vulnerabilities remain prevalent throughout the threat landscape, as 99.99% of exploits are based on vulnerabilities already known for at least one year.
Ok, so if known vulnerabilities are the target of basically all exploits, does it mean Vulnerability Management is a perfect example of FAIL? Should we just stop trying it and do something else? It is a tempting (and somewhat easy) conclusion, but I have to say this may not be the case.
First, let’s carefully examine the finding above and look at just the reported fact: exploits are based on vulnerabilities known for more than a year. That’s it. Now, let’s see some natural lines of thought that could come from that:
– Vulnerability Management’s goal is to reduce risk from existing known vulnerabilities. If known vulnerabilities are being exploited, it has failed its main purpose.
– As VM is not working, there is no point in trying to improve it; we’ve been trying that for a long time and we are still seeing breaches via known vulnerabilities.
– If VM is not working, we should find a different way to avoid being breached through the exploitation of known vulnerabilities. Alternatives would either eliminate the sources of vulnerabilities (such as software from vendors with a bad record of writing secure code), make exploitation harder or impossible (via additional security controls, such as EMET), or reduce the impact of exploitation (via architectural approaches such as microsegmentation, sandboxing, etc.).
The first point, on VM having failed: even if there are many organizations doing a great job on VM, there are still plenty doing it very badly or not doing it at all. So even if the population that can only be attacked through 0-days grows, the population vulnerable to conventional attacks is still big. Let’s say, being very optimistic, that 70% of organizations have perfect VM; it still means that 30% are vulnerable to old, known vulnerabilities.
On top of that, it’s cheaper to attack known vulnerabilities: research, tools, and PoCs are already available, so you don’t need the skills and time to find new vulnerabilities and produce exploits for them. There is a cheap method with plenty of vulnerable targets; why try anything different? So attackers exploiting known vulnerabilities is not necessarily incompatible with good VM being done by many organizations.
The second point, that there is no sense in improving VM: the overall result from a process like VM is not black and white. If you manage thousands of systems and you manage to move from 100% vulnerable systems to 10%, that is quite a good result, even if you still need to do something else to handle the successful attack attempts against those remaining 10%. Yes, you don’t eliminate the problem, but it brings the numbers down to a level where your other security processes, such as Incident Response and Detection, have a chance to be useful.
So VM won’t make you incident-free, but it can bring incidents down to a manageable number.
Last, but not least, the third point, because it could still be valid even considering the aspects above: if we can’t reach that perfect level with VM, can’t we try an alternative approach that does? Like what?
[HERE YOU TELL ME ABOUT THE INCREDIBLE WAY TO BE COMPLETELY IMMUNE TO ATTACKS]
Now, let’s look at that idea and assess it considering:
– Sentient attackers: you know, those bad guys evolve! They adapt! After you deploy your magic solution, what would they do to still be able to reach their goals? They won’t just give up and leave, so your solution should be threat-evolution proof.
– Changing IT environment: Great, you found a magic solution that makes all your desktops and servers hacker-proof. And then your users all migrate to vulnerable mobile devices. Or your data suddenly moves to the cloud. Yes, we are constantly dealing with a moving target, so as much as VM suffers from that, your solution will most likely also feel the impact of the ever-changing IT environment. It will be even worse if your solution makes change harder, as users will rebel against you and find neat ways to bypass your controls.
– Legacy: We keep dealing with untouchable stuff: systems where you can’t install new things, can’t migrate to a new (and better) platform, can’t remove vulnerable pieces. This is a strong limit on what we can achieve with VM, and it will also affect how well your solution performs. Does it require a move to a different technology or platform? If so, there is a high chance of leaving a piece of the environment behind (and vulnerable).
If your solution passes those three considerations and still delivers better value than VM, it might be worth trying. However, I’m skeptical that you could find something that would work for many different organizations, independent of their size and culture. There may be something that works perfectly for you, but the chances of it being a good candidate to replace VM all over the world are very, very slim.
I didn’t add Vulnerability Management to the title of this post for a reason. I believe it applies to many other security practices. They have their value, but you shouldn’t expect perfect results because they are just not achievable. Just like that old but still very valid quote from Marcus Ranum, “Will the future be more secure? It’ll be just as insecure as it possibly can, while still continuing to function. Just like it is today.”