Software security
This tweet from Pete Lindstrom made me think for a while about software security:
@SpireSec: Does anyone really think you can completely eliminate vulns? If not, when is software security "secure enough" #makesmewannascream
No, I don’t think we can eliminate software vulnerabilities; Pete’s question is perfect. If we accept the fact that software will always have vulnerabilities, how do we define when the risk is too high and when it is acceptable?
I like one of his suggestions, some kind of “vulnerability density” metric. But it doesn’t seem like the whole picture to me. In fact, I would probably favor software with more vulnerabilities but a better-managed patching process by the vendor over software with just a few vulnerabilities that are never patched or whose patches are a nightmare to deploy. So the factors included in such an assessment would be:
- Vulnerability density
- Average time from disclosure to patch by the vendor
- Patching process complexity/cost/risk
In short, it’s not only about how big the problem is, but also how easy it is to keep it under control.
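To make the idea concrete, here is a minimal sketch of how those three factors could be folded into a single comparison score. Everything in it is hypothetical: the field names, the normalization, and the weights are placeholders I made up for illustration, not anything from Pete’s thread or an established metric.

```python
from dataclasses import dataclass

@dataclass
class SoftwareProfile:
    # Illustrative inputs only; none of these are standard industry metrics.
    vulns_per_kloc: float       # vulnerability density
    avg_days_to_patch: float    # average time from disclosure to vendor patch
    patch_effort_hours: float   # rough complexity/cost of deploying a patch

def exposure_score(p: SoftwareProfile,
                   w_density: float = 1.0,
                   w_latency: float = 0.5,
                   w_effort: float = 0.3) -> float:
    """Toy composite score: higher means harder to keep under control.

    The weights are arbitrary placeholders; in practice they would reflect
    your own risk appetite and environment.
    """
    return (w_density * p.vulns_per_kloc
            + w_latency * (p.avg_days_to_patch / 30)   # normalize to months
            + w_effort * (p.patch_effort_hours / 8))    # normalize to workdays

# Many vulnerabilities but fast, cheap patching vs. few vulnerabilities
# that are patched slowly and painfully.
frequent_but_managed = SoftwareProfile(vulns_per_kloc=2.0,
                                       avg_days_to_patch=7,
                                       patch_effort_hours=1)
rare_but_neglected = SoftwareProfile(vulns_per_kloc=0.3,
                                     avg_days_to_patch=180,
                                     patch_effort_hours=40)

print(exposure_score(frequent_but_managed))  # ~2.15
print(exposure_score(rare_but_neglected))    # ~4.80
```

With these made-up numbers, the software with more vulnerabilities but a responsive patching process scores better than the one with fewer vulnerabilities that linger unpatched, which is exactly the trade-off described above.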
Another interesting aspect is that those factors depend entirely on the software provider. But factors on the client side matter too. If the technology environment you have in place is better prepared to protect Microsoft systems than Linux, a vulnerable Microsoft system is less of a problem for you than a vulnerable Linux system. Would you prefer software with fewer vulnerabilities but less monitoring capability, or more visibility with more vulnerabilities? It will depend on how your security strategy is assembled.
So, comparing software in terms of security is not trivial. I’ll go even further and say it’s context dependent too.