From my Gartner Blog - It’s Not (Only) That The Basics Are Hard…
While working on our research on security testing practices, and also on BAS (Breach and Attack Simulation) tools, I’ve noticed a common question about adding more testing: “why not put some real effort into doing the basics instead of yet another security test?” After all, there is no point in looking for holes when you don’t even have a functional vulnerability management program, right?
But the problem is not about not doing the basics. It is about making sure the basics are in place! Doing the basics is ok, but verifying that your basics are actually working is not trivial.
Think about the top 5 of the famous “20 Critical Security Controls”:
Inventory of Authorized and Unauthorized Devices
Inventory of Authorized and Unauthorized Software
Secure Configurations for Hardware and Software
Continuous Vulnerability Assessment and Remediation
Controlled Use of Administrative Privileges
How do you know your processes to maintain device and software inventories are working? What about the hardening, vulnerability management, and privileged access management processes? How confident are you that they are working properly?
If you think about the volume and frequency of changes in the technology environment of a big organization, it’s easy to see how the basic security controls can fail. Of course, good processes are built with verification and validation steps to catch exceptions and mistakes, but those still happen. This is a base rate problem: with the complexity and the sheer number of changes in the environment, even the best process out there will leave a few things behind. And when it comes to security, the “thing left behind” may be a badly maintained CMS exposed to the Internet, an unpatched CVSS 10 vulnerability, or a credential with excessive privileges and a weak (maybe even DEFAULT!) password.
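To put rough numbers on that base rate problem, here is a back-of-the-envelope sketch; the change volume and success rate below are made-up illustrative figures, not measurements from any real environment:

```python
# A minimal sketch of the base rate problem: even a process with a very
# high per-change success rate leaves residual issues at scale.

changes_per_year = 100_000        # deploys, config changes, new hosts, etc. (assumed)
process_success_rate = 0.999      # the control catches 99.9% of mistakes (assumed)

missed_per_year = changes_per_year * (1 - process_success_rate)
print(f"Expected issues left behind per year: {missed_per_year:.0f}")
# Expected issues left behind per year: 100
```

Even with a process that works 99.9% of the time, the attacker only needs one of those hundred leftovers.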
I’ve seen many pentests where the full compromise was achieved by exploiting those small mistakes and misconfigurations. The security team gets a report with a list of things to address that were really exceptions to processes that are doing a good job (again, you may argue that they are not doing a good job, but this is where I point out that there’s no such thing as a perfect control). So they clean those things up, double-check the controls, and think “this definitely will never happen again!”, only to see the next test, one year later, also succeed by exploiting a similar, but different, combination of unnoticed issues.
And that’s one of the main value drivers for BAS. Choosing to deploy a tool like that is recognizing that even good controls and processes will eventually fail, and putting something in place that continuously tries to find those issues left behind. By doing that in an automated manner you can cover the entire* environment consistently and very frequently, reducing the time those issues are exposed to real attackers. Is it another layer of control? Yes, it is. But it is an automated layer that keeps the overhead to a minimum. If your basics are indeed working well, the findings should not be overwhelming to the point of becoming a distraction.
* – You may have caught the funny gap in this rationale: you may also end up failing because the BAS tool is not checking the entire environment, due to an issue with inventory management. Or the tests may not work as intended because they are blocked by a firewall that should have an exception rule for the tool. Yes, using BAS is also a control, so it may fail too!
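To make that concrete, here is a minimal, hypothetical sketch of what such an automated validation layer boils down to; the inventory source, the check, and the schedule are all illustrative assumptions, not any real BAS product’s API:

```python
import time

# A hypothetical sketch of a continuous control-validation loop.
# Real BAS tools are far richer; everything here is illustrative.

def inventory():
    """Hosts in scope. Per the footnote above: if this inventory is
    incomplete, the validation layer itself has a blind spot."""
    return ["web-01.example.com", "db-01.example.com"]

def has_default_credentials(host):
    """Placeholder for a real test, e.g. trying known default logins."""
    return False  # assume the check found nothing on this host

CHECKS = [has_default_credentials]

def run_validation_cycle():
    """Run every check against every host and report the leftovers."""
    for host in inventory():
        for check in CHECKS:
            if check(host):
                print(f"ALERT: {check.__name__} flagged {host}")

while True:
    run_validation_cycle()
    time.sleep(24 * 60 * 60)  # re-test the entire environment daily
```

The point of the sketch is the loop, not the check: the value comes from re-testing everything on a short, fixed cadence instead of once a year.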