Beware of "low impact" in risk assessments
The details of the RSA breach emerged today and confirmed one thing I already expected to see: the privilege escalation path taken by the intruder from a regular user (one of the victims of the spear phishing e-mail) to the target data. That was the strategy we would typically choose in pentests ten years ago, and I don't see why it wouldn't work now. It points to something interesting in the security industry, something that borders on massive cognitive dissonance: the illusion of "low impact" intrusion targets.
Why did the Titanic sink? Ok, no big failure like that can be attributed to a single root cause, but I'll choose one here to illustrate my point: the compartmentalization failure. The Titanic's hull was built so that a hole in it could not sink the whole ship, as the water would only flood one compartment, which could be isolated from the others. The problem with the iceberg incident was that it ripped open the side of the ship across multiple compartments, flooding enough of them to bring the whole ship down. The threat assumption behind the Titanic's hull compartmentalization design was that the threat would be holes, not a gash. Wrong assumptions sink ships. And breach networks too.
Even though it's considered a best practice, it's not very common to see properly compartmentalized networks out there. When compartmentalization is applied, it's usually done on the server side, with multiple segregated networks where servers are grouped by different criteria, such as data classification or line of business. That's cool, but it mostly protects against intruders jumping from one group of servers to another. What about the users' network?
Let's be fair, it's very hard to deploy appropriate network controls at the distribution layer. There are lots of switches, sometimes with very limited management capabilities, spread across different physical locations. Not to mention wireless networks and remote and mobile users. Still, there are a lot of interesting products out there (most in the NAC realm) to help with that. But those networks are usually seen as less important than the server side, and incidents there are considered "low impact". That's bullshit. Most of the big breaches now target users' computers, where there will always be someone willing to click on links and open files all over the place. It's easier to get your bridgehead into a network on a user workstation than on a well protected and monitored server. From there the intruder will learn how the organization's infrastructure works, start harvesting interesting credentials, and go looking for the target data. All of it happening in parts of the network that are not usually monitored. Can you detect, today, a brute force authentication attack against the local built-in administrator account from one workstation to another? I mean, with no servers involved?
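That question is worth taking literally. As a rough illustration of the kind of rule that is usually missing, here's a minimal Python sketch that scans failed logon events (Windows Event ID 4625) already collected from workstations and flags bursts of failures against the built-in Administrator account between two workstations. The record fields, the "WS-" naming convention and the thresholds are all assumptions made for the sake of the example, not a recipe; the point is simply that workstation-to-workstation authentication failures deserve a detection rule of their own.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized event records, e.g. exported from workstation
# Windows Security logs. Field names below are assumptions, not any
# specific product's schema.
FAILED_LOGON = 4625                 # Windows event ID for a failed logon
THRESHOLD = 20                      # this many failures...
WINDOW = timedelta(minutes=5)       # ...within this time window

def detect_workstation_bruteforce(events, workstation_prefix="WS-"):
    """Flag bursts of failed logons against the built-in Administrator
    account where both source and target are workstations (no servers)."""
    buckets = defaultdict(list)     # (source_host, target_host) -> timestamps
    alerts = []
    for e in sorted(events, key=lambda e: e["timestamp"]):
        if e["event_id"] != FAILED_LOGON:
            continue
        if e["target_account"].lower() != "administrator":
            continue
        src, dst = e["source_host"], e["target_host"]
        # Workstation-to-workstation only, identified here by a naming
        # convention -- another assumption for the sake of the sketch.
        if not (src.startswith(workstation_prefix) and dst.startswith(workstation_prefix)):
            continue
        ts = e["timestamp"]
        recent = [t for t in buckets[(src, dst)] if ts - t <= WINDOW]
        recent.append(ts)
        buckets[(src, dst)] = recent
        if len(recent) >= THRESHOLD:
            alerts.append((src, dst, recent[0], ts, len(recent)))
    return alerts

if __name__ == "__main__":
    now = datetime.now()
    sample = [{"timestamp": now + timedelta(seconds=i * 10),
               "event_id": 4625,
               "target_account": "Administrator",
               "source_host": "WS-0042",
               "target_host": "WS-0107"} for i in range(25)]
    for src, dst, start, end, count in detect_workstation_bruteforce(sample):
        print(f"possible brute force: {count} failed logons from {src} to {dst} "
              f"between {start:%H:%M:%S} and {end:%H:%M:%S}")
```

If your monitoring can't answer something at least as simple as this for the users' network, that's a pretty good sign of where the "low impact" label is hiding risk.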
So, when deciding that the compromise of some targets, especially user workstations, would only cause "low impact", remember to consider what an intruder could do from there inside your network. Even better, hire a pentest that starts from your users' network, with your most critical data defined as the final target. Check how that test is seen by your security monitoring processes. The lessons from it will most likely change the "low impact" classification of a lot of things in your organization, which will cause a revolution in your risk assessments and in the prioritization of your security initiatives. And do it fast, before someone you'll call an APT does it for you.