Friday, April 29, 2011

Post Mortem lessons from Amazon

The AWS outage last week caused a surge in cloud reliability discussions. It turns out that using cloud service providers is much like using any other IT service: you must do your homework on how to deal with failures, and apply the appropriate vendor management procedures to choose your provider wisely.

 

Having said that, Amazon is still the leader in cloud computing services and, in my opinion, their behaviour in reacting to this incident clearly shows why. They have just published an extremely detailed post mortem analysis, presenting the root causes, explaining what is being done to avoid similar events in the future, and offering reasonable compensation to affected clients. It's also worth pointing out that they disclosed the root cause even though it was a change mistake; a very honest posture, in my view.

 

If all service providers behaved like that, we'd definitely keep seeing an increase in business moving to the cloud. Congratulations to Amazon.

Thursday, April 28, 2011

Must-read for those working with vuln. management

Seriously, if you have a Vulnerability Management process, you MUST read it. You don't need to necessarily apply everything in the presentation, but the idea behind it should really be considered when putting together a strategy to deal with the massive number of vulnerabilities that are published every day.

The key word here is "Intelligence": gathering more meaningful information and data that you can base your actions on. Beware of the "best practices" in Vulnerability Management...most of them don't include anything like that and just try to make your patching cycle wheel spin as fast as possible. That's not very effective and greatly increases the chance of breaking stuff.
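Just to make the idea more concrete, here is a minimal sketch of what intelligence-driven prioritization could look like. Everything in it (the field names, weights and asset criticality scale) is a made-up assumption for illustration, not the approach from the presentation or from any specific product:

    # Hypothetical sketch: rank vulnerabilities by context, not by CVSS alone.
    # All field names, weights and data sources are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Vuln:
        cve_id: str
        cvss: float             # base score from the advisory
        exploit_public: bool    # is a working exploit known to be available?
        exposed_to_internet: bool
        asset_criticality: int  # 1 (lab box) .. 5 (crown jewels), e.g. from your CMDB

    def priority(v: Vuln) -> float:
        """Combine raw severity with threat and asset context into one score."""
        score = v.cvss
        if v.exploit_public:
            score *= 1.5        # exploitation intelligence matters more than raw CVSS
        if v.exposed_to_internet:
            score *= 1.3
        return score * v.asset_criticality

    vulns = [
        Vuln("CVE-A", cvss=9.8, exploit_public=False, exposed_to_internet=False, asset_criticality=1),
        Vuln("CVE-B", cvss=6.5, exploit_public=True,  exposed_to_internet=True,  asset_criticality=5),
    ]

    for v in sorted(vulns, key=priority, reverse=True):
        print(f"{v.cve_id}: priority {priority(v):.1f}")

In this toy example the "critical" CVSS 9.8 issue on a lab box ends up below the medium-severity bug that is exposed, already being exploited and sitting on a critical asset. That reordering is the whole point of bringing intelligence into the cycle.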

McAfee VirusScan Enterprise: False Positive Detection Generic.dx!yxk in DAT 6329

 

McAfee Labs have issued an alert that McAfee VirusScan DAT file 6329 is returning a false positive for spsgui.exe. This is impacting SAP telephone connectivity functionality.


McAfee has a workaround for the issue documented in KB71739: https://kc.mcafee.com/corporate/index?page=content&id=KB71739

 

Chris Mohan --- Internet Storm Center Handler on Duty

They seem to be improving...at least it's not a core component of the OS this time :-)

Wednesday, April 20, 2011

Will we see the return of low-level vulnerabilities?

With the efforts towards the migration to IPv6 (and all the protocols related to it, such as ICMPv6) and DNSSEC, a lot of vendors are rushing to add support for those protocols to their products. Vulnerabilities in protocols at the lower layers of the OSI stack haven't been common lately, but there were plenty of them when the Internet became popular (remember the Ping of Death?). The times when you could bring down a system with a simple "ping" seemed to be over, but now, with a lot of new code handling the basic stuff being deployed, we'll probably see a new surge of vulnerabilities like those being exploited.

However, the scenario is quite different now. Some factors that may change how this plays out:

  • The Internet today is slightly different from the one we had in the 90's...I wonder what would happen if someone found a new PoD today.
  • Developers know that their code will be attacked, that things like "buffer overflows" can be exploited. Big vendors have SDLCs in place.
  • The research community is bigger and better prepared. A lot of very good people trying to find bugs.
  • The tools to find bugs have also evolved. A lot of researchers are pointing their shiny new fuzzers at everything that runs code (a quick sketch of that kind of dumb fuzzing follows this list).
  • More powerful and well-funded organizations are searching for "cyberweapons".
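To illustrate the last two points, here is a minimal sketch of the kind of "dumb" mutation fuzzing that gets pointed at new protocol-handling code. The parse_packet function below is a hypothetical stand-in for whatever new IPv6/ICMPv6 parsing routine is under test, written with a deliberately naive length check so the loop has something to find:

    # Minimal "dumb" mutation fuzzer sketch. parse_packet() is a hypothetical
    # stand-in for new protocol-handling code (e.g., an ICMPv6 option parser).
    import random

    def parse_packet(data: bytes) -> None:
        # Placeholder target: a real harness would call the code under test here.
        if len(data) > 4 and data[0] == 0x86:
            _ = data[4 + data[1]]   # naive length handling, the kind of bug fuzzers find

    seed = bytes.fromhex("8600aabb0102030405060708")  # a valid-looking sample packet

    random.seed(1)
    for i in range(100_000):
        mutated = bytearray(seed)
        for _ in range(random.randint(1, 4)):          # flip a few random bytes
            mutated[random.randrange(len(mutated))] = random.randrange(256)
        try:
            parse_packet(bytes(mutated))
        except IndexError as exc:
            print(f"crash-like failure on iteration {i}: {exc!r}")
            break

Real fuzzers are far smarter than this (coverage feedback, grammar awareness, crash triage), but the basic idea is the same: take valid input, mangle it cheaply many times, and let the parser tell you where its assumptions break.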

Over the last few years we've seen attackers' targets moving up the OSI layers. With all the new code being deployed, there's no reason to believe they won't revisit the lower levels to find "lower hanging fruits" (pardon the pun).

Tuesday, April 19, 2011

Quick comments on the Verizon DBIR 2011 report

The Verizon Business guys have just delivered the 2011 DBIR. Again, a very nice job and one of the key sources for CISOs around the world to do their planning, decision making and prioritization.
 
The most commented-on point of this year's report is the huge drop in the number of records affected, even with a bigger number of breaches included in the report. I think this is a case of over-analyzing, and some really stupid explanations are flying around. In my opinion it's just a matter of how the numbers behave: if you look at the figures across the report's multiple editions, the number of breach cases (let's say instances) is more or less reasonable, within the same order of magnitude, and reflects the growing effort of the authors to get more instances into their database.
 
The number of records, on the other hand, will always vary wildly, and unless it's considered together with a lot of additional categorization and normalization it's not really suitable for deriving useful conclusions. The number of records kept by different organizations ranges from hundreds to hundreds of millions, so a breach at a single organization with a huge number of records (a government agency, for example) can completely change the numbers of the entire report. The authors are aware of that, and whenever possible they try to make it clear in the report. Of course, a lot of people will just skip all those words and go straight for the juicy charts :-)
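A quick, made-up numerical illustration of the point (the figures below are invented and have nothing to do with the DBIR data set):

    from statistics import median

    # Invented numbers, just to show how a single outlier breach dominates the
    # total record count even though the number of breaches barely changes.
    yearly_breaches = {
        "year A": [5_000, 12_000, 130_000_000, 40_000, 8_000],     # one mega-breach
        "year B": [7_000, 15_000, 30_000, 22_000, 9_000, 11_000],  # more cases, no outlier
    }

    for year, records in yearly_breaches.items():
        print(f"{year}: {len(records)} breaches, {sum(records):,} total records, "
              f"median {median(records):,.0f} records per breach")

Year B has more breaches than year A but orders of magnitude fewer total records, simply because year A contains one outlier. The breach count and the median stay stable while the total swings wildly, which is exactly why the total by itself is a poor basis for conclusions.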
 
Anyway, I really hope Verizon allows us to play with their database in the future. Being able to produce our own charts, using filters based on different organization demographics, would greatly increase the value of the data for security planning. Maybe a two-way agreement (something like an expansion of the VERIS program, which, by the way, is already bringing nice results), where organizations submitting breach information get access to the database, would not only make the report even better but also more useful for its consumers.

Monday, April 4, 2011

Beware of "low impact" in risk assessments

The details of the RSA breach emerged today and confirmed one thing I already expected to see: the privilege escalation path taken by the intruder, from a regular user (one of the victims of the spear phishing e-mail) to the target data. That was the strategy we used to choose in pentesting 10 years ago, and I don't see why it wouldn't work now. It also points to something interesting that happens in the security industry, with aspects of massive cognitive dissonance: the illusion of "low impact" intrusion targets.

Why did the Titanic sink? OK, no big failure like that can be attributed to a single root cause, but I'll choose one here to illustrate my point: the compartmentalization failure. The Titanic's hull was built in a way that a hole in it could not sink the whole ship, as the water would only flood one compartment, which could be isolated from the others. The issue with the iceberg incident was that it ripped the side of the ship in a way that flooded multiple compartments, bringing the whole ship down. The threat assumption in the Titanic's hull compartmentalization design was that the threats would be holes, not a gash. Wrong assumptions sink ships. And breach networks too.

Even though it's considered a best practice, it's not very common to see properly compartmentalized networks out there. When compartmentalization is applied, it's usually done on the server side, with multiple segregated networks and servers grouped according to different criteria, such as data classification or line of business. That's cool, but it only protects against intruders jumping from one group of servers to another. What about the user network?

Let's be fair: it's very hard to deploy appropriate network controls at the distribution layer. There are lots of switches, sometimes with very limited management capabilities, spread across different physical locations. Not to mention wireless networks and remote and mobile users. However, there are a lot of interesting products out there (most in the NAC realm) to help with that. But those networks are usually seen as less important than the server side, with incidents there considered "low impact". That's bullshit.

Most of the big breaches now start on the user's computer, where there will be someone willing to click on links and open files all over the place. It's easier to get your bridgehead in a network on a user workstation than on a well-protected and monitored server. From there the intruder will learn how the organization's infrastructure works, start harvesting interesting credentials and look for the target data; all of it in a part of the network that is not usually monitored. Can you detect, today, a brute force authentication attack against the local built-in administrator account from one workstation to another? I mean, with no servers involved?
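To make that question concrete, here is a rough sketch (not a production detection rule) of the logic a monitoring process would need. The event format is a simplified assumption of what a log collector could hand you, loosely modelled on Windows failed-logon events, and the "WKS-" naming convention for workstations is made up for the example:

    from collections import Counter

    # Rough sketch: flag a workstation hammering the built-in Administrator
    # account on other workstations. Event format and naming are assumptions.
    THRESHOLD = 20  # failed attempts from one source against workstations, per time window

    def is_workstation(host: str) -> bool:
        # Assumption: workstations follow a "WKS-" naming convention in this environment.
        return host.upper().startswith("WKS-")

    def detect_bruteforce(failed_logons: list) -> list:
        """failed_logons: [{'src': 'WKS-0042', 'dst': 'WKS-0107', 'account': 'Administrator'}, ...]"""
        counts = Counter(
            e["src"]
            for e in failed_logons
            if e["account"].lower() == "administrator"
            and is_workstation(e["src"])
            and is_workstation(e["dst"])
            and e["src"] != e["dst"]
        )
        return [src for src, n in counts.items() if n >= THRESHOLD]

    # Example: one compromised workstation spraying the local admin account.
    events = [{"src": "WKS-0042", "dst": f"WKS-{i:04d}", "account": "Administrator"}
              for i in range(25)]
    print(detect_bruteforce(events))   # ['WKS-0042']

Nothing here is sophisticated; the hard part is not the logic but actually collecting authentication logs from workstations in the first place, which is exactly what gets skipped when the user network is labeled "low impact".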

So, when deciding that some targets, especially users' workstations, would only cause "low impact", remember to consider what an intruder could do from them inside your network. Even better, hire a pentest that starts from your user network, with your most critical data defined as the final target, and check how that test is seen by your security monitoring processes. The lessons from that exercise will most likely change the "low impact" classification of a lot of things in your organization, which will cause a revolution in your risk assessments and in the prioritization of your security initiatives. And do it fast, before someone you'll end up calling an APT does it for you.

Friday, April 1, 2011

World Economic Forum 2011 Risk Report

With all the discussions about risk measurement and how to present risk information, the report created by the World Economic Forum about global risks is full of awesome ways to do it. They managed to include things like risk perception and uncertainty in their graphical representations. A couple of good examples can be seen below:



By the way, "cyber risks" are at the top of the "Risks to Watch" list; in other words, risks with a lot of uncertainty and hard-to-predict trends. It makes sense.

1 Raindrop: "I know" and "I don't know" schools of security architecture

Excerpt from Howard Marks’ July 2003 Memo “The Most Important Thing”:

"One thing each market participant has to decide is whether he (or she) does or does not believe in the ability to see into the future: the “I know” school versus the “I don’t know” school. The ramifications of this decision are enormous.

If you know what lies ahead, you’ll feel free to invest aggressively, to concentrate positions in the assets you think will do best, and to actively time the market, moving in and out of asset classes as your opinion of their prospects waxes and wanes. If you feel the future isn’t knowable, on the other hand, you’ll invest defensively, acting to avoid losses rather than maximize gains, diversifying more thoroughly, and eschewing efforts at adroit timing.

Of course, I feel strongly that the latter course is the right one. I don’t think many people know more than the consensus about the future of economies and markets. I don’t think markets will ever cease to surprise, or thus that they can be timed. And I think avoiding losses is much more important than pursuing major gains if one is to achieve the absolute prerequisite for investment success: survival."

In security architecture terms, I differentiate Identity & Access Services, which are designed to help the enterprise achieve some business goals (and despite what a lot of people say about ROSI, these services have ROI attached to them from day one), but these are implicitly an "I know" kind of service, or at least "I guess."

On the other hand, there are Defensive services like monitoring and logging, which implicitly say "I don't know" how I am going to be attacked, how and where things will fail, but I need to build a margin of safety into the system to be able to react if and when they do.

[Image: Security Triangle]

The Security Triangle shows that depending on whether you have "I know" assumptions or "I don't know" assumptions, you'll end up with a different-looking architecture. Of course, it's not a binary choice, you will have some of both, but there are always priorities and choices. The goal for security architects is to be clear about the choices, because whether you are trying to know or accepting that you don't know, your security services delivery, measurements and processes will vary.

When you are building out monitoring services, your goal is to identify assets and event types with the aim of increasing visibility. This typically results in people, process and technology, like an IRT, that respond to catalysts and vectors that are often not known at the time the system is being built.

When you are building out Identity & Access services, you are assuming much more knowledge of subjects, objects, attributes, data and applications. This mapping typically manifests in architecture like Identity & Access Management systems, publishing and enforcing known known relationships.

In each of these cases the toolsets are different, how you staff for them is different, the design and operations are totally different, but they all get lumped under the title "security."

Very good post from Gunnar Peterson. I've always thought there's a huge difference between these two kinds of security technologies. Mike Rothman likes to define them as "let the good guys in" and "keep the bad guys out" technologies. I even wonder if anyone has ever tried to model their security teams like that, or as something like an "external threat team" and an "internal control team". That would be interesting.