Thursday, September 20, 2012

This is a test post

Sorry about this, but Posterous put some odd stuff on my LinkedIn profile through my last post's "autopost" feature. Investigating what's going on...

Wednesday, September 19, 2012

Best of breed

There's an ongoing debate about whether it makes sense to buy best of breed security products or whether "good enough" is good enough (OK, that was ugly :)). Mike Rothman wrote a very good post about this a few years ago. I agree with most of his points: "best of breed" makes sense for innovative products but not for mature technologies. Lately, though, I've been seeing some discussion that expands the debate.

I think Mike's post makes perfect sense for products. However, I've been seeing the same rationale applied to services, and I'm really not sure it holds there. Let's think about Managed Security Services, for example. If you are outsourcing your SOC, even if it's a mature service offering in the market, does it make sense to go with "good enough"? It doesn't look like it does.

For mature products there isn't a big difference between the best products and the rest of the pack. In addition, there are usually benefits to getting a product that is part of a suite you already have in place, from a vendor with which you already have an enterprise license agreement, or one that integrates better with your other tools. But services are about human intelligence. You get what you pay for, plain and simple. You can have your big-box IT services provider doing that, but if you look at the way they operate you'll always see the same things: high turnover, low salaries, unskilled and inexperienced employees. They simply can't provide the same level of service as the boutique providers that specialize in that type of service and thus put far more energy into getting the right people doing it. The features of a service are directly linked to the people providing it, so the gap between the best and the rest is bigger than it is for products.

There are services where that wouldn't matter, where you really don't need intelligence and content, just a bunch of eyes and hands. Those are the services that are usually outsourced in all other disciplines, and I don't see why it would be different in IT or Infosec. But we often underestimate the skills necessary for some services, and MSS is usually a case in point. A SOC manned by unprepared analysts will only spit out the alerts produced by the default configurations and out-of-the-box rules of standard tools. A best of breed SOC will bring intelligence to the work: customized rules and configurations, threat information, ingestion of internal context, and meaningful alerts. Be careful when discarding the best of breed. For highly specialized services the "best" is the minimum you should expect, and "good enough" will almost always be "not enough".
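To make "ingesting internal context" concrete, here is a minimal sketch in Python of the kind of enrichment a good SOC layers on top of raw tool alerts. The asset inventory, alert fields, and priority labels are all hypothetical illustrations, not any specific product's API.

```python
# Minimal sketch: enrich a raw tool alert with internal asset context
# and re-score it. All data and field names are hypothetical.

# Hypothetical internal asset inventory: role and criticality per host.
ASSET_CONTEXT = {
    "10.1.2.15": {"role": "domain controller", "criticality": "high"},
    "10.1.7.42": {"role": "developer workstation", "criticality": "low"},
}

def enrich(alert):
    """Attach internal context to a raw alert and adjust its priority."""
    ctx = ASSET_CONTEXT.get(alert["dst_ip"],
                            {"role": "unknown", "criticality": "low"})
    alert["asset_role"] = ctx["role"]
    # The same signature matters far more against a critical asset.
    if ctx["criticality"] == "high":
        alert["priority"] = "investigate now"
    else:
        alert["priority"] = "batch review"
    return alert

raw = {"rule": "SMB brute force", "dst_ip": "10.1.2.15", "priority": "medium"}
print(enrich(raw))
```

The point of the sketch is that the enrichment data only exists inside the customer's environment; an analyst who doesn't know what 10.1.2.15 is can only ever forward the default alert.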

Friday, September 14, 2012

What should I do about BYOD?

There are lots of people providing canned advice about BYOD (and about all the cloud-related stuff too). It's very important to understand that the only correct answer to the "what should I do about BYOD" question is the standard lawyer line: it depends.

Technology trends (and I don't question the fact that BYOD is indeed a trend) often bring advice in the form of "you can't fight the future" and "security is past the time of blocking new stuff". I definitely agree that anyone working in security should keep an open mind, especially about technology trends. But those who are always anxious to stay on the bleeding edge should also understand that security must weigh multiple factors when making decisions. BYOD is a very good example.

How prepared is the organization for BYOD? What's the maturity and technology state of the organization's network? What about the applications: are they ready to be used in a BYOD way? What's the point of allowing people to work on their iPads if most of them spend their day in fat-client applications running on Windows? And what about access control, encryption, and so on? Is your network prepared to provide those for personally owned devices?

Technology is only one aspect. Of course it's easier for people to read email on a single nice smartphone. But are there any compliance regulations to consider? Financial institutions usually have to comply with a lot of rules about monitoring and controlling employee communications; how will they enforce those in a BYOD model?

It may be straightforward to decide about BYOD at a startup in California, but a big defense supplier has a few additional threats to consider. Keep that in mind when someone asks for advice on BYOD or any other technology trend. The answer may not be as simple as you would expect.

Monday, September 10, 2012

DLP and encryption

The "Security Manager's Journal" series of articles from Network World is a really nice way to understand the day-to-day challenges of real Infosec shops out there. Today's article, "DLP tool is suddenly blind to email", is a very interesting example of the challenges related to DLP and encryption. For me, though, the most interesting aspect of the article is the approach to decision making around the issue reported.

Summarizing the post: the author had implemented a DLP solution, but it recently stopped catching data leaving by email. The root cause turned out to be opportunistic TLS encryption between their Exchange server and the cloud-based anti-spam service. Having found that, the author walks through potential solutions to let the DLP system inspect the encrypted traffic.

What I found interesting about the article is that he never mentioned the alternative of disabling encryption. WHAT?? ARE YOU FREAKING NUTS?? Yes, I'm serious. Encryption is always good, but let's think it through: email leaving the anti-spam provider will probably be delivered to its final destination unencrypted anyway. So what's the real benefit of encrypting it between the Exchange server and the anti-spam system? Is it more valuable than the ability to scan (and block?) outbound messages for data leakage?
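By the way, whether a hop like this can use opportunistic TLS is easy to check: the receiving relay advertises STARTTLS in its SMTP extension list. A minimal Python sketch (the hostname is a placeholder, not the provider from the article):

```python
# Minimal sketch: check whether an SMTP relay advertises opportunistic TLS
# (STARTTLS). The hostname below is a made-up placeholder.
import smtplib

def offers_starttls(host, port=25):
    """Return True if the relay advertises STARTTLS in its EHLO response."""
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.ehlo()
        return smtp.has_extn("starttls")

print(offers_starttls("mail.example.com"))
```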

That's the kind of discussion I'd like to see when issues like this one come up. We know that security is always about trade-offs, and this case presents a slightly different one: one control traded for another. How do we compare the value of the two controls? What's the organization's priority in this case? Those are all questions that would help us understand the scenario and add a more mature risk management spin to the whole thing.

Elderwood project: the FUD, and some reality too

It's been interesting to read all the frenzy about what Symantec has been calling "The Elderwood Project". The summary is: "we are seeing these guys, who were behind that Aurora thing some time ago, still using a lot of 0-days in their hacking of NGOs, the defense supply chain and government agencies".

There are many different spins on the story on the interwebs now, and many degrees of FUD around it too, so it's important to analyze it and think about it carefully. Take, for example, this piece from Symantec's blog post:

In order to discover these vulnerabilities, a large undertaking would be required by the attackers to thoroughly reverse-engineer the compiled applications. This effort would be substantially reduced if they had access to source code. The group seemingly has an unlimited supply of zero-day vulnerabilities. The vulnerabilities are used as needed, often within close succession of each other if exposure of the currently used vulnerability is imminent.

I'm not here to downplay the effort of finding new 0-days. I couldn't do it with my technical skills; I know it's really hardcore work. But wait a minute: "a large undertaking to thoroughly reverse-engineer the compiled applications", or supposed access to source code? My skeptic alarm rings loud here; we know plenty of security researchers who have been finding dozens of vulnerabilities in the same applications without access to source code and, if not with "minimum effort", then just by playing with fuzzers and doing some part-time (sometimes just-for-fun) testing. While I think there are very good people putting decent effort into finding these vulnerabilities, I don't think we can conclude there is a huge lab with never-ending resources behind them. There might be, but we just don't know.
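To illustrate how low the bar for that "part-time testing" can be, here is a toy dumb mutation fuzzer in Python. The target binary and seed file are hypothetical placeholders; real fuzzing adds coverage feedback, crash triage, and much more, but the basic loop really is this simple.

```python
# Toy mutation fuzzer sketch: flip random bytes in a seed file and feed it
# to a target parser, watching for crashes. Target path and seed file are
# hypothetical placeholders.
import random
import subprocess
import tempfile

SEED_FILE = "sample.doc"      # hypothetical seed input
TARGET = ["./target_parser"]  # hypothetical target binary

def mutate(data, flips=8):
    """Return a copy of data with a handful of random byte corruptions."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seed = open(SEED_FILE, "rb").read()
for i in range(10000):
    with tempfile.NamedTemporaryFile(suffix=".doc", delete=False) as f:
        f.write(mutate(seed))
        case = f.name
    result = subprocess.run(TARGET + [case], capture_output=True)
    if result.returncode < 0:  # killed by a signal, e.g. SIGSEGV
        print(f"crash on iteration {i}: {case} (signal {-result.returncode})")
```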

Now, there's an important lesson to take from this: patching is just not enough, and you have to have good defense in depth and detection capabilities in place too. If the adversary has an "unfair" advantage, such as 0-days, we need to level the field by boosting our monitoring capabilities. That's important for "regular" organizations, but extremely important for those that make an interesting target for motivated attackers (state-sponsored groups, crime rings, carders, etc.). This is not FUD, guys. It's reality.
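As one deliberately simplified example of what "boosting monitoring" can mean: implants dropped via 0-days still tend to beacon out to their command and control on a regular schedule, and that regularity is visible in ordinary proxy logs. A sketch, with made-up log fields and thresholds:

```python
# Simplified sketch of one detection idea: flag hosts whose outbound
# connections to a destination are suspiciously regular (beaconing).
# The log tuples and the jitter threshold are made-up illustrations.
from collections import defaultdict
from statistics import pstdev

# (timestamp_seconds, source_host, destination) parsed from proxy logs
events = [
    (0, "pc-042", "203.0.113.9"), (300, "pc-042", "203.0.113.9"),
    (600, "pc-042", "203.0.113.9"), (905, "pc-042", "203.0.113.9"),
    (120, "pc-007", "198.51.100.4"), (4000, "pc-007", "198.51.100.4"),
]

series = defaultdict(list)
for ts, host, dst in events:
    series[(host, dst)].append(ts)

for (host, dst), times in series.items():
    times.sort()
    gaps = [b - a for a, b in zip(times, times[1:])]
    # Near-constant intervals between connections suggest an automated beacon.
    if len(gaps) >= 2 and pstdev(gaps) < 10:
        print(f"possible beacon: {host} -> {dst} every ~{gaps[0]}s")
```

A patched-but-unmonitored network misses this entirely; a monitored one catches the intrusion even when the initial exploit was unstoppable.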