Wednesday, May 30, 2012

Flame, exactly how we predicted 5 years ago

This week the news is all about Flame, the father of all malware. There are several interesting posts and code analyses floating around about it, but what I wanted to highlight is how Flame follows the evolution pattern that my friends Victor and Fucs and I presented back in 2007 (Black Hat Europe). Some of Flame's characteristics that we talked about at that time are:

- Modular architecture: we said "The payload, the part of the bot that is responsible for its 'features', can also be developed as a separate layer. It would be composed by several features modules, which receive the commands from the command layer. The bot can just download a new feature module, that is programmed to receive its parameters through a defined API" (see the sketch below)

- Script language: from our paper: "If the botnet master's objective is to avoid transferring executable binaries while maintaining the ability to have flexible bots with extensible functionality, there is also the option of using script languages."

 

Flame was designed to allow updates for its exploits. 
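Just to make the idea concrete, here is a minimal sketch (in TypeScript, purely for illustration) of the kind of modular design we described back then: a core that loads feature modules at runtime and hands them their parameters through a single defined interface. Every name in it (FeatureModule, ModularCore, loadModule) is invented for the sketch; it is obviously not Flame's actual code.

```typescript
// Illustrative only: a core that loads feature modules at runtime and
// dispatches commands to them through a fixed, minimal interface.
// All names here are invented for the sketch.

interface FeatureModule {
  name: string;
  // The single, defined entry point through which a module receives its parameters.
  run(params: Record<string, string>): Promise<void>;
}

class ModularCore {
  private modules = new Map<string, FeatureModule>();

  // "Downloading" a new feature is just fetching and registering another module.
  async loadModule(url: string): Promise<void> {
    const mod = (await import(url)).default as FeatureModule;
    this.modules.set(mod.name, mod);
  }

  // The command layer only knows module names and parameters, never internals.
  async dispatch(command: string, params: Record<string, string>): Promise<void> {
    const mod = this.modules.get(command);
    if (!mod) throw new Error(`no module registered for "${command}"`);
    await mod.run(params);
  }
}
```

The point of the pattern is that adding a new "feature" never requires shipping a new core binary: one more module implementing the interface is enough, and the command layer can drive it immediately.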

Thursday, May 24, 2012

Browsers and malware

So, "Google Chrome Just Passed Internet Explorer To Become The World's Most Popular Web Browser". What does that mean for security?
I think that, putting aside the ever-present privacy concerns whenever the name "Google" is involved, it's good to have a browser with a security-conscious design being widely adopted. However, I think the interesting part is to consider the data below together with some other factors:


Source: StatCounter Global Stats - Browser Market Share
Have you noticed that browser vulnerabilities are not the key vectors being used by attackers to compromise end users anymore? The culprits of the day are Adobe Flash and Acrobat Reader. And it's easy to understand why.
If we look at the graph above, we'll notice there's no browser with more than 40% of the market. So if you are an attacker and you want to write malware that hits a bigger chunk of the "victim space", it's not a good strategy to use browser-specific exploits, such as those related to MS12-010. Wouldn't it be better to target something that runs on 99% of PCs (considering that PCs are still the major malware target), or even 73%? Those are Adobe Flash and Java, respectively.
Whenever there's monoculture, there is increased security risk, as Dan Geer has been saying for years:

In biology, a monoculture--a singular species that supplants all others--is a bad thing. When every plant is the same species, every plant is susceptible to the same predators, the same diseases. Examples are as plentiful as they are sad: Consider the virus that brought on the Irish potato famine or the boll weevil that nearly obliterated the South's cotton crop in the early 20th century, and you see the destruction that human-made monocultures bring upon themselves.
Computers are no different. Computer viruses spread efficiently, lethally when all computers on a network run the same software. MyDoom, Melissa and MSBlast were a function not of the Internet, but of a Windows monoculture. They caused havoc because they were designed for specific vulnerabilities of Windows. Since one virus generally affects one species of software, any computing monoculture poses a hazard the same way it does in nature.

As always, what's old is new again. Geer was talking about document formats at that time; now the discussion is around active web content. But there is hope: HTML5 has been seen as something that will allow the diversification necessary to reduce the risk. No more single piece of software required to browse the web, no dependence on specific operating systems or platforms.
But for this specific case, there is a catch: HTML5 is so powerful that there's a risk it becomes not only an attack vector, but a new species by itself, a huge new monoculture. Things like the WebSocket API could make it the new One, the One to rule them all, the One to bring them all and in the darkness bind them (Yes! I did it! I quoted Lord of the Rings :-)). Cross-platform malware is the rising new threat, leveraging HTML5 features to exploit PC, Mac, iOS and Android.
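To see why something like the WebSocket API is so tempting for cross-platform code, consider how little it takes to open a full-duplex channel that behaves the same on a Windows PC, a Mac, an iPhone or an Android phone, as long as an HTML5-capable browser is running it. A minimal sketch (the endpoint URL is made up):

```typescript
// Minimal WebSocket client: the same script runs in any HTML5 browser,
// regardless of the underlying OS or hardware. (Endpoint is fictitious.)
const socket = new WebSocket("wss://example.com/channel");

socket.onopen = () => {
  // Full-duplex channel is up; send a message to the server.
  socket.send(JSON.stringify({ hello: "from any platform" }));
};

socket.onmessage = (event: MessageEvent) => {
  // Commands or data pushed by the server arrive here in real time.
  console.log("received:", event.data);
};

socket.onerror = () => socket.close();
```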
The prospect (temptation?) of malware that can potentially run on all those platforms is certainly drawing the attention of all sorts of colored hats. JavaScript worms have been a reality for a long time, so there's really no reason to believe HTML5 malware won't be a rising issue in the near future. Trend Micro's Robert McArdle wrote a very nice piece about HTML5 attack scenarios that illustrates our future challenges around it.
So be it. The browser monoculture is dead (at least for now - keep an eye on Chrome's rising trend!). Long live HTML5 monoculture!

Friday, May 18, 2012

which tool to pick?

A friend of mine sent me an e-mail asking for my opinion on some tools for a DRP (Disaster Recovery Planning) project. It’s a subject that I haven’t touched for a long time, but in the end the thought process around his question ended up being so interesting from a security planning perspective that I thought it could be good material for a post.

 

He asked me about two specific tools, LDRPS and Archer. We had a good experience with LDRPS when we worked together on a BCP/DRP project a few years ago, and someone suggested Archer to him. As I said above, it’s been a long time since I worked with BCP processes, so I spent a few minutes researching the current state of those tools in order to give him a decent opinion.

 

The interesting aspect of his question is that it replicates a very common dilemma we face when developing tool roadmaps and architectures: best of breed vs. the generic solution.

 

I haven’t put my hands on those tools for BCP recently, but I’m certain that LDRPS is better than Archer in a simple feature-by-feature comparison. LDRPS was developed by Strohl, later acquired by Sungard, two companies specialized in availability services. It’s used by a lot of Fortune 500 companies and it’s been evolving for literally decades.

 

Archer, on the other hand, is a GRC tool that happens to have a BCP module. It’s a tool to solve a broader variety of problems than LDRPS, and I bet that it won’t have all the bells and whistles LDRPS has for developing and testing disaster and business continuity plans. But (and there is always a but)…

 

The wider scope of Archer can be the source of its weakness in this case, but it’s also its major strength. There are a lot of common steps and similarities between the BCP/DRP processes and the other processes supported by Archer’s modules, such as Risk Management, Compliance Management and Vendor Management. For all these processes it’s necessary to identify data, assets, locations and other components of the organization, and to establish ownership, value/impact and interdependencies. And that’s what could make Archer the best pick for my friend. Depending on the organization’s strategy for those other processes, they might be able to leverage work already done, or re-use the data gathered during the BCP project on those other fronts. They may end up with a tool that is not the best available for developing Business Continuity and Disaster Recovery plans, but they might get more value by leveraging the data obtained during that project elsewhere.

 

Integration and data sharing are among the key aspects of a successful security strategy. Good security architects and managers will always take that into account when choosing the tools to implement that strategy.

 

Wednesday, May 16, 2012

Grimes article on firewalls

It’s always interesting when an article or blog post generates multiple responses from the security blogosphere. It lets us gauge the general opinion about that particular idea or concept. It was no different with this post from Roger Grimes, “Why you don’t need a firewall”. It sounds very similar to the general rationale behind the Jericho project, although those guys have clearly stated that the firewall doesn’t have to be removed; it just assumes a smaller role in the new security strategy.

There are similar opinions about the article here, here, here and here. Some different spins, but the general understanding is that the firewall is not a silver bullet, but it has its uses. The most important thing to consider when assessing a firewall’s value is to understand the value of choke points:

In military strategy, a choke point (or chokepoint) is a geographical feature on land such as a valley, defile or a bridge, or at sea such as a strait which an armed force is forced to pass, sometimes on a substantially narrower front, and therefore greatly decreasing its combat power, in order to reach its objective. A choke point would allow a numerically inferior defending force to successfully prevent a larger opponent because the attacker would not be able to bring his superior numbers to bear.

(from the “Choke Point” Wikipedia entry)

Firewalls are also valuable enablers of other security tools, such as IPS/IDS and deep packet inspection systems. Deploying those systems behind firewalls reduces the amount of data to be inspected and the number of events generated for investigation, reducing capital (hardware) and operational (people) costs for those controls. There are some decent metrics out there for sizing deployments of those tools based on the amount of traffic being monitored, so it should be straightforward to factor them into a cost/benefit analysis for firewalls.
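To make that cost/benefit point concrete, here is a back-of-the-envelope sizing sketch. Every figure in it is invented purely for illustration; plug in your own traffic numbers and sensor capacity:

```typescript
// Back-of-the-envelope sizing: how many IDS/IPS sensors are needed with and
// without a firewall filtering traffic first? All figures are invented.
const totalTrafficMbps = 10_000;   // traffic arriving at the perimeter (assumed)
const firewallDropRatio = 0.5;     // fraction the firewall filters out (assumed)
const sensorCapacityMbps = 2_000;  // throughput one sensor can inspect (assumed)

const sensorsWithoutFirewall = Math.ceil(totalTrafficMbps / sensorCapacityMbps);

const inspectedMbps = totalTrafficMbps * (1 - firewallDropRatio);
const sensorsBehindFirewall = Math.ceil(inspectedMbps / sensorCapacityMbps);

console.log(`sensors without a firewall: ${sensorsWithoutFirewall}`); // 5
console.log(`sensors behind a firewall:  ${sensorsBehindFirewall}`);  // 3
```

Multiply the difference by the cost of a sensor appliance and of the analysts needed to triage its events, and the choke point starts paying for itself.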

One can argue that we should also consider the additional costs of the firewall deployment itself, but the controls above are just one example of things that cost less because of (well-managed) firewalls. Those reductions add up to the point where not having firewalls is just a very bad business decision.

Tuesday, May 15, 2012

PCI DSS overhaul necessary

First, I have to admit this post should have been submitted to the PCI Security Standards Council as part of the DSS feedback phase that has just closed, but my concerns are related to the core structure and format of the standard, and I believe submitting them wouldn’t have made any difference, since the last changes to PCI DSS have been mostly incremental. Anyway, that’s my mea culpa for delivering criticism without contributing to improving the standard.

I’ve been working exclusively with PCI during the past year. Being involved in remediation activities rather than assessments gives us a direct view of the challenges of getting the standard fully implemented, even when considering it a "bare minimum" in terms of security.

In short, PCI DSS must become more effective: it has to shift from a simple list of controls to an outcome-based system.

All PCI requirements carry the same weight for organizations trying to achieve compliance. There was an initiative from the Council called "the prioritized approach", but it’s more of a roadmap towards compliance than a risk-based model. It says that "The roadmap helps to prioritize efforts to achieve compliance, establish milestones, lower the risk of cardholder data breaches sooner in the compliance process". It tells you what should be tackled first, but at no point does it mean you don’t need to put effort into the items at the end of the list. So, when full compliance is the end goal, having clear roles and responsibilities expressed in the security policy is as important as ensuring that Internet-facing web applications are not vulnerable.

 I won’t argue about the importance of specific controls in the standard, but clearly some key deficiencies are directly related to a big chunk of the breaches, so the standard should be tuned to put more emphasis on those controls while allowing organizations to deal with the other items according to their own internal prioritization, planning and even culture. An organization with strong yet informal controls, for example, can only consider those controls in place for PCI after formalizing them, driving resources away from other areas that carry more risk and that might need improvement.

PCI DSS also stifles innovation by forcing organizations to apply a set of "best practices" that could otherwise be replaced by more modern approaches. Imagine a scenario where you are working to improve your controls over data traversing your network perimeter. A lot of interesting approaches and technologies are currently evolving and being discussed in our field, such as application-aware next-generation firewalls, DLP systems, network behavior analysis tools and having an active security monitoring group who can understand what should and what should not be there. However, if PCI compliance is one of your priorities, you’d better put all those things aside and start putting together extensive documentation about ports, protocols and rules in your environment (and keep it updated!). The DSS seems to be written for low-complexity networks, with just a few entry points and a very small number of available services and ongoing connections. You need to have all of them documented and keep that documentation up to date. No wonder the card issuers (i.e. banks), who have far more complex networks than the average merchant, are still trying to keep a healthy distance from PCI.
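For what it’s worth, one way to make that documentation burden more bearable is to keep the ports/protocols/rules inventory as structured data that can be validated, diffed and regenerated, instead of prose in a document. A rough sketch of the idea, with field names and values that are entirely invented (not taken from any PCI template):

```typescript
// Sketch: keep the ports/protocols/rules inventory as structured data so it
// can be checked automatically instead of maintained as prose.
// All field names and values below are invented for illustration.
interface FirewallRuleDoc {
  id: string;
  protocol: "tcp" | "udp";
  port: number;
  source: string;
  destination: string;
  businessJustification: string;
  owner: string;
  lastReviewed: string; // ISO date of the last documented review
}

const rules: FirewallRuleDoc[] = [
  {
    id: "FW-0001",
    protocol: "tcp",
    port: 443,
    source: "internet",
    destination: "dmz-web",
    businessJustification: "customer-facing web application",
    owner: "web-ops",
    lastReviewed: "2012-05-01",
  },
];

// Flag any rule whose documented review is older than a year.
const oneYearMs = 365 * 24 * 3600 * 1000;
const staleRules = rules.filter(
  (r) => Date.now() - Date.parse(r.lastReviewed) > oneYearMs
);
console.log(`${staleRules.length} rule(s) overdue for review`);
```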

Validation also needs to be reviewed. As the reporting instructions are public, organizations are tailoring their compliance efforts to what the QSA will look for, not to meet the intent of the requirements. A lot of documents are created as empty shells and placeholders, just because some processes and procedures have to be documented. The QSA has limited time and resources to go through all those documents during a short engagement (heavy competition among QSAs reduces their ability to charge decently for those assessments), cutting into the time available to check the things that really matter. Can’t the assessment be changed from a control checklist to something more outcome-based? Why not integrate the ASV scanning and pentest requirements into a single continuous assessment framework that checks the outcomes of whatever security processes the organization chooses to put in place, against a set of key defined metrics?

That’s a lot of food for thought. I’m not holding my breath for any exciting changes from the current review cycle. In fact, I’m expecting more of the same: additional layers of controls to keep the compliance wheel of pain running fast. Here we go for another lap.

(I know about compensating controls, scope reduction and other things that can be done to make PCI DSS compliance more "manageable". Although I agree they are useful tools, I don’t think they are enough, nor are they well defined and understood within the QSA community. Today it’s easy for an organization to just hop from one QSA to another looking for someone who "likes" their approach to those items. There are so many different opinions out there that you can always find a QSA who will agree with anything you want to do.)

Wednesday, May 9, 2012

Why does PCI-DSS (and other standards) suck?

From: The Six Enemies of Greatness (and Happiness) - Forbes
Just check item #3:

 3) Committees
Nothing destroys a good idea faster than a mandatory consensus. The lowest common denominator is never a high standard.
Standards like PCI are always created by Committees. Unfortunately, as this nice article says, "the lowest common denominator is never a high standard".

Tuesday, May 8, 2012

Adding context - tech jobs

Professionals starting in network security (or any other specialized IT job) are often concerned about improving their skills and knowledge in networking and in the products and gear they spend most of their time with. Although it’s extremely important to know the technology you work with, it’s also very important to learn at least a little about all the other technologies you may find in the IT environments you’re (and will be) dealing with. Even very basic tasks such as defining or reviewing firewall rules are challenging when there’s no context available. I’m tired of seeing people with stupid hardwired rules in their minds (HTTPS is good; FTP is bad; and so on…) struggling to understand why a specific control is in place, or swallowing stupid justifications such as “we need port 80 open both ways (bi-directional – ugh) for this app to work”, just because they know nothing about any other technology or process that is not directly related to their job descriptions.

Almost all security professionals learn that the Business defines Security, and not the opposite. However, few are able to tell you how to transform that piece of wisdom into practical advice. So here it is: learn about what the organization is doing:

- What do the “business people” do?

- Which applications do they use?

- How do those applications work? What kind of data, architecture and protocols are involved?

- What’s the data flow for the business? What are people’s roles in the business process?

There’s plenty to learn from the other IT silos too, such as:

- What is running on all these servers? What do all these applications and middleware do?

- How are the operations teams doing their jobs? How do they access and connect to servers and applications? Jump boxes? Shared IDs?

Learning about how the organization works is as important as learning more about security. You’ll find out which issues are easy to fix, which process deficiencies will keep spitting out vulnerabilities, and how controls will or will not work. Security is usually not part of those people’s core job descriptions, so don’t expect them to go the extra mile to understand how security should be done in their context. If you want it to work, get that context yourself and apply your security knowledge to it. You’ll be far more effective and, surprisingly, they will listen when you start to sound like you know what they do.