Monday, January 31, 2011

Virtual desktops and incident response

I've just noticed an odd silence at the intersection of the push for virtual desktop environments and incident response. We went through strong growing pains around incident response and finding/disabling desktops during security incidents. With the massive push for virtualization, not only on the server side but also on the client side, a lot of what has been done on that front will have to be revisited. What if you have just identified a compromised desktop VM in a big pool of virtualized desktops? Should you just kill that VM? Is there any risk of a hypervisor breach? If so, how do you deal with it? How do you kill a bunch of desktops without causing massive user pain? Oh, you may think those cool vMotion-like technologies will help, but can they also make things worse by transporting compromised VMs to "clean" pools?
A lot of new and interesting questions to work on. Is anyone out there working on those? What are the incident response procedures for compromised virtual desktops?
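One possible shape for such a procedure, sketched below. The entire `HypervisorClient` API is hypothetical, invented for illustration; real platforms (vSphere, XenDesktop, etc.) expose comparable operations under different names. The idea is to pin the VM against live migration first, then isolate and preserve it rather than kill it.

```python
# Hedged sketch of a desktop-VM quarantine workflow. The whole
# HypervisorClient API is hypothetical; substitute your platform's
# real management calls.

class HypervisorClient:
    """Stand-in for a real hypervisor management API (hypothetical)."""
    def __init__(self):
        self.actions = []

    def disable_migration(self, vm_id):
        # Pin the VM so live migration can't carry it to a "clean" pool.
        self.actions.append(("pin", vm_id))

    def disconnect_network(self, vm_id):
        # Cut the virtual NIC instead of powering off, so memory
        # state survives for forensics.
        self.actions.append(("disconnect", vm_id))

    def snapshot(self, vm_id, label):
        # Preserve disk and memory state before any remediation.
        self.actions.append(("snapshot", vm_id, label))


def quarantine_vm(client, vm_id):
    """Isolate a compromised desktop VM without destroying evidence."""
    client.disable_migration(vm_id)
    client.disconnect_network(vm_id)
    client.snapshot(vm_id, "ir-evidence")
    return client.actions


print(quarantine_vm(HypervisorClient(), "desktop-042"))
```

The ordering matters: isolating before snapshotting keeps the attacker from reacting, and never powering off preserves volatile evidence.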

Friday, January 28, 2011

Banks may soon require new online authentication steps - Computerworld

Computerworld - The Federal Financial Institutions Examination Council (FFIEC) could soon release new guidelines for banks to use when authenticating users to online banking transactions.

The new guidelines will clarify the FFIEC's existing guidelines on the subject and more explicitly inform banks about what they need to do to bolster online authentication, said Avivah Litan, an analyst at Gartner.

Litan and others recently met with the FFIEC's IT subcommittee to discuss the updates. "They have been talking about it and debating it for a while," Litan said. "My understanding is that [the subcommittee meeting] was the last step in the process before they issue the new guidance."

The FFIEC is an interagency council that develops standards for the federal auditing of financial institutions by bodies such as the Federal Reserve System and the Federal Deposit Insurance Corp. (FDIC).

In 2005, it issued a set of guidelines, titled "Authentication in an Internet Banking Environment." They called on banks to upgrade their single-factor authentication processes -- typically based on user name and passwords -- with a stronger, second form of authentication by the end of 2006.

The guidance left it largely up to the banks to choose whatever second form of authentication that they felt was the most appropriate for their needs. The FFIEC listed several available authentication technologies that banks could choose from, including biometrics, one-time passwords and token-based authentication.
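Of the technologies listed, one-time passwords are the easiest to make concrete. Below is a minimal sketch of HOTP, the counter-based OTP algorithm from RFC 4226 that underlies many hardware tokens; it reproduces the RFC's published test vectors.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Compute an HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Test vectors from RFC 4226, Appendix D (secret "12345678901234567890")
secret = b"12345678901234567890"
print(hotp(secret, 0))  # → 755224
print(hotp(secret, 1))  # → 287082
```

Time-based OTP (TOTP) is the same construction with the counter derived from the current time window.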

Since the guidelines were issued, many banks have added a second authentication layer for users when conducting certain kinds of online transactions. However, in many cases, the added measures have been largely cosmetic in nature and have done little to bolster authentication in the way the FFIEC had originally intended, Litan said.

"Obviously, some of the banks thought that it was enough if they simply added cookies or challenge/response-based authentication," Litan said. "What has happened is that the FFIEC has realized that some banks need to be told in black and white what they need to do."

The FFIEC did not immediately respond to Computerworld's requests for clarification on the purported release of the new guidelines.

News of the proposed revisions comes amid growing concerns about the ability of cyber criminals to circumvent the existing authentication mechanisms used by banks for online transactions.

Over the past two years there has been a string of attacks, mostly against small and medium-sized businesses, by cyber criminals using stolen banking credentials to plunder corporate accounts.

Such account takeovers have cost U.S. businesses in excess of $100 million since 2008, according to the FBI.

Organizations such as NACHA - The Electronic Payments Association have warned financial institutions about such attacks and said that much of the loss has resulted from the relative lack of strong authentication procedures, transaction controls and "red flag" reporting capabilities.

Such attacks have also highlighted the need for banks to install stronger transaction-monitoring controls and fraud-alerting systems, analysts have said in the past. It's unclear, though, whether the upcoming FFIEC guidelines will call for such controls.
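To make "transaction controls" concrete, here is a minimal sketch of the kind of rule such monitoring systems apply. The thresholds and field names are my own illustrative assumptions, not any bank's actual controls.

```python
# Illustrative transaction-monitoring rule: flag transfers that are
# unusually large or that go to a payee the account has never used.
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    payee: str
    amount: float

def flag_suspicious(tx, known_payees, amount_limit=10_000.0):
    """Return the list of reasons a transfer looks suspicious (empty if none)."""
    reasons = []
    if tx.amount > amount_limit:
        reasons.append("amount exceeds limit")
    if tx.payee not in known_payees.get(tx.account, set()):
        reasons.append("new payee")
    return reasons

known = {"acct-1": {"utility-co"}}
print(flag_suspicious(Transaction("acct-1", "offshore-x", 25_000.0), known))
# → ['amount exceeds limit', 'new payee']
```

Real systems layer many such rules with velocity checks and scoring, but the principle is the same: alert on deviations from the account's established behavior.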

Gartner too has warned about how authentication measures such as one-time passwords and phone-based user authentication, once considered among the most robust forms of security, are being increasingly circumvented by cyber criminals.

Jaikumar Vijayan covers data security and privacy issues, financial services security and e-voting for Computerworld.

I wonder if the new guidelines will be based on and/or point to empirical evidence of the efficacy of the proposed controls. Otherwise, it'll be just more security theater.

What drives the RA? Need or Fashion?

In presentations I ask people what they do before going on holiday to 'secure' their house. They call out things like 'turn off the gas', 'cancel the milk/post/newspaper', 'lock the doors and windows', 'board the dog'. This is all baseline sensible stuff. We see it because we are used to being in the physical world, but the e-world is often invisible.

What do I mean, invisible?

If I have 50,000 books I can see them; on the couch, the bed, shelves, tables, chairs …

If I have 50,000,000 ebooks, what do I see? Exactly the same box as if I had only one.

So all those basic precautions are ‘out of sight, out of mind’.

And then there's the 'specialist' ability to perceive what others don't.
I'm sure medical doctors could tell us of the many conditions they can spot just by watching someone walk by and looking at their face, the whites of their eyes, the colour and texture of their skin. My optometrist deals with children who are not able to respond to questions about eye tests in the way that adults can, but he can tell the prescription they need by looking in their eyes.

All of which would be meaningless - just another face in the crowd - to the rest of us (... oh, those fatty deposits around the edges of your eyelids, Anton ...). But it's the sort of skill the real professional has.

So many things are 'obvious' to us InfoSec professionals, where we infer causality and risk, but not to the CIO, or even to the IT staff. Why? Because it's our domain of knowledge.

We may argue among ourselves, but that's true of any profession.
That's beside the point, however, unless it gives clients the impression that this is all nonsense.

Which end of the egg you crack isn’t what matters.
Having a good breakfast is what matters.

Which gets back to the point Donn Parker makes repeatedly.

Unless you have a good - “Context is Everything” - baseline in place, no method of RA is useful. While you are off pondering your RA - which isn’t, as Donn points out, going to tell you what controls to install - the Bad Guys(tm) have moved in and moved out your crown jewels.

A poor attitude toward IT risk on the part of the BoD (or BoG in some other countries) seems quite common. You present a Risk Analysis and they say "We'll accept the risk." Part of this is the difference in attitude to what they see as risk: the business model vs. the infosec model. Part of it is 'emotional'; they don't, as Donn points out, want to hear or think about the potential for bad things to happen. They are businessmen; they are concerned with profit and growth and opportunity and market and all those B-school things.

Part of it is that we in InfoSec are not doing a good enough job communicating the issues.

What I'm disturbed by is the way that 'standards bodies' - NIST, ISO, and now, I see, gaining ground at ISACA - are MANDATING Risk Analysis. In particular, mandating it as a prior step rather than as a "gap analysis" after establishing a well-considered BASELINE.

Great post by Anton. It aligns well with Rothman's "P-CSO" approach too. We cannot step onto the never-ending hamster wheel of pain of Risk Analysis/Assessments before ensuring that at least the minimum is in place.

I would go even further and say we don't need RA for the overall security strategy and operation. We need it for specific projects and scopes, such as applications. I've just read Stephen Hawking's new book, "The Grand Design". He says that we'll probably never end up with the "Unified Theory of Everything" most physicists are still looking for, but with a set of theories (models), each one more useful in a specific context. I see the same thing for security. We'll probably end up using one decision-making model for organization-wide security and others for specific scopes, such as single environments, networks, services or applications. An example of a "set of models" to be used by an organization in its security activities could be:

- Organization level - Baseline-based security
- Project level - Risk-based model
- Security Operations - Threat based model

I don't know if this is the best set, or even if these are the best scopes for each component of it, but it illustrates my point: there's no single model for security decisions, and we should consciously choose the models we'll use.

Tuesday, January 25, 2011

Congratulations, São Paulo

Non ducor, duco! (I am not led; I lead.)

New blog engine

Yep, you've probably noticed some weird stuff going on with my feed and blog. I'm moving from my own WordPress installation to Posterous. I'm tired of keeping WordPress up to date, and the risk of using plugins full of silly vulnerabilities is just too great. I'm losing a bit of flexibility, but Posterous allows me to link a series of social networks to my blog and also automatically manages the inclusion of pictures, videos and bookmarks. I decided to give it a try.

By the way, the direct URL for this blog is now