Friday, March 25, 2011

Not so fast about SecurID

There wasn't much point in commenting on the RSA incident, as a lot of good people have already said. There isn't enough information to talk about, and much of the news circulating is just noise or bad opportunistic marketing. However, I've noticed a lot of "assume SecurID is broken" advice over the last two days, and I thought I should explain why I don't necessarily agree with it.
 
First, there's not much to "break" in SecurID. The algorithm was reverse engineered a long time ago, so I have no reason to believe the leaked information relates to the algorithm itself. Only a weakness in the algorithm would make the whole solution permanently useless, and I think such a weakness would have been found by now, whether or not RSA was breached.
 
Without an algorithm weakness, two main possibilities remain. A very unlikely one (although there are some rumours about it) is the existence of a backdoor in the product. If that were the case, a patch to the ACE Server would fix it. It would certainly do huge damage to RSA's reputation, but it's not something that would be very hard to fix. And if that were the case, they would have provided the fix together with the initial breach announcement, as the compensating controls they suggested to clients wouldn't make any sense if a backdoor were involved.
 
The last possibility is related to seed information. The seeds are the biggest secret of the SecurID solution, shared between the ACE Server and the user's token. They could have been compromised in the form of a database of seed/serial number pairs, or as a secret algorithm that generates seeds from serial numbers. The latter would be pretty bad, as it would require replacing every token out there.
 
Anyway, if the issue is related to the seeds, the suggestions RSA made to its clients make sense: they are trying to increase the resistance of the remaining controls in a SecurID implementation. Also, even if the seeds are compromised, an attacker would still have to do all of the following in order to successfully authenticate as a SecurID user (see the sketch after the list):
 
1. Know the correct user identification
2. Know the user's SecurID PIN
3. In most cases, know the user's password (SecurID is usually implemented in addition to an existing password authentication)
4. Know the user's token serial number
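
To make the seed discussion concrete, here is a minimal sketch of how a generic time-based token scheme and its server-side check fit together. This is a TOTP-style construction for illustration only, not RSA's actual proprietary algorithm, and every name and value in it (SEED_DB, USER_TO_SERIAL, the PIN) is a hypothetical assumption:

```python
import hashlib
import hmac
import struct
import time

# All names and values below are hypothetical illustrations.
# Seed database: serial number -> secret seed (normally shared only
# between the token and the ACE Server).
SEED_DB = {"000123456789": bytes.fromhex("3132333435363738393031323334353637383930")}
USER_TO_SERIAL = {"alice": "000123456789"}   # who carries which token
PIN_DB = {"alice": "4711"}                   # the user's SecurID PIN

def tokencode(seed: bytes, t: float, step: int = 60, digits: int = 6) -> str:
    """Time-based code: HMAC the current time window with the seed.
    A generic TOTP-style construction, NOT RSA's proprietary algorithm."""
    counter = struct.pack(">Q", int(t) // step)
    mac = hmac.new(seed, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def server_check(user: str, pin: str, code: str) -> bool:
    """Items 1, 2 and 4 from the list above: even an attacker who holds
    SEED_DB still needs the user id, the PIN and the user->serial mapping
    (and, per item 3, usually a separate password checked elsewhere)."""
    serial = USER_TO_SERIAL.get(user)
    if serial is None or PIN_DB.get(user) != pin:
        return False
    return hmac.compare_digest(code, tokencode(SEED_DB[serial], time.time()))
```

The point of the sketch is the last function: holding the seed database collapses one factor, but the check still fails without the user identification, the PIN and the serial mapping.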
 
I never leave my token where anyone could handle it, so I'm sure its serial number is reasonably well protected. And I haven't even considered the PIN yet, which would require a keylogger or something similar to obtain. Add active monitoring of the RSA authentication logs and you still have considerable resistance to authentication attacks.
 
As we can see, it's still not easy. That's quite different, in my opinion, from "assume it's broken". The solution is certainly less effective now, but that doesn't mean you should treat it as if it weren't there. Something will probably have to be done to bring the security level back to where it was, but your risk assessment will likely indicate that you can wait until RSA provides more information.
 
By the way, I'm not the only one saying that. This post from SANS ISC says exactly the same thing.

Thursday, March 24, 2011

Light Blue Touchpaper » Can we Fix Federated Authentication?

Can we Fix Federated Authentication?

March 24th, 2011 at 11:44 UTC by Ross Anderson

My paper Can We Fix the Security Economics of Federated Authentication? asks how we can deal with a world in which your mobile phone contains your credit cards, your driving license and even your car key. What happens when it gets stolen or infected?

Using one service to authenticate the users of another is an old dream but a terrible tar-pit. Recently it has become a game of pass-the-parcel: your newspaper authenticates you via your social networking site, which wants you to recover lost passwords by email, while your email provider wants to use your mobile phone and your phone company depends on your email account. The certification authorities on which online trust relies are open to coercion by governments – which would like us to use ID cards but are hopeless at making systems work. No-one even wants to answer the phone to help out a customer in distress. But as we move to a world of mobile wallets, in which your phone contains your credit cards and even your driving license, we’ll need a sound foundation that’s resilient to fraud and error, and usable by everyone. Where might this foundation be? I argue that there could be a quite surprising answer.

The paper describes some work I did on sabbatical at Google and will appear next week at the Security Protocols Workshop.


Great paper by Ross Anderson. I like this piece from the first page about SSO:

"There are always systems that just don’t fit. Even in young high-tech firms with everyone trying to pull in the same direction – in short, where there are no security-economics issues of strategic or adversarial behaviour between firms – there are always new apps for which the business case is so strong that exceptions are made to the rules. This should warn us of the inherent limits of any vision of a universal logon working for all people across all systems everywhere."

This is not limited to universal logon; it could also be applied to universal visibility, universal least privilege, universal antivirus coverage, and many others.

Is Risk assessment just change resistance?

An interesting thing about risk management is that presenting risk assessment results will often be seen by the business as reactive, change-resistant behaviour. Let's look at how risk assessment results are usually inserted into the context of a business initiative:
 
1. The business requests something (from IT, operations, product development, whatever)
2. A project or initiative is defined, triggering a risk assessment according to the organization's security policy
3. The risk assessment is conducted and the results are presented to the business for a decision (accept/mitigate/freak out/whatever)
 
So, the interesting aspect of this sequence of events is that, from the business's perspective, the report with the risk assessment results looks like a direct reaction to its request. This frequently creates the impression of "I asked someone to do something and they came back telling me why it shouldn't be done". In other words, resistance to change, one of the biggest sins in business. Isn't it easy to understand why those presenting the risk findings are not seen as "saviours" by the business, as some of us probably picture ourselves?
 
There are some ways to reduce these effects. The first is to avoid FUD and any of those black magic (that's for you, Rothman) style risk measurement methodologies. If your work is already being seen as resistance to change, the worst thing you can do is present results you cannot defend.
 
Another is to raise awareness in the business of the importance and role of the risk management process within the organization. The most important thing about this step is that it must be top-down. Business executives won't buy it unless it's their boss telling them to manage the risks related to their initiatives. You can try, but you won't succeed unless it's properly framed in terms of economic incentives: executives should have as much incentive to manage the risk of their initiatives as to make them successful. In fact, the definition of a successful initiative must include appropriate risk management.
 
The last suggestion is about the roles in the process. It's common to have, for example, "Product" asking "Operations" to do something, and then "Security" going back to "Product" with a risk report. Can you see what's happening? The business (a.k.a. "Product") sees "Operations" as those who will deliver what it wants, and "Security" as those trying to prevent it. How can we fix that? The risk assessment results should not be presented to the business by Security. The risk report should be part of the business case (or whatever artifact is presented to the business for approval), together with all the other costs and expected results. And, of course, it should be presented by those who will make it happen, not by Security alone.
 
We'll always be generating data that brings executives' wishful thinking about their pet projects down to reality. There's not much we can do about that, but we can at least avoid or reduce the impression that we do it just to resist change.

Wednesday, March 23, 2011

Lenny Zeltser on Information Security — 7 Inconvenient Truths for Information Security

7 Inconvenient Truths for Information Security

Information security policies and corresponding controls are often unrealistic. They don’t recognize how employees need to interact with computer systems and applications to get work done. The result is a set of safeguards that provide a false sense of security.

This problem will continue to grow due to consumerization of IT: the notion that employees increasingly employ powerful personal devices and services for work. This trend makes it easier for the employees to engage in practices that make their life and work more convenient while introducing security risks to their employer.

Corporate IT security departments need to recognize that employees:

  • Use personal mobile devices and computers to interact with corporate data assets.
  • Take advantage of file replication services, such as Dropbox, to make access to corporate data more convenient.
  • Employ the same password for most corporate systems and, probably, personal on-line services.
  • Write down passwords, PINs and other security codes on paper, in text files and email messages.
  • Click on links and view attachments they receive through email and on-line social networks.
  • Disable security software if they believe it slows them down.
  • Don’t read security policies or, if they read them, don’t remember what was in them.

These are inconvenient truths that, if acknowledged by organizations as being common, can be incorporated into enterprise risk management discussions. Doing this will have strong implications for how IT security technologies and practices are configured and deployed.

Lenny Zeltser

This is a very interesting post from Lenny Zeltser. These aren't just behaviours we keep trying to stamp out even though they are a plain representation of user (and business) needs; they should be treated as baseline assumptions for any security strategy. By doing that you'll be building security that doesn't rest on weak, assumed controls, and that has a better chance of succeeding when those controls fail.

So, try this as an exercise: assume all the items listed by Lenny are true for your environment. Think about how effective your remaining controls would be against the most common threats; and, finally, identify what you could do to compensate for any weaknesses you find.

Keep that list. That will probably be more valuable than what you can get from a lot of complex and expensive "strategy exercises" out there :-)

Tuesday, March 22, 2011

Deputies

It was nice to read two tweets by Richard Bejtlich today about the importance of having a "second in command":

@taosecurity BNET on "Why you need a second in command." .mil/.gov get this, have "deputy" roles. Agree, if you lead you need help.
@taosecurity Deputies are great for sanity checks, like telling you that you're making a mistake or that you should consider other aspects of an issue.
I can tell from my own experience that having someone you trust as a deputy or second in command is extremely important, and I want to emphasize the importance of the "sanity check" role. You need to be sure that your second in command feels comfortable enough to tell you when they believe you are making a mistake. It worked for me when I had good friends working for me. They were not working for me only because they were my friends; they were also some of the professionals I had (and still have) the most respect for. The friendship helped reduce the hierarchy effect that normally makes people uneasy about disagreeing with the boss, so they were always helping me see when I might be overlooking something, or even doing something too stupid to work.

Surround yourself with good people, and nominate a second in command who knows they can call bullshit to your face whenever necessary. It might save you from yourself someday.

Friday, March 18, 2011

the most important infosec word

VISIBILITY

 

That's right. Visibility is the most important word in information security.

 

You cannot manage the risk of what you don't know.

You cannot defend what you don't know.

You cannot react against what you don't know.

 

I could make this list go on forever, but you get the point. I could quote Sun Tzu, Galileo, Machiavelli and many others, but I don't think we need their insights to see something this clear.

 

Before putting more effort into additional hardening, ask yourself how much visibility you have into your organization, environment, network and apps. Prevention efforts are not always the top priority.

Friday, March 4, 2011

The key issue on current risk measurement?

During a presentation about the current risk measurement discussions in our field, I realized (yeah, not enough to call it an "epiphany" :-)) that the key issue with the current methods is the complete lack of calibration and feedback.
 
Most organizations don't have any process to collect data and use it to verify their risk measurement results. Maybe the High/Medium/Low stuff could work if there were an ongoing process to make it reflect the business's risk expectations and to tune the likelihood and impact values and bands according to what is observed in reality.
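
Here's a minimal sketch of what such a feedback loop could look like, assuming hypothetical probability bands behind the High/Medium/Low labels (nothing here is a standard; every organization would define and tune its own bands and review periods):

```python
from collections import defaultdict

# Hypothetical probability bands behind the H/M/L labels (an assumption;
# each organization would have to define and tune its own).
BANDS = {"L": (0.00, 0.10), "M": (0.10, 0.40), "H": (0.40, 1.00)}

# Assessment history: (assigned label, did the event occur in the period?)
history = [("H", True), ("H", False), ("M", False), ("M", True),
           ("L", False), ("L", False), ("L", True), ("H", True)]

def calibration_report(history):
    """Compare each label's observed hit rate to the band it claims to mean."""
    hits, totals = defaultdict(int), defaultdict(int)
    for label, occurred in history:
        totals[label] += 1
        hits[label] += occurred
    for label, (lo, hi) in BANDS.items():
        if totals[label] == 0:
            continue
        observed = hits[label] / totals[label]
        status = "OK" if lo <= observed <= hi else "RECALIBRATE"
        print(f"{label}: claimed {lo:.0%}-{hi:.0%}, observed {observed:.0%} "
              f"over {totals[label]} ratings -> {status}")

calibration_report(history)
```

If the observed frequency for a label keeps falling outside the band that label is supposed to mean, the band (or the assessors) need recalibrating.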
 
I've never heard of any organization doing that; I'd really love to see the results if anyone out there is.

Thursday, March 3, 2011

The great IT risk measurement debate

This is one of the best information security pieces I've read in years. If you have any interest in risk measurement, risk management or security decision making, go read this very good pair of articles (in fact, one piece split in two) with the transcription of a debate between Alex Hutton and Doug Hubbard. It's a very good indication of what's currently going on in this field and of the revolution (evolution?) we are experiencing in risk management. Stop implementing that GRC crap for a minute and read this.

RSA Conference: Ben Rothke: Security Reading Room: Everything I need to know about PowerPoint, I learned from Adi Shamir

One of the highlights of the annual RSA conference is the presentations from Adi Shamir, the S in RSA. For those who don't know who he is, let me put it this way: if there were a Mount Rushmore for information security, he would be on it.

 

With that, Shamir along with Ronald Rivest and Len Adleman were awarded the RSA Conference Lifetime Achievement Award at the conference this year.

 

Shamir is a most unassuming person.  If you saw him get out of a cab, you might think he was the driver.  His ensemble for a talk is a t-shirt, running shoes and jeans.  He does not have to dress for the part; his accomplishments do that for him.

 

Shamir’s presentations are more unassuming than he is. No clip art, no flashy images and certainly no animation. I don’t think that he has changed his font in over a decade. And therein lies the rub. Shamir is so overwhelming with content that his presentations require zero flash or animation. People come to his talks knowing that he is full of form and substance, with zero hype or funky PowerPoint animation.

 

Most of us can’t bring to the presentation the same firepower and brainpower that Shamir does.  Nonetheless, what we can all learn from him is to focus more on the content and substance, and not on the font.

After attending a lot of useless sessions at RSA, I couldn't agree more with Rothke on this one. If you have content to show, the wrapping doesn't matter.

Wednesday, March 2, 2011

That was a real Fire starter!

This week began with some real fun caused by Mike Rothman's Firestarter post, "Risk Metrics Are Crap". A lot of people jumped up and down on the post, and there were plenty of sharp complaints about his point of view. Well, I must say I completely agree with what he says in that post, and I also think that most of those criticizing it haven't really understood what he's trying to say.
 
But what's the real problem with risk metrics? IMHO it's twofold. First, the numbers are too hard to obtain. Hey, before someone throws the DBIR report at me: I'm talking about all the data required to build a reasonable risk metric. There's plenty of data out there for several aspects, but almost never everything you need for a good enough number (I'm not even talking about laser-precision stuff). I'll even make Mr. Hutton mad by mentioning the Black Swan effect :-). This is often linked to the challenge of identifying the likelihood of incidents, but I don't think that's where the problem lies. We can derive a lot of reasonable information from the Verizon report for that: we can see that zero-days are not the monster some FUD-based vendors want you to believe in, and it's quite obvious that custom malware and SQL injection attempts will knock on your door. What is really challenging in generating a good risk number is the impact assessment. I don't even need to talk about ALE; the key issue is predicting how a single incident will impact your business. Think Heartland, HBGary, the WikiLeaks cables: the impact of those breaches was probably far beyond anything those organizations had foreseen.
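
For reference, here is the textbook ALE arithmetic the paragraph alludes to, with made-up numbers; the point is that a single outlier incident can dwarf the whole estimate:

```python
# Classic annualized loss expectancy: ALE = SLE * ARO (textbook formula;
# the numbers below are invented for illustration).
sle = 250_000   # hypothetical single loss expectancy, in dollars
aro = 0.2       # hypothetical annualized rate of occurrence (once in 5 years)
ale = sle * aro
print(f"ALE = ${ale:,.0f} per year")   # ALE = $50,000 per year

# A Heartland-scale breach, with losses reportedly in the hundreds of
# millions, blows straight past any such estimate; that is exactly the
# impact-assessment problem described above.
```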
 
Second, context and scope. Those aspects vary so much that it's almost impossible to apply the same data to two different assessments. Here's just a short list of factors that will make likelihood and impact different for the same issue at two different organizations: size, demographics (different countries, for example), line of business, public image, technologies in use, internal organization and IT maturity, and the adversaries involved. As for scope: with the level of interconnectivity and interdependency in organizations today, how can we clearly determine the scope of a risk number? How much can numbers from "adjacent scopes" interfere with yours? That's extremely hard to tell.
 

Please note that at no point did I say it's impossible to get those numbers. I've probably said "almost impossible", and that's the catch: in theory they are fine, but I believe the cost of getting good enough numbers will almost always be far greater than the real benefit.

 
One of the most important points in Mike's post is that he's not saying that metrics are crap. He is saying that RISK metrics are crap. He even says that security metrics are important:
 
"BTW, I’m not taking a dump on all quantification. I have always been a big fan of security (as opposed to risk) metrics. From an operational standpoint, we need to measure our activity and work to improve it. I have been an outspoken proponent of benchmarking, which requires sharing data (h/t to New School), and I expect to be kicking off a research project to dig into security benchmarking within the next few weeks."
 
Mike also says it's important to assess risk. He clearly states that in the post:
 
"That said, I do think it’s very important to assess risk, as opposed to trying to quantify it. No, I’m not talking out of both sides of my mouth. We need to be able to categorize every decision into a number of risk buckets that can be used to compare the relative risk of any decision we make against other choices we could make. For example, we should be able to evaluate the risk of firing our trusted admin (probably pretty risky, unless your de-provisioning processes kick ass) versus not upgrading your perimeter with a fancy application aware box (not as risky because you already block Facebook and do network layer DLP)."
 
 
That's the main point: let's focus on the decision process, not on generating risk metrics. Risk metrics are often created to support decisions about security investments and prioritization, and I believe it's quite limiting to assume those decisions can only be made from absolute numbers. Executives (in fact, almost all professionals) make decisions without them all the time. I'm not saying we shouldn't try to do evidence-based decision making; I'm saying that creating fantasy metrics disguised as evidence doesn't work.
 
Decision making can draw on many factors other than risk metrics. It can use those good security metrics to prioritize threats and to gauge the efficiency and effectiveness of controls. The cost of controls and measures (obvious, obvious, obvious), how easily they integrate with the organization's IT environment, and resiliency (extremely important: how well does the control adapt to exceptions and changes?) are just some of the factors that can be considered; a sketch of what such a comparison might look like follows below.
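
As an illustration (not a prescribed method), a simple weighted-scoring comparison over factors like those can be enough to rank options without ever producing a "risk number". All control names, factors, weights and scores below are made up:

```python
# Factors and weights are hypothetical; each organization would pick its own.
WEIGHTS = {"cost": 0.30, "integration": 0.25, "resiliency": 0.25, "coverage": 0.20}

# 1-5 scores per factor for each candidate control (higher is better;
# for cost, a higher score means cheaper).
controls = {
    "app-aware firewall": {"cost": 2, "integration": 3, "resiliency": 2, "coverage": 4},
    "log monitoring":     {"cost": 4, "integration": 4, "resiliency": 4, "coverage": 3},
    "custom DLP rules":   {"cost": 3, "integration": 2, "resiliency": 3, "coverage": 3},
}

def score(ratings):
    """Weighted sum of factor scores: a relative ranking, not a 'risk number'."""
    return sum(WEIGHTS[factor] * ratings[factor] for factor in WEIGHTS)

# Print the candidates from best to worst overall score.
for name, ratings in sorted(controls.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(ratings):.2f}")
```

The output is a relative ranking to feed the decision process, which is exactly the point: comparison, not quantification.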
 
Again, not all decisions are number-based! Look at how many business decisions are made with simple tools such as a SWOT analysis, not to mention the more elaborate techniques out there.
 
(We can even stick to a numeric approach by applying some number-crunching magic powder, if we have enough data (VERIS will help us with that), but even that doesn't mean we'll generate the magic "RISK" number.)
 
In short, security decisions must be evidence based, but that evidence is most likely NOT a quantified risk metric.