Wednesday, August 26, 2009

Sign Seth Hardy's petition for (ISC)2 Board of Directors ballot

Folks, this is serious and important. Many of us have complaints about the way the CISSP certification is modeled, the quality of its questions, and how it is interpreted by the industry. Seth Hardy is asking for support to be included in the (ISC)2 Board of Directors election ballot. He needs 633 signatures on his petition in order to be included. Here are Seth's objectives for joining the Board:

I want to make the certification exams offered by (ISC)2 more respected on a technical level. While I understand that the exams are not focused on technology -- "Security Transcends Technology", even! -- this is not a valid reason for exams that have outdated, misleading, or incorrect material.

I want greater accountability from (ISC)2 to its members. This is focused on (but not limited to) exam procedure and feedback. If there is a problem, it should be acknowledged and addressed in a reasonably transparent manner.

I want the purpose and scope of the (ISC)2 certifications to be well-defined. The CISSP certification is considered the de facto standard for technical security jobs; if it is not designed to do this, there should be clear guidelines from (ISC)2 on where it is appropriate and inappropriate to be gauging the skill and qualifications of a job applicant depending on whether they have the certification.

You can sign his petition at


Friday, August 21, 2009

On the technical details of the breaches

We finally have some information about what really happened in the Heartland, Hannaford and 7-Eleven breaches.

Even if the initial SQL injection came in over an SSL connection (my assumption is that there was no initial reaction due to lack of detection), the rest of the attack should still have been easy to detect. What are these companies doing about network security monitoring and intrusion detection? It seems to me that this is a point where the current PCI-DSS requirements might not be sufficient. Requirements 10, 11.4 and 11.5 are good candidates for improvement.
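To make the detection point concrete, here is a minimal sketch of the kind of log review that requirement 10 implies. It is illustrative only, not a real IDS: it greps (already decrypted) web server access log lines for a few common SQL injection indicators. The pattern list and log format are my own assumptions; real monitoring would use something like Snort signatures or a log management platform.

```python
# Toy log-scanning rule: flag access log lines containing common SQL
# injection indicators. Patterns and log format are illustrative assumptions.
import re

SQLI_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"union\s+select",      # classic data-extraction injection
        r"or\s+1\s*=\s*1",      # tautology-based auth bypass
        r"xp_cmdshell",          # MS SQL command execution
        r";\s*drop\s+table",    # destructive injection
    )
]

def flag_suspicious_requests(access_log_lines):
    """Return the log lines that match any SQL injection indicator."""
    return [
        line for line in access_log_lines
        if any(p.search(line) for p in SQLI_PATTERNS)
    ]
```

Even a crude filter like this, run daily over the logs, would surface the noisy probing that typically precedes these breaches.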


Thursday, August 20, 2009

Good risk management leads to Compliance?

This is a quite logical line of thought, but there is one catch. Not all regulations are created to reduce risk for the party that is responsible for applying the controls and undergoing compliance validation. Think about PCI-DSS compliance by merchants. It tries to reduce risk for card brands, issuers and acquirers by forcing the key point of compromise (the merchants) to apply the proper controls. However, the cost for merchants to apply those controls is higher than the risk reduction they get. That's why fines are usually established by regulating bodies: to artificially increase the risk to the entity responsible for applying the controls. If this "manipulation of the risk economy" is not done properly, the "good risk management leads to compliance" concept does not work.


Robert Carr, PCI, QSAs...

I tried to resist posting about this latest discussion. For those who are not aware of it, a very quick overview:

  1. Payment processing company (Heartland) had a breach, leaking thousands of credit card records
  2. Heartland's CEO complained that the company went through the regular PCI-DSS audit and the QSA did not point out the issues related to the breach
  3. The security industry went mad about his complaints: "compliance is not security", "compliant at that time doesn't mean always compliant", "PCI-DSS is just a set of minimum requirements", a QSA report is based on information the assessed company provides in good faith, etc, etc, and finally, "he should know all that".
I agree with my peers on almost everything said in #3, but I'd like to point out some issues here. First, there is a kind of "cognitive dissonance" about PCI-DSS in our industry. It is sold (not by everybody, I must say) to high-level executives as the best thing since sliced bread for breach risk reduction, but when something happens we promptly start saying that it is just an initial step in a longer journey, that it is composed only of minimum requirements, and so on. Think for a while about all the things you have heard people say while briefing executives about PCI-DSS and trying to get a budget to implement the requirements; have they always made clear all the limitations of PCI in terms of risk reduction?

I'm trying to see this episode through my "CEO glasses". I imagine what I would do if someone came to me asking for money to implement requirements from a regulation that would do little to reduce my risk; wouldn't the standard sound worthless to you? Also, I need to hire a company, trained by the very organization that created the standard, to tell me if I'm in compliance with it. Assuming that I did that with the best intentions, and provided my CSO with all the resources necessary to stay in compliance rather than just be compliant at audit time, shouldn't I assume that if a breach occurs it's valid to verify whether the breach was caused by conditions that should have been identified by the auditors? And, in that case, that they share the responsibility?

I'm not necessarily saying that this is right or wrong, just that it seems very reasonable to me that CEOs would follow this line of thought. To be honest, I'm not the only one thinking like this. This post from the New School of Information Security blog follows the same line.


Friday, August 14, 2009

Don't worry about security reputation IF...

There is an ongoing discussion in some forums about the "fallacy" that the damage to an organization's security reputation after an incident is not as bad as security professionals tend to say. This is based on this post from Larry Walsh.
I'm sure there is a lot of exaggeration about the effects of an incident. Some businesses tend to feel the effects of an incident more than others, for instance. We can tell that the retail business can survive an incident pretty much unharmed, as we saw with TJX and so many others. But what about payment services companies?
The last two examples are really interesting: CardSystems and Heartland. CardSystems is out of business because of its incident. Heartland is surviving, but take a look at their share price:

The effects of the incident (see that big drop in January?) are clear, and it will take time to recover from them. The company is spending a lot of money to rebuild its credibility; there is a real impact on the value of the organization. One can argue that part of the impact is due to the financial risk from litigation and fines, not to reputation alone. That's true, but I'm sure that even without considering that part we would still see considerable impact.
Can the impact be zero? Yes, it can, but it depends on a series of factors, such as the organization's business, the details of the incident (what type of information leaked, how it happened) and how the organization dealt with it.


Monday, August 10, 2009

These are the vulnerabilities I'm worried about

For those who are addicted to vulnerability information feeds, you are probably already aware of the XML Libraries data parsing vulnerabilities. This is the kind of vulnerability that creeps me out. When you've got vulnerabilities related to an easily identifiable piece of software, like "Windows 2008", "Firefox 3.5" or "Java Runtime Environment 6", it is easy to understand whether you are vulnerable or not. When the issue is in libraries, libraries that are used everywhere, the thing becomes a nightmare. You are now relying on the ability of all your software providers (COTS software and "tailored" stuff) to identify the usage of those libraries in their products, and also on the ability of your developers to do the same. Does your vulnerability management process include a procedure to check with developers whether they are using vulnerable libraries? Do you track libraries in those processes too? I haven't seen that being done out there.

There are lots of file scanning technologies deployed everywhere: antivirus, content discovery, DLP. Can we leverage those technologies to look for the presence of vulnerable libraries? I wonder if someone is already doing that...
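The idea of scanning files for vulnerable libraries is simple enough to sketch. Assuming you can gather hashes of the known-bad library builds from your inventory or vendor advisories, a sweep of the filesystem looks roughly like this; the digest in the table below is a placeholder, and the extension list is an assumption about what counts as a "library" in your environment.

```python
# Hypothetical sketch: walk a directory tree and flag files whose hash
# matches a known-vulnerable library build. Digests here are placeholders.
import hashlib
import os

# Assumption: SHA-256 digests of vulnerable library builds, collected from
# your software inventory or vendor advisories.
VULNERABLE_DIGESTS = {
    "0" * 64: "example: vulnerable libfoo 1.0 (placeholder digest)",
}

LIBRARY_EXTENSIONS = (".dll", ".so", ".jar", ".dylib")

def sha256_of(path):
    """Hash a file incrementally so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def find_vulnerable_libraries(root):
    """Yield (path, description) for every library file with a bad digest."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith(LIBRARY_EXTENSIONS):
                path = os.path.join(dirpath, name)
                digest = sha256_of(path)
                if digest in VULNERABLE_DIGESTS:
                    yield path, VULNERABLE_DIGESTS[digest]
```

Exact-hash matching misses statically linked or recompiled copies, which is precisely why leveraging smarter scanning engines (antivirus, DLP) with signature-style matching would be interesting.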

Friday, August 7, 2009

Risk intuition and security awareness

Schneier has published a very good post on risk intuition and risk perception in general. This part was particularly interesting:

"[...] I listened to yet another conference presenter complaining about security awareness training. He was talking about the difficulty of getting employees at his company to actually follow his security policies: encrypting data on memory sticks, not sharing passwords, not logging in from untrusted wireless networks. "We have to make people understand the risks," he said.


"Fire someone who breaks security procedure, quickly and publicly," I suggested to the presenter. "That'll increase security awareness faster than any of your posters or lectures or newsletters." If the risks are real, people will get it."

He is totally right about it. Employees very quickly perceive the organization's posture toward its own rules. Everyday decisions are usually based on personal risks, not on risks to the organization. The employee is thinking mostly about the risk to his performance and his job, not to the company itself. If people start to be punished for security policy violations, this "personal risk" starts to be considered in decisions like forwarding internal mail to external accounts or sharing passwords.

I have had the opportunity to witness changes in people's behaviour caused by changes in management posture. In one of these cases, a group of developers used to share passwords among themselves to "keep things running while they are away", and were encouraged by their manager to do so. They immediately changed this behaviour as soon as that manager was publicly reprimanded by his director for promoting bad security practices and warned that the practice would be formally punished if identified again.

The other case, at the same organization, was related to prohibited content being accessed on the Internet. We didn't have content filtering at the time, but using some simple Perl scripts and the proxy logs I was able to trigger the process of warning managers about abuse by the biggest offenders. The actions taken by those managers (strongly encouraged by higher management) in response to the warnings triggered a huge change in behaviour across all users, which could be clearly seen in the next month's logs. People realized that there was a real risk related to that behaviour, so they changed it. An interesting fact about this case is that some users went the other way and started using things like proxy websites to avoid the controls. The same mechanism (reporting users who did that) that triggered this behaviour was also used to reduce it. Those users were punished, and the message that Internet access was being monitored and that attempts to abuse it would be punished was clearly received.
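The "simple Perl scripts" I mention above amounted to little more than counting hits per user and sorting. A rough Python analogue, with an assumed space-separated log format of `user url` (adjust the field indices to your proxy's actual format):

```python
# Minimal "top offenders" report from proxy logs, analogous to the simple
# Perl scripts described above. Log format (user, url fields) is assumed.
from collections import Counter

def top_offenders(log_lines, flagged_domains, n=10):
    """Count hits to flagged domains per user; return the top n offenders."""
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 2:
            continue  # skip malformed lines
        user, url = fields[0], fields[1]
        if any(domain in url for domain in flagged_domains):
            counts[user] += 1
    return counts.most_common(n)
```

The output is exactly what a manager needs to act on: a short, ranked list of names, not a pile of raw logs.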

So, if you want to know the best investment in security awareness: real punishment of violations. Change the employee's personal risk/reward equation.