Thursday, January 29, 2009
Good example of flawed process
I've just read about a Unix engineer from Fannie Mae being sued for trying to deploy a time-bomb script on their servers after being fired. The guy was able to access the servers after being fired, so it's a very good example of a flawed termination process. An interesting thing here is that he was a contractor, so what probably happened (and I'm just speculating here, based on what I've seen before) was that they had a process for doing that for employees but not for contractors. Here is strong evidence for that:

"[...] access to Fannie Mae's computers for contractors' employees was controlled by the company's procurement department, which did not terminate Makwana's computer access until late in the evening Oct. 24." (he was fired on Oct. 11)

People with access and privileges are people with access and privileges, no matter whether they are employees or contractors. Always verify that you are not leaving one of those groups out of your security procedures, from background checks to termination.
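One control that catches this kind of gap is a periodic reconciliation between termination records (from both HR and procurement) and accounts that can still log in. Here is a minimal sketch of the idea; all names are hypothetical and the inline data stands in for the real HR and directory feeds:

```python
# Hypothetical sketch: flag accounts still active after the person's
# termination date, covering employees AND contractors in one pass.
from datetime import date

# In practice these would come from HR, procurement, and the directory service.
terminations = {          # person -> termination date (employees and contractors)
    "jdoe": date(2008, 10, 11),
}
active_accounts = {"jdoe", "asmith"}

def overdue_revocations(today: date) -> list[str]:
    """Return accounts that should have been disabled by now but weren't."""
    return [user for user, fired_on in terminations.items()
            if user in active_accounts and fired_on <= today]

print(overdue_revocations(date(2008, 10, 24)))  # ['jdoe'] - access never cut
```

Run daily, a check like this would have flagged the account within a day of the firing instead of almost two weeks later.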
Tuesday, January 27, 2009
Heartland and PCI
Martin McKeay, Mike Dahn, Anton Chuvakin and a lot of others are talking about the impact and/or the meaning of the Heartland breach for PCI. It has raised the debate about compliance versus security, with valid points on "doing security first" and "security and compliance have only a few points in common". I agree with both, but there is also something else that's not being mentioned.
PCI and regulations in general are usually written to address the issues that are most common and carry the most risk. They are also built to fit most of the target organizations. That means that every organization has its own particular risks and characteristics that may be very important security concerns but are not necessarily addressed by the standard. Addressing everything for everybody in the standards would make the cost of compliance AND validation huge, far beyond the scale of reasonable costs for risk mitigation.
There is a way to solve that by building risk-management-based standards, like ISO 27001, but they are usually more expensive to implement (and to validate). Also, those standards work very well for dealing with risks to the organization, not to third parties (like cardholders), though treating audit issues and fines as risks themselves can help fix this "glitch". Honestly, that's too complicated for me; I don't believe that the results from implementing those risk management systems are proportional to the costs.
If both ways of writing (and using) regulations are flawed, what are our alternatives? I'm still not sure, but I think that maybe a mixed approach could bring better results. I also think that threat detection is considerably underestimated and could be improved by forcing some real-time collaboration among organizations. Feeding data from several different organizations' defenses (like firewalls and IDSes) into a massive correlation system would probably bring the same benefits that the current card fraud detection mechanisms have been delivering for years.
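Just to make the idea concrete, here is a toy sketch (all names and data are hypothetical) of the kind of cross-organization correlation I mean: a source that trips sensors at several unrelated organizations gets flagged even if no single organization saw enough traffic to alert on it by itself.

```python
# Hypothetical sketch: cross-organization correlation of sensor events.
# Real feeds would come from many firewalls/IDSes; here they're inline data.
from collections import defaultdict

# (organization, source_ip) pairs reported by each participant's sensors
events = [
    ("bank_a", "203.0.113.9"), ("retailer_b", "203.0.113.9"),
    ("telco_c", "203.0.113.9"), ("bank_a", "198.51.100.7"),
]

def suspicious_sources(events, min_orgs: int = 3) -> set[str]:
    orgs_per_ip: defaultdict[str, set[str]] = defaultdict(set)
    for org, ip in events:
        orgs_per_ip[ip].add(org)
    # A source probing several independent organizations is worth flagging
    # even when each one, in isolation, saw only a trickle of traffic.
    return {ip for ip, orgs in orgs_per_ip.items() if len(orgs) >= min_orgs}

print(suspicious_sources(events))  # {'203.0.113.9'}
```

That is essentially what the card fraud systems do: no single merchant sees the pattern, but the network does.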
Wednesday, January 21, 2009
from the other side
I usually rant here about the usage of statistics, risk metrics and other quantitative approaches (such as ROI) to support security decisions. Well, there is a small but very smart comment from Lindstrom regarding some of "our" arguments against those methods. I completely agree with him. That's why this blog is named "Security Balance": it's my statement that we need to pursue the balance between different approaches (security/productivity, quantitative/qualitative, network/endpoint, prevention/detection, awareness/enforcement) to achieve the best possible results. Usually my criticism of a specific subject is related to excessive confidence in its importance or effectiveness, and it should not be taken as a suggestion to completely drop it in favor of the other side. Balance is the key to better security.
Tuesday, January 20, 2009
Deperimeterization without endpoint control?
Do you know what that is? That's a complete disaster!

I got the tip for this very interesting Burton Group discussion from Anton Chuvakin's post (who also has an overflowing "2blog" queue :-).

There is a way to summarize that discussion. The key issue in deperimeterization is control over the endpoint. If you are pushing the defenses to the endpoint, you'd better control it. So, if you are allowing endpoints that you don't control to access your data, it's not your data anymore.

Think for a moment: how would a data-centric security approach work? It would be something like agents that run on every endpoint, or that travel together with the data, encapsulating it. Either way, it will run on the endpoint. If the user controls ring-0 on the endpoint by having admin rights on the box, he will be able to modify or trick the security agent into doing things with the data that it is not supposed to do. Now, quick answer: how can you prevent users from having admin rights over their own devices? You can't!

Imagine that you have printed a very sensitive document on very, very bleeding-edge technology paper. It can't be copied by any photocopy machine, and it will destroy the data on it if someone tries to put it through one of those machines. If you allow someone to take that paper anywhere you can't see them, they will copy it like the 12th-century monks used to do! So, what can be done to avoid that? First, the user can NEVER control the device. How can you prevent that if he owns it? Well, I don't like it, but the only alternative is something like very broad adoption of the TPM. However, I doubt that those devices will become popular, and if that happens, so will the ways to hack them.

The other alternative is not that cool, but I believe it's closer to reality. Things will stay pretty much as they are today. I mean, we'll still have to put restrictions on which devices can be used, we'll still have to keep some control over the physical and network environments, and we'll still have to deal with ACCESS CONTROL. That's not as sexy as virtualization, deperimeterization or any other -ation, but it's the root of information security. We'll still have to choose carefully who can access the information and under which circumstances.

Did you really think that, with all these new variables, security would be that simple? :-)
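To illustrate the access control point, here is a minimal, hypothetical sketch of the decision a data owner has to make before releasing data: check not only who the user is, but whether the endpoint is actually under the organization's control. All names are made up; real inventories would come from identity and asset management systems.

```python
# Hypothetical sketch: release data only to users AND endpoints we trust.
from dataclasses import dataclass

@dataclass
class Endpoint:
    device_id: str
    corporate_managed: bool   # enrolled in our management tooling?
    user_has_admin: bool      # can the user tamper with a local agent?

def may_release(user: str, authorized_users: set[str], endpoint: Endpoint) -> bool:
    """Deny unless the user is authorized AND the endpoint is under our control."""
    if user not in authorized_users:
        return False
    # An agent on a box where the user is admin can be modified or tricked,
    # so an unmanaged or user-administered endpoint never receives the data.
    return endpoint.corporate_managed and not endpoint.user_has_admin

authorized = {"alice"}
laptop = Endpoint("lt-042", corporate_managed=True, user_has_admin=True)
print(may_release("alice", authorized, laptop))  # False: alice is admin on the box
```

Note that the second condition is exactly the part deperimeterization tries to wish away: if the user owns ring-0, no check running on that box can be trusted.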
Thursday, January 15, 2009
Distributed malware identification
The info about Senthil Cheetancheri's proposal on fighting zero-day attacks with peer-to-peer software that shares information about anomalous behavior is spread across a lot of security blogs and portals today. It is not that innovative, but it's certainly something nice to think about.

I would go a little further and propose something a little different. We could build a distributed system like SETI-at-home (I've just discovered a very irritating wordpress behaviour when using the "at" symbol!) not to fight, but to identify malware. Today there are websites where people post information about executables found on their computers and others vote on whether that piece of software is malicious or not. Mixing information gathered automatically by an agent with votes from people, it would be possible to use the agent not only as a very wide information collection network but also as an antivirus. Additional stuff like centrally managed whitelists (to prevent people from exploiting the system to get Windows DLLs flagged as malicious, for example) and behaviour analysis could make it a very effective defense.

That's a very nice case for an open source project!
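A rough sketch of how that voting scheme could look (everything here is hypothetical and simplified): agents report hashes of local executables, a central service tallies community votes, and the managed whitelist overrides any vote count.

```python
# Hypothetical sketch of the voting scheme described above.
import hashlib
from collections import Counter
from pathlib import Path

WHITELIST = {"<sha256 of a known-good Windows DLL>"}  # centrally managed
votes: dict[str, Counter] = {}  # sha256 -> Counter({"malicious": n, "benign": m})

def hash_file(path: Path) -> str:
    """What the agent would report for each executable it finds."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_vote(sha256: str, verdict: str) -> None:
    votes.setdefault(sha256, Counter())[verdict] += 1

def classify(sha256: str, min_votes: int = 10) -> str:
    if sha256 in WHITELIST:
        return "benign"            # whitelist wins, even against many votes
    tally = votes.get(sha256, Counter())
    if sum(tally.values()) < min_votes:
        return "unknown"           # not enough community data yet
    return tally.most_common(1)[0][0]

for _ in range(12):
    record_vote("abc123", "malicious")
print(classify("abc123"))  # 'malicious'
```

The `min_votes` threshold and the whitelist are the two knobs that keep a handful of pranksters from getting system files marked as malware.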
Wednesday, January 14, 2009
Is it time for rewriting SMB stuff?
Since the beginning of Microsoft's security efforts there have been lots of reports of chunks of code being rewritten from scratch to address old and recurring problems. So why do we still have to deal with vulnerabilities related to SMB (MS09-001, MS08-063, MS06-063), when everybody knows that the components that handle it are present and enabled on almost all Windows boxes? We have another vulnerability that impacts Windows 2000, XP, 2003, 2008 (Server Core included) and Vista. Does anybody know if the Windows 7 beta is also vulnerable?

Isn't it time to rewrite the Server and Workstation services?
Tuesday, January 13, 2009
Pareto is killing security
It has become the rule in security programs to implement security solutions and processes following the 80/20 "Pareto principle". That's pretty acceptable, except for the fact that people immediately forget the remaining 20% and keep in their heads that the risk is completely mitigated. You start to see those cases piling up, absurd "no risk" situations being used as premises for business decisions, and then, suddenly, everything collapses in Wall Street black swan style.

A very good tool for detecting "Pareto stacks" is the famous penetration test. Not those where a junior guy runs a nice vulnerability scanner, but those where a very smart guy looks at your network as someone who wants to steal the golden eggs and starts to move things around in order to find and get them. I'm not saying that vulnerability scanning is crap, but penetration testing is not vulnerability scanning, much less "vulnerability scanning with confirmation of exploitation possibility" (yeah, I've heard that before). That worthless kind of pentest is dying, for sure, but long live the real pentest!

If you want to understand what a good penetration test is, try reading "Stealing the Network: How to Own the Box". It's a very fun read and also shows what a pentest should look like.

It's easy to find servers with unpatched vulnerabilities; let the automated services do that. Real hacks, however, don't always happen by someone exploiting those vulnerabilities. You need to find things related to the way your organization works, things that were done outside the regular process, that 20% that everybody wishes never existed. See some weird things that you can find with this approach here and here.

Again, if going 80/20 is that bad, what should we do instead? Look for solutions that are pervasive: those that work whether or not people follow the rules, platform independent, "business proof". Those security solutions are the ones you should put at the top of your priorities. There are not many things that can be done that way, but they will certainly bring you more results than all those 80/20 measures.
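A back-of-the-envelope illustration of how those forgotten 20% slices stack up, assuming (purely for illustration) that each control's residual gap is independent:

```python
# Illustrative only: if each of n controls is implemented "80/20 style" and
# independently leaves a 20% gap, the chance that at least one gap remains
# open grows quickly with n.
def chance_some_gap_open(n_controls: int, residual: float = 0.20) -> float:
    return 1 - (1 - residual) ** n_controls

for n in (1, 5, 10, 20):
    print(n, f"{chance_some_gap_open(n):.0%}")
# 1 -> 20%, 5 -> 67%, 10 -> 89%, 20 -> 99%
```

With twenty stacked 80/20 controls, an attacker who goes looking is almost guaranteed to find at least one open gap, which is exactly what a real pentester does.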
Thursday, January 8, 2009
Risk management and kids
I was relieved to read this post from Stuart King today and see that I'm not the only one worried about the way parents are behaving to protect their kids.

He mentions the problem of allowing kids to walk alone to school, using some good risk management concepts to illustrate how irrationally people can behave when trying to protect their children. I lived my childhood in a place that even at that time was quite a bit more dangerous than most North American cities today, but even there I was allowed to go to school alone from the age of nine. My wife is one of those who tend to be over-cautious about kids, so I'm glad that we came to Canada to have our kids here. It would be hard to discuss this kind of subject with her in the middle of Sao Paulo's security paranoia.

Security perception is something interesting to watch. It's impressive to see the differences between how Canadians think and behave in terms of security (crime-related, not infosec) and my own perceptions. I can clearly see that they worry about things that would never bother me, and that on the streets I'm usually much more aware than they are of what is happening and what the people around me are doing. As I was talking to a Canadian friend, some things that I would consider common (like armored cars to avoid gunpoint robbery) are seen as extreme by him. I can easily see similar situations in information security. That's why it's very important for security professionals to be aware of the business and its environment. A CSO switching from one organization to another needs to understand the differences, not only internally (controls in place, organizational culture, general employee security awareness, etc.) but also in the threat landscape. Sometimes we meet a guy who is putting a lot of effort into a threat that is not really causing high risk, only to find later that it was a huge problem at the organization he used to work for.

So, adding to Stuart King's advice on avoiding being fooled by risk perception, try also to stay aware of threat differences from one place to another. You might be fighting the right battle in the wrong war.