Friday, September 25, 2009

Am I being contradictory?

I was reading the post I just published when I noted that the post right before it complained about attempts to standardize diversity, the curse of "best practices". The funny thing is that in the last post I tried to make the case for a big standard, one that would probably end up doing exactly what I was complaining about in the previous post. Pretty contradictory, isn't it?

It is, and I'm trying to see how these two different approaches can co-exist. One option, and I can see how cool that could become, is to create that big standard as a framework that allows different implementations of the same process, all following specifications for inputs and outputs. That would create a big standard with "sub-standard plugins": suggested implementations for specific processes. Each of those plugins would consider information from the threat modeling components I mentioned before, so that you could choose the implementation of a process that is best aligned to your organization's profile, technology and characteristics.

That would avoid excessive standardization while still ensuring that the basic necessary processes are in place. Now the two posts are not that incompatible anymore, and I can go to sleep without that bugging me :-)
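Just to make the idea more concrete, here is a minimal sketch of what such a plugin framework could look like. Everything in it is hypothetical: the framework would standardize only the inputs and outputs of each process, while the implementations remain interchangeable.

```python
from abc import ABC, abstractmethod

class PatchManagementProcess(ABC):
    """Contract for any patch management implementation: the framework
    standardizes the inputs and outputs, not the process itself."""

    @abstractmethod
    def run(self, inventory):
        """Takes an asset inventory, returns vulnerability-window metrics."""

class MonthlyBatchPatching(PatchManagementProcess):
    """One possible plugin, maybe suited to change-averse organizations."""

    def run(self, inventory):
        pending = [a for a in inventory if a["patch_age_days"] > 30]
        return {"assets": len(inventory), "out_of_window": len(pending)}

# The organization picks the plugin that fits its profile; the framework
# only cares that inputs and outputs follow the specification.
process = MonthlyBatchPatching()
print(process.run([{"host": "ws-01", "patch_age_days": 45}]))
```

An organization would then pick the plugin matching its profile, and an auditor could verify conformance at the interface level instead of prescribing the internals.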

Risk-less security

I was happy to find Anton Chuvakin's post about the issues of doing security based on risk management a few days ago. As I said on my Twitter, "discussions about decision making (risk based vs. others) is the only thing interesting for me today on the security field". Anton made a very good summary of why we should consider alternatives to risk management and of who else is talking about it.

Honestly, I remember that when I first read that 2006 article from Donn Parker I was somewhat disappointed by his suggestion of doing things based on compliance. It was the old security sin: "checklist based security". All the recent discussions about PCI DSS are great sources of opinions and insights on the subject, and I'm seeing an overall perception from the security industry that it ended up being good for security. Is checklist based security working?

If PCI DSS is working, it's certainly not because of those approaching it with a checklist based mind. It is because it is a quite good prescriptive standard: it is clear about what organizations need to do. But it has limitations.

PCI DSS has a very clear goal, to protect card and cardholder data. The standard allows a quick and dirty approach for those who don't want to bother with all those requirements: reducing scope. Think about all the requirements about wireless networks. You have two choices, doing everything required by the standard or removing that network from the scope. With PCI, as long as you can prove that the cardholder data environment is protected, the rest can be hell; it doesn't matter, you are good to go. Is it wrong? Well, the standard has a clear goal and it makes sense to define the scope around it, but it is kind of naive in assuming that it's possible to isolate network environments inside the same organization, considering that the payment process (which uses card data) is usually very close to other core business processes. So, PCI DSS is a good standard, but it is limited for overall information security purposes.

With this in mind, one could say that creating a "generic PCI DSS" would be the solution for risk-less security. I think it is part of the solution, for sure. The problem is that the scope of that standard would be considerably bigger, so it would have to include some less prescriptive requirements. Is there a way of doing that without creating a new ISO27002? Don't get me wrong, I think ISO27002 is a great standard, but it is so open to interpretation that almost any beast can become a certified ISMS. Also, it has the risk management process at its base, which is exactly what we are trying to avoid. The new standard would have to include requirements to solve one of the biggest challenges in information security: prioritization.

Prioritization is the Achilles' heel of any attempt at doing security without risk management. After all, everybody knows that we cannot protect everything, and during the long implementation phases the biggest pains need to be addressed first. How can we do that without using that wizardry to "guess-timate" risks?

My take is that it should be done based on two sources of information: benchmarking and threat modeling. Threat models can be generated based on geographic aspects, organization and business profiles, and the technology in use. Threats for banks in the same context (same country, for example) are probably very similar. Organizations using the same basic software package on their workstations will share the same threats for that technology too.
We should also consider that a lot of the current threats organizations face are pervasive and ubiquitous; they affect almost any organization out there. Except for very few cases, malware issues are a common problem. Sure, the impact of malware issues will be different for each organization, but it seems to me that those characteristics will probably be the ones considered for many other threats too.

How would a "risk-less" organization work to define its security strategy and the controls to implement? Most important, how would it check its own security status? Is it OK? Should it spend more? What needs to be improved?

That's where the fun is. And no, I don't have those answers. But building the processes and tools to do that is definitely the coolest thing to do in this field.
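Since I mentioned processes and tools: here is a minimal sketch, with entirely made-up data, of how threat modeling attributes plus peer benchmarking could drive prioritization without any risk guess-timation. The catalog, the attributes and the prevalence numbers are all hypothetical.

```python
# Threats carry the attributes they apply to (country, industry,
# technology); an organization's profile selects and ranks them by
# how common they are among peers, with no probability/impact math.

THREAT_CATALOG = [
    {"name": "banking trojans", "industries": {"banking"}, "countries": {"BR"},
     "technologies": set(), "peer_prevalence": 0.9},
    {"name": "commodity malware", "industries": set(), "countries": set(),
     "technologies": {"windows-xp"}, "peer_prevalence": 0.8},
    {"name": "web app attacks", "industries": {"retail", "banking"},
     "countries": set(), "technologies": set(), "peer_prevalence": 0.6},
]

def threat_model(profile):
    """Return the threats matching an organization's profile, ordered
    by prevalence among peers (the benchmark). Empty sets mean "any"."""
    def applies(t):
        return ((not t["industries"] or profile["industry"] in t["industries"])
                and (not t["countries"] or profile["country"] in t["countries"])
                and (not t["technologies"]
                     or t["technologies"] & profile["technologies"]))
    matches = [t for t in THREAT_CATALOG if applies(t)]
    return sorted(matches, key=lambda t: t["peer_prevalence"], reverse=True)

org = {"industry": "banking", "country": "BR", "technologies": {"windows-xp"}}
for t in threat_model(org):
    print(f'{t["peer_prevalence"]:.0%} of peers see: {t["name"]}')
```

The ranking comes straight from what peers in the same context actually face, instead of from estimated probabilities and impacts.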

Wednesday, September 9, 2009

Standardizing diversity - does it work?

Probably not enough content for a post, but certainly for a tweet :-)

It's common to see in security standards, frameworks and best practices a lot of "standard" ways of doing things like access control and patch management. The problem is that organizations are extremely different from each other, not only in technology but also in processes and culture. It's pretty hard to suggest a standard process that will interact with so many different components and expect it to work (and perform) in the same way for all implementations.

We should try to avoid standardizing diversity and start selling the basic concepts behind each of those processes; usually, the expected outcome. For Access Control, we should state that the process should provide least privilege, segregation of duties and accountability. For Patch Management, reducing the vulnerability window and the "exploitability" of systems.

I'm tired of seeing people struggling to fit "best practice processes" to their organizations (and the other way around) instead of trying to achieve the desirable outcomes. That's a waste of resources and usually puts security directly against productivity.

When implementing a security process, think about the desired outcome first. You'll probably find several different ways to get the results; then just pick the one that is best aligned to your organization. Remember to document how the new process achieves that, as you probably will not find auditors with that open a mind out there. Let them call your process a "compensating control", as long as it works and does not make everybody nuts :-)
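To illustrate what "selling the outcome" could look like in practice, here is a minimal sketch that expresses two of the Access Control outcomes above as checks that any implementation, home-grown or "best practice", would have to pass. All the names and data are hypothetical.

```python
# Hypothetical outcome checks: the standard states the outcome,
# not the process that produces it.

def least_privilege(grants, needed):
    """No user holds permissions beyond what their role needs."""
    return all(set(perms) <= set(needed[user]) for user, perms in grants.items())

def segregation_of_duties(grants, conflicting=("request_payment", "approve_payment")):
    """No single user holds both sides of a conflicting pair of duties."""
    return all(not set(conflicting) <= set(perms) for perms in grants.values())

grants = {"alice": ["request_payment"], "bob": ["approve_payment", "reset_password"]}
needed = {"alice": ["request_payment"], "bob": ["approve_payment", "reset_password"]}

print("least privilege:", least_privilege(grants, needed))
print("segregation of duties:", segregation_of_duties(grants))
```

An auditor reviewing the checks, rather than the process steps, is exactly the "open mind" the post asks for.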

Tuesday, September 8, 2009

Flash updates and Firefox

New Firefox versions will warn you when your Flash plugin is out of date.

This is a cool idea and will help users who are not aware of the need to update software like Flash and Acrobat Reader. I can also see this as the beginning of a trend to centralize the updating of all the crap we run on the client side. Microsoft already has a very good update system for its software, and so do Mozilla, Apple and Google. By opening it to other software vendors through a public API, it could be used as a single source of updates. Adobe, instead of deploying its own update system, could simply publish its updates through the Windows update system. To avoid unauthorized updates, the user could be asked, the first time, whether to allow that organization to update its software through the system, with the identity being verified through digital certificates.

That would certainly help users to keep their software updated and reduce the number of agents checking all the time whether there are updates to be installed. Please guys, let's simplify this mess.
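Here's a minimal, fully hypothetical sketch of that flow: a vendor publishes a signed update manifest to the shared update channel, the user approves the vendor once, and later updates are verified against the approved certificate. The manifest format and the SHA-256 "signature" stand-in are my own assumptions, not any real update API; a real system would use X.509 certificates and proper signature verification.

```python
import hashlib
import json

TRUSTED_VENDORS = {}  # vendor name -> certificate fingerprint approved by the user

def fingerprint(cert_bytes):
    """Stand-in for a real certificate fingerprint."""
    return hashlib.sha256(cert_bytes).hexdigest()

def handle_update(manifest_bytes, signature, cert_bytes):
    manifest = json.loads(manifest_bytes)
    vendor = manifest["vendor"]

    # First use: ask the user once whether to trust this vendor.
    if vendor not in TRUSTED_VENDORS:
        answer = input(f"Allow {vendor} to publish updates through the system? [y/N] ")
        if answer.lower() != "y":
            return False
        TRUSTED_VENDORS[vendor] = fingerprint(cert_bytes)

    # Every later update must come from the same approved certificate.
    if fingerprint(cert_bytes) != TRUSTED_VENDORS[vendor]:
        return False

    # Stand-in for a real signature check over the manifest.
    return signature == hashlib.sha256(manifest_bytes).hexdigest()
```

One approval per vendor, one update agent for everything: that's the simplification the post is asking for.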

Thursday, September 3, 2009

New AppLocker from MS - Some improvements

I was reading this article about AppLocker, the application control system from Microsoft that runs on Windows Server 2008 R2 and Windows 7 clients. There seem to be some very good improvements there, especially the "automatic rule creation" part.

In short, an organization can build its "gold image" desktop, with all necessary apps, and run the automatic rule creator to identify all the applications that will be on the whitelist of things allowed to run on the desktop. If you are mature enough to have a really good "gold image", that shouldn't be very hard to do.

The issue that I can see is with patches and updates. However, the automatic rule creation can work with the Publisher information when the binaries are signed, making it easier to accept new versions of those files. I think I'll try that in a lab to see how effective it is.

Another interesting thing is that you can enable it in an "Audit only" mode. My personal view for whitelist based controls is to deploy them to generate logs only and to monitor them using a SIEM or similar system. That way the risk of disrupting the environment is reduced and exceptions can be managed on two levels (changing the whitelist, or ignoring specific alerts from the controls). It is one of the best ways to do security without breaking everything, while also getting more value from a SIEM deployment. Be aware, however, that the SIEM system alone will not perform any miracles; this concept only works when you have people and processes in place to deal with the generated alerts and to constantly tune the rules. That's the price to pay for more flexible security.
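A minimal sketch of the two-level exception handling I have in mind, using a made-up event format rather than real AppLocker audit logs: a SIEM rule either ignores a binary because it is whitelisted, suppresses it as known noise on a specific host, or raises an alert for a human to triage.

```python
# Hypothetical audit-mode events from a whitelist control; exceptions are
# handled either by changing the whitelist or by suppressing known noise.

WHITELIST = {"c:\\windows\\system32\\notepad.exe"}
SUPPRESSED = {("lab-07", "c:\\tools\\debugger.exe")}  # accepted noise, per host

def triage(event):
    """Decide what a SIEM rule should do with an audit-mode event."""
    path = event["path"].lower()
    if path in WHITELIST:
        return "ignore: allowed"
    if (event["host"], path) in SUPPRESSED:
        return "ignore: suppressed for this host"
    return "alert: unapproved binary executed"

events = [
    {"host": "ws-01", "path": "C:\\Windows\\System32\\notepad.exe"},
    {"host": "lab-07", "path": "C:\\tools\\debugger.exe"},
    {"host": "ws-02", "path": "C:\\Users\\bob\\dropper.exe"},
]
for e in events:
    print(e["host"], "->", triage(e))
```

The whole value is in the third branch: somebody still has to look at those alerts and keep both lists tuned, which is exactly the people-and-processes price mentioned above.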