Wednesday, September 28, 2011

Unrealistic Security Expectations - part 2

Unrealistic expectations are not only related to technology. In fact, I believe they are even more common in security policies and standards. Based on the unrealistic expectation that anything written in a policy will be blindly followed, we end up writing prescriptive documents describing everything an organization must do for security. Done! By putting words on paper, we solved our security problems!


The problem begins with the extremely unrealistic assumption that someone actually reads the security policy! Sometimes I try to understand how anyone can possibly believe that a hundred-page security policy will be read; most of the time, reading the policies is not only unnecessary for people to do their jobs, it's also something that will prevent them from working! It's plain economics: there's almost no incentive for them to read those documents. An assumption like that is pretty unrealistic, eh? So why do we keep being surprised when people don't comply with the policy?


Anyway, there are processes we can use to force people to comply with policies and standards. But no process or mandate will help if we keep writing policies that are impossible to comply with. Ok, that sounds obvious, right? Well, it should, but there are lots of security policies out there just like that.


Even if compliance is possible, there's another thing that will make a policy fail: exemptions. Every organization with a security policy has a process for granting exemptions. That's fine, until you realize so many exemptions are being granted that the policy is simply wishful thinking. You shouldn't expect a policy to act as a control if it's not being followed, yet many professionals do exactly that. The basic "enforcement rule" applies here: if you can't enforce a policy, or if it's easier to get an exemption than to comply, the policy doesn't meet its purpose.


Discussions about the effectiveness of policies and standards usually drift toward whether the bar is being set too high or too low. That's not always the real issue. Sometimes the problem is how prescriptive the policy is. Prescriptive policies can only be applied where the current conditions match the original expectations of whoever wrote the policy. Do you remember the older version of the antivirus requirement in PCI DSS? The requirement had originally been written with Windows environments in mind. It was funny to see mainframe shops puzzled about how to comply with it. Less prescriptive policies carry far fewer assumptions about the environments where they will be applied, reducing the need for exemptions.


However, it's not as easy as just writing non-prescriptive policies and standards. Write them too loosely and you won't be sure they will be interpreted the way they should be. Policies with generic requirements are often based on an unrealistic expectation of how they will be interpreted. Balance is the key here.


In the end, policies and standards are just that: guidelines and rules. They might not be followed. Have you ever thought about how your security will perform if people choose not to comply with your policies? Do it. You should build your defenses based on reality, not on unrealistic expectations.

Monday, September 19, 2011

Unrealistic Security Expectations - part 1

A frequent issue I have with some blog posts, articles and tweets from my security colleagues is how often they rely on unrealistic expectations. From the down-to-earth guy to the curmudgeon, it seems that our entire field suffers from a collective illusion that executives will be reasonable when deciding about risk postures, that people will willingly comply with security policies, or that architecture end states will one day be achieved. If we really want to improve security and produce sensible results, it's time for us to wake up to reality and deal with security without unrealistic expectations.

I won't write about the human component of these expectations, about risk-related decisions and user behavior. On that subject, at least, I believe we've been seeing some progress, with people realizing we cannot expect behaviors to change or people to become conscious about security. For those who still don't believe that, go google "candy bar password", just to mention one of the many studies showing how poor our decisions regarding security are. My main concern is the technology landscape within organizations and the assumptions security professionals make about it. I just can't help being surprised at how naïve my peers can be about what their networks will look like in the future.

It's easier to explain what I'm talking about with an example. Back in 2004 I was discussing with the Wintel support team of the company I worked for what should be done about our Windows NT 4 servers, since security patches would no longer be available after the end of that year. At one point in the discussion there was a general perception that the risk of having those servers in our network wouldn't be that high, as the plan was to eventually migrate everything to Windows 2000. When I left that company, a few years later, those servers were still around. Since then I've seen the same thing happen over and over again, in organizations of different sizes, countries and businesses.

IT changes are almost never implemented as "Big Bang" projects. There is always a phased approach, and Pareto is always at work: 80% of the bad stuff is removed fairly soon, while the rest stays around for a long time. An isolated situation like that wouldn't be an issue, but in medium and large organizations we can see dozens of cases of older, unsupported, often insecure technology, configurations and processes just refusing to go away. That's the nature of things, and I can't see it changing soon. The problem is how to build security into that reality. It's all too common to see great security ideas fail to deliver results because they depend on clean, stable environments. That was always the case for Identity Management projects ("oh, those identity repositories will be retired soon, don't worry about those"), Log Management ("the new version that we'll implement soon supports syslog-ng"), DLP and others. Security architects are designing solutions that depend on perfect scenarios, scenarios that will never become reality. That's how most security technology deployments fail.

Here's what we need to do differently: design your security solutions to work with REAL environments. Assume that things will fail and will not be as expected. Security solutions should be resilient to those environments, simply because that's what our networks look like. I don't like it; I would really love to have those perfect CMDBs available, all servers open to aggressive patching, all networks supporting 100% traffic capture for monitoring purposes. But that's just not true.

It's not just "design for failure". It's design around failure. Your network is a mess and it will always be like that; deal with it.

In the next part I'll expand on the unrealistic expectations for policies and standards. Meanwhile, let me know what unrealistic expectations you see in security and how you think we should deal with them!

Thursday, September 1, 2011

Dilbert - Alice could add a mention to ROSI too

After the kitten, Alice could also say the security project will bring a huge ROI :-)