Friday, November 26, 2010

How to measure the success of your security strategy?

The problem with metrics and measurements of security efficiency is that measurement is often done from the control perspective rather than on actual results (I like the way Bejtlich sees it). So there is no way to answer two important questions required to determine the success of a security strategy:

  •  Are the necessary controls in place?

  • Are those controls effective?
It is important to note that even when the answer is “no” to both questions, an organization can still present a clean incident history. That may happen because the controls in place don't provide the information necessary to identify breaches (the organization doesn't know it has been victimized) or simply because nothing has happened yet. That’s quite common for fire incidents: no fire history does not mean that a building without fire extinguishing systems is secure against fire. In order to properly measure security, the assessment must be done in two steps:

  •  Identify the threat level for the organization

  • Test the security posture in the same way that the identified threats would materialize
The threat part is the easiest. Currently available data about breaches, such as the Verizon Business Data Breach Investigations Report, can point to the most common breach types, which can be translated into threat models for each organization profile. The ideal for this assessment is to mix generalized information (threats common to every organization at similar levels, such as commodity malware) with data specific to the target organization. The main threats for the financial industry are different from those for utility companies, for example. Having identified the main threats, the tests to be performed can then be picked from a standardized list.

What’s the difference between these tests and those currently performed for PCI-DSS, SAS 70, ISO 27001 and other assessments? Most of those standards are control oriented: the tests verify whether a specific set of controls is in place and working properly. However, they are not always effective in identifying whether the controls in place are relevant to the threats facing the organization, or whether the effectiveness of those controls really reduces the likelihood of those threats materializing. A good example is antivirus deployment. You may be able to show 99% coverage of AV installed and updated on the organization’s workstations, but that doesn’t say much about the organization’s ability to prevent impact from malware attacks.

I’ll use this example to provide a better understanding of my suggested approach. A payment processor goes through breach reports and identifies that one of the biggest threats to organizations in its field is card data being stolen by malware. There are several ways of testing the organization’s ability to defend against that threat, such as:
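The threat mix described above (generic breach frequencies adjusted by organization profile) can be sketched as a simple prioritization step. All frequencies and weights below are invented for illustration; a real assessment would pull them from breach reports and the organization's own data.

```python
# Hypothetical sketch: rank threats for an organization by combining generic
# breach frequencies with industry-specific weights. Numbers are invented.

GENERIC_THREAT_FREQUENCY = {      # share of breaches across all industries
    "commodity_malware": 0.30,
    "stolen_credentials": 0.25,
    "web_app_attack": 0.20,
    "insider_misuse": 0.15,
    "physical_theft": 0.10,
}

INDUSTRY_WEIGHT = {               # relative relevance per organization profile
    "payment_processor": {"commodity_malware": 2.0, "web_app_attack": 1.5},
}

def rank_threats(industry: str) -> list[tuple[str, float]]:
    """Return threats sorted by weighted relevance for the given industry."""
    weights = INDUSTRY_WEIGHT.get(industry, {})
    scored = {
        threat: freq * weights.get(threat, 1.0)
        for threat, freq in GENERIC_THREAT_FREQUENCY.items()
    }
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

# For the payment processor profile, malware-driven data theft ranks first:
top_threat = rank_threats("payment_processor")[0][0]
print(top_threat)
```

The ranked list then drives which tests get picked from the standardized list, rather than which controls get installed.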

  •  Remove one of the corporate desktops from the network and try to execute common malware found on the Internet on that machine

  • Execute a Proof of Concept customized malware on a corporate desktop

  • Execute a PoC customized malware on a corporate desktop that tries to send out a file containing sample card numbers to the Internet
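The last test above needs a file of realistic-looking but fake card numbers, so the exfiltration attempt exercises any data-loss controls without exposing real accounts. A minimal sketch of generating Luhn-valid sample numbers (the prefix and length here are illustrative, not tied to any real issuer):

```python
import random

def luhn_check_digit(partial: str) -> str:
    """Compute the Luhn check digit to append to a partial card number."""
    total = 0
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:  # these positions get doubled once the check digit is appended
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def sample_card_number(prefix: str = "400000", length: int = 16) -> str:
    """Generate a Luhn-valid test card number with the given prefix and length."""
    body = prefix + "".join(
        str(random.randint(0, 9)) for _ in range(length - len(prefix) - 1)
    )
    return body + luhn_check_digit(body)

print(sample_card_number())
```

Numbers like these will trip pattern-matching DLP rules (which typically validate the Luhn checksum) while remaining harmless if they leak during the test.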
You can see from these tests that threat resistance can be tested at different levels. A series of tests against the same threat can be designed with different levels of assurance, and the organization can choose which to use according to the importance of that threat to its profile and the impact and cost of the testing procedure itself.

An interesting approach for this kind of assessment would be the development of a common database of tests, each linked to the threats being replicated and the level of assurance it can provide. With that database in hand, an organization can build a test set according to its needs and verify whether its security strategy (and posture) works properly. Going a step further, security standards could be written to require specific sets of tests or minimum assurance levels for each test type. Organizations wouldn’t be required to implement specific controls anymore, but to resist a series of tests that replicate the most important threats to that kind of organization. No more checklist-based security. It would be something similar to vehicle crash tests, or those fire resistance ratings for cabling or safes: "resists fire for up to 30 minutes".

The vulnerability scanning requirements from PCI-DSS already provide some level of testing similar to what I’m describing. Things like “from different points of the internal network, scan for common services and try to authenticate with default/blank passwords” or “from different points of the internal network, scan for the target data in open shares” would also be tests to perform during an assessment.
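The common test database could be as simple as a catalog of tests keyed by threat and assurance level, from which an organization selects according to its threat ranking. A minimal sketch (all test names, threat labels, and assurance values are invented placeholders):

```python
# Hypothetical sketch of the "common database of tests" idea: each test is
# linked to the threat it replicates and the assurance level it provides.

from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityTest:
    name: str
    threat: str
    assurance: int  # 1 = weak evidence, 3 = strong evidence

TEST_DATABASE = [
    SecurityTest("run common malware on isolated desktop", "malware_data_theft", 1),
    SecurityTest("run custom PoC malware on corporate desktop", "malware_data_theft", 2),
    SecurityTest("PoC malware exfiltrates sample card numbers", "malware_data_theft", 3),
    SecurityTest("scan internal network for default/blank passwords", "weak_credentials", 2),
    SecurityTest("search open shares for target data patterns", "data_exposure", 2),
]

def build_test_set(required: dict[str, int]) -> list[SecurityTest]:
    """Pick tests matching the organization's threats, at or above the
    minimum assurance level it requires for each threat."""
    return [
        t for t in TEST_DATABASE
        if t.threat in required and t.assurance >= required[t.threat]
    ]

# A payment processor wanting strong assurance against malware data theft:
selected = build_test_set({"malware_data_theft": 3, "weak_credentials": 2})
for t in selected:
    print(t.name)
```

A standard could then say "pass assurance level 3 for malware data theft" instead of "install antivirus", leaving the choice of controls to the organization.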
The assurance level could change by running more targeted tests without scanning procedures (providing the list of critical servers to the tester, for example), by making them more frequent (as is usually done with vulnerability scans) or running them at irregular intervals, and even by leveraging internal knowledge of the most important systems, common passwords and keywords within the organization, and so on.

The tests should also be used to validate the effectiveness of monitoring and response processes. Can you see how different this is from a checklist item saying "are the logs being reviewed?"? This is real security: constant testing, results driven, not controls driven. The tests performed should be constantly reviewed to reflect changes in the threat landscape and even what is happening within the organization (for example, more tests targeting internal access control weaknesses during major layoffs).

An interesting aspect of evolving the security posture based on the results of those tests is that the control set doesn't need to follow standard frameworks or best practices. In my experience, the ability to apply controls is heavily influenced by the maturity of the organization in other IT aspects. Every security professional knows that building security into the SDLC is the best way to approach application security, but for an organization facing challenges such as low development process maturity and independent development groups, it may be easier to tackle application-related threats with application firewalls, for example. The same goes for malware, where adding more "anti-malware" technologies can be replaced by different approaches such as thin clients, less targeted operating system platforms, or a whitelist-based approach to software execution.
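The monitoring-validation idea can be sketched as a harness that injects a synthetic marker event and then checks whether an alert was actually raised, rather than just asking whether logs exist. The log format and detection rule below are invented stand-ins for whatever the organization's real pipeline uses:

```python
# Hypothetical sketch: validate that monitoring actually catches a test event,
# instead of checking the box "are the logs being reviewed?".

import re

def detect_suspicious(log_lines: list[str]) -> list[str]:
    """Stand-in detection rule: flag authentication failures for admin accounts."""
    pattern = re.compile(r"auth_failure .*user=admin")
    return [line for line in log_lines if pattern.search(line)]

def run_monitoring_test(inject, collect_alerts) -> bool:
    """Inject a synthetic marker event, then verify an alert referencing it
    comes back. A pass means monitoring saw the event, not just logged it."""
    marker = "auth_failure src=10.0.0.99 user=admin test_marker=XYZ123"
    inject(marker)
    return any("XYZ123" in alert for alert in collect_alerts())

# Simulated environment: an in-memory log plus the detection rule above.
logs: list[str] = []
result = run_monitoring_test(
    inject=logs.append,
    collect_alerts=lambda: detect_suspicious(logs),
)
print(result)
```

The same harness shape works against a real SIEM: inject via a test host, collect via the alerting API, and fail the assessment if the marker never surfaces.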
Each of these approaches (thin clients, alternative platforms, application whitelisting) will appeal to different organizations depending on their approaches and maturity levels for desktop/client computing, software distribution and even IT consumerization.

In order to apply this different model for security strategy I see two major challenges, but I believe they are easier to handle than those we face today with the current controls-driven approach. One is related to the security testing itself. Creating a common set of tests that are results driven and that map to specific, real threats may not be as easy as I'm making it sound. There is a risk we'll end up with watered-down tests, created by those generally incompetent but C-level-influencing big consulting companies, that would not be very different from the current checklist-based controls testing. The other challenge is the ability of security professionals to identify the appropriate measures to tackle the identified threats. There is a huge reliance today on (also watered-down) "best practices" disguised as control frameworks, with a lot of lazy guys thinking that security is achieved just by implementing this or that bunch of controls. They will do only that and nothing more. They put in place controls that are not the best for their specific circumstances, and even controls that are not necessary at all, without thinking for a minute about anything that is not part of that standard list (of 12 high-level requirements, anyone? :-)).

Does it make sense to follow this path? Yes, if these two challenges are easier to solve than those we currently face. If not, it's time to find another alternative. After all, I refuse to believe we have already found the most efficient way to do security.

Thursday, November 4, 2010

Crazy ideas to think about: Defense vs. Security

We love to use analogies to discuss and illustrate information security concepts. We often see people referring to Sun Tzu's Art of War, mentioning army combat strategies and using military terms. Well, have you ever considered that information security mixes concepts from two different things: defense (like the army protecting the borders and interests of a country) and internal security (law enforcement entities, such as the police)? Anyone who works for one of those entities knows that they apply different methodologies, techniques, concepts and tools. So shouldn't we be applying this separation in information security too?

Here's the idea to consider: is it worthwhile (valuable? efficient?) to organize your information security strategy into two different components, Defense and Internal Security? Defense focusing on external threats, Internal Security on compliance, policy enforcement and access control? Let me know what you think...