Tuesday, January 30, 2018

From my Gartner Blog - The “working with an MSSP” Tome Is Here

As Anton just posted, the new version of the famous “How to Work With an MSSP to Improve Security” has just been published. I’m very happy to become a co-author (together with Anton and Mike Wonham) on this document, as it is one of the documents I most frequently refer clients to during inquiry calls. After all, it’s very common to start a call about SIEM, UEBA, NTA or EDR and end it talking about MSS, after the client realizes that using those tools requires people – a lot of people – on their side.

Among lots of exciting new content (this is indeed a looooong document :-)), there is a new guidance framework for those looking for (and eventually hiring) an MSSP:

[Figure: MSSP guidance framework]

You’ll notice that we added “joint content development” as part of the Operating phase. This is something we also added to the recently updated Use Cases document. After all, there’s no reason to believe the MSSP knows everything you want them to detect for you; so, how do you tell them that? If you hired an MSSP, do you know if you still have people on your side capable of working with them to develop content?

There is also an important reminder for organizations expecting to have the entire security monitoring process managed by the service provider:

“When customers perform triage, they will often find cases of false positives. Many organizations don’t report these back to the MSSP, only to complain later that they keep receiving the same false-positive alerts repeatedly! Although the MSSP is responsible for tuning the detection systems it manages, such tuning typically requires feedback from the customer. This feedback goes beyond a simple statement like, “This alert is a false positive.” Adding context about why the alert is a false positive will allow the MSSP to perform the appropriate tuning. It will also avoid cases where entire classes of alerts are disabled due to uncertainty around what type of activity is causing them.”

I’ve had countless conversations with organizations complaining about the false positives sent by the MSSP. But it’s impressive how many of them are not prepared to report those events back to the provider in a way that would allow it to tune its systems and avoid similar occurrences in the future. This is a recurrent theme in this document: You MUST WORK WITH THE MSSP, not expect them to figure everything out alone.
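To make that feedback actionable, it helps to send it in a structured form rather than as a one-line complaint. Below is a minimal sketch of what such a false-positive report could look like; the field names, the alert identifier and the values are my own illustration, not any MSSP’s actual API or anything prescribed by the document.

```python
# A minimal sketch of structured false-positive feedback to an MSSP.
# Field names and values are illustrative assumptions, not a real schema.
import json
from datetime import datetime, timezone

feedback = {
    "alert_id": "ALRT-20180130-0042",   # hypothetical alert identifier
    "rule": "Multiple failed logons followed by success",
    "verdict": "false_positive",
    # The context that actually enables tuning:
    "reason": "Source host is our vulnerability scanner running its weekly credentialed scan",
    "affected_assets": ["10.10.5.17"],
    "suggested_tuning": "Exclude the scanner subnet 10.10.5.0/24 from this rule only",
    "reported_at": datetime.now(timezone.utc).isoformat(),
}

# In practice this would go through the MSSP's portal or ticketing interface;
# here we just serialize it to show the shape of the information.
print(json.dumps(feedback, indent=2))
```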

We have this and far more in the doc; please read it and don’t forget to provide feedback: http://surveys.gartner.com/s/gtppaperfeedback

 

 

The post The “working with an MSSP” Tome Is Here appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2EmY5iN
via IFTTT

Wednesday, January 17, 2018

From my Gartner Blog - Security Monitoring Use Cases, the UPDATE!

Posting about updated documents is often boring, but this time I’m talking about my favorite Gartner document, co-authored, as usual, with Anton: “How to Develop and Maintain Security Monitoring Use Cases”!

This document describes an approach to identify, prioritize, implement and manage security monitoring use cases. Of course, it has a lot on SIEM, as it’s usually the tool of choice for implementing those use cases, but we revised it to ensure we also cover technologies such as UEBA, EDR and even SOAR. If we consider that detection can often be implemented as a multi-stage process, that’s a natural evolution!

The major changes are:

  • Revamping the main graphic of the document to better illustrate how the process works (below)
  • Putting more emphasis on some of the artifacts generated by the process, such as use case lists (see the sketch after this list)
  • Evolving the language about doing use case development as software development to say “doing it as AGILE software development”
  • Reinforcing the types of use cases that are usually managed by this process: threat-, control- and asset-oriented
  • Including tips for use case management when working with an MSSP (we are writing more about this in our upcoming MSSP doc, BTW)
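To illustrate the kind of artifact mentioned above, here is a minimal sketch of what an entry in a use case list might look like, covering the threat/control/asset orientation, a multi-stage implementation across different tools and an agile-style lifecycle. The schema, field names and example values are my own illustration, not a structure the document prescribes.

```python
# A minimal sketch of a "use case list" entry; schema and values are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    name: str
    orientation: str                                   # "threat", "control" or "asset"
    priority: int                                      # 1 = highest
    stages: List[str] = field(default_factory=list)    # multi-stage implementation
    owner: str = "internal"                            # "internal" or "mssp" (joint content development)
    status: str = "backlog"                            # backlog, in_dev, tuned, retired

use_case_list = [
    UseCase(
        name="Ransomware lateral movement over SMB",
        orientation="threat",
        priority=1,
        stages=["EDR detection", "SIEM correlation", "SOAR containment playbook"],
        owner="mssp",
        status="in_dev",
    ),
    UseCase(
        name="Privileged access to crown-jewel database",
        orientation="asset",
        priority=2,
        stages=["UEBA anomaly model", "SIEM alerting"],
    ),
]

# Print the list ordered by priority, as a simple tracking view.
for uc in sorted(use_case_list, key=lambda u: u.priority):
    print(f"P{uc.priority} | {uc.orientation:<7} | {uc.status:<8} | {uc.name}")
```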

The summary diagram for the framework can be seen below:

[Figure: summary diagram of the use case management framework]

Again, we are always looking for feedback on our research. If you have anything to say about this document, please use this page to do it.

The post Security Monitoring Use Cases, the UPDATE! appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2mOTUoo
via IFTTT

Friday, January 12, 2018

From my Gartner Blog - Automation – Why Only Now?

As we ramp up our research on SOAR and start looking at some interesting tools for automated security testing, something crossed my mind: Why are we only seeing security operations automation and security testing automation technologies come to market now? I mean, automating workflows is not new technology, so why are these specific workflows only being automated now?

I believe the answer includes multiple reasons, but I see two as key:

The need: Of course it would have been great to automate SOC tasks back in 2005, but at that time environments were more stable and the volume of threat activity was lower. Because virtualization was not yet everywhere, the number of running systems was also smaller. The slower pace of change, the smaller technology environments and a less aggressive threat landscape were still compatible with mostly manual security operations. With cloud, DevOps, crazy state-sponsored threats and very short breach-to-impact scenarios like ransomware, it is imperative for organizations to be able to adapt and react faster. At the required scale, that’s only possible with more automation.

The tools: Yes, the ability to write an automated workflow was already there, but integration was still painful! Only some of the security (or even general IT) tools exposed APIs, and most of the time they were not standardized and not as simple as the abundant REST APIs we see today. In the past, if you wanted to fully automate a SOC playbook you would probably need to include all required capabilities in a single tool, without the option to orchestrate the work of multiple independent solutions. So, it is not that automation tools were not available; the broad menu of tools that could be centrally orchestrated didn’t exist.
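As an illustration of that point, here is a minimal sketch of the kind of glue code that becomes feasible once every tool exposes a REST API. The hostnames, endpoints and payloads below are hypothetical placeholders, not any real product’s interface.

```python
# A minimal sketch of cross-tool orchestration over (assumed) REST APIs.
import requests

SIEM_API = "https://siem.example.internal/api/v1"      # hypothetical endpoints
TI_API = "https://ti.example.internal/api/v1"
FIREWALL_API = "https://fw.example.internal/api/v1"

def triage_and_contain(alert_id: str) -> None:
    # Pull the alert details from the SIEM.
    alert = requests.get(f"{SIEM_API}/alerts/{alert_id}", timeout=10).json()
    src_ip = alert["source_ip"]

    # Enrich the source IP with threat intelligence.
    reputation = requests.get(f"{TI_API}/ip/{src_ip}", timeout=10).json()

    # If the indicator is known-bad, ask the firewall to block it.
    if reputation.get("score", 0) >= 80:
        requests.post(
            f"{FIREWALL_API}/blocklist",
            json={"ip": src_ip, "reason": f"auto-containment for alert {alert_id}"},
            timeout=10,
        )

if __name__ == "__main__":
    triage_and_contain("ALRT-0001")
```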

 

The increase in need is independent of how the security industry evolves, but I see the second reason in a very positive way. We are constantly bashing the vendor community for releasing new tools based on exaggerated marketing claims, but we should also acknowledge this movement toward making tools friendlier to integration as a positive evolution of the industry. There have been many standards and attempts to create common languages and protocols to integrate tools, but simply opening them up via REST APIs has apparently provided far more benefit than initiatives like IF-MAP, DXL, CIF, IODEF and IDMEF.

What else do you think is driving this automation trend in security?

 

The post Automation – Why Only Now? appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2mwWa3o
via IFTTT

Tuesday, January 9, 2018

From my Gartner Blog - Threat Simulation – How real does it have to be?

We are starting our research on “Testing Security”. So far we’ve been working with a fairly broad scope, as Anton’s post on the topic explained. One of the things we are looking at is the group of tools that has been called “breach and attack simulation tools”.

Tools that automate exploitation have been around for years; we can mention things like Metasploit, Core Impact, CANVAS and others. Those are tools used by pentesters so they don’t need to rewrite their exploits for each specific condition they find during their test. So what’s different in the new wave of tools?

The idea of using these tools is to have a consistent way to continuously test your controls, from prevention to detection (and even response). They are not focused on making exploitation easier, but on running an entire intrusion scenario, end to end, to check how the controls in the attacked environment react to each step. They go beyond exploitation and automate the many steps in the attack chain, including command and control, lateral movement, resource access and exfiltration. They also add a layer of reporting and visualization that allows users to see how each attack is performed and what the tool was (or was not) able to accomplish.
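Conceptually, these tools walk an attack chain step by step and record, for each step, whether it was prevented and whether it was detected. Below is a minimal sketch of that idea; the step names and the lambda “checks” are placeholders and do not reflect how any specific breach and attack simulation product works.

```python
# A minimal sketch of running an attack chain and reporting per-step outcomes.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    execute: Callable[[], bool]    # True if the simulated action succeeded
    detected: Callable[[], bool]   # True if a monitoring control raised an alert

def run_chain(steps: List[Step]) -> List[dict]:
    report = []
    for step in steps:
        succeeded = step.execute()
        report.append({
            "step": step.name,
            "prevented": not succeeded,
            "detected": step.detected() if succeeded else None,
        })
        if not succeeded:
            break  # the chain stops where prevention worked
    return report

# Placeholder chain; a real tool would drive C2, lateral movement,
# resource access and exfiltration against the actual environment.
chain = [
    Step("initial access (phishing payload)", lambda: True, lambda: False),
    Step("command and control beacon", lambda: True, lambda: True),
    Step("lateral movement", lambda: False, lambda: False),
]

for row in run_chain(chain):
    print(row)
```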

We are just starting to talk to some of the vendors in this space, but I noticed there’s one point they seem to argue about: how real should these tests be? Some of them noticed there is strong resistance from many organizations to running automated exploitation tools in their production environments, so they built their tools to only simulate the more aggressive steps of the attacks. Some of these tools even take the approach of “assuming compromise”, bypassing the exploitation phase and focusing on the later stages of the attack chain.

It is an interesting strategy to provide something acceptable to the more conservative organizations, but there are some limitations that come with that approach. In fact, I see two:

First, many prevention and detection technologies are focused on the exploitation actions. If there are no exploitation actions, they just won’t be tested. So, if the tool replicates a “lateral movement scenario” on an endpoint using an arbitrary fake credential that mimics the outcome of successfully running mimikatz, no tool or approach that looks for signs of that attack technique being used (like this), or prevents it, will be tested. If the organization uses deception breadcrumbs, for example, they wouldn’t be touched, so there’s no way to check whether they would actually be extracted and used during an attack. The same goes for monitoring signs of the exploit, or even preventing it from working with exploit mitigation technologies. So the testing scenarios would be, in a certain way, incomplete.

Second, exploitation is not necessarily something that happens only at the beginning of the attack chain. It is often used as one of the first steps to get code running in the target environment, but many exploitation actions can come later in the chain, for privilege escalation, lateral movement and resource access. So, assuming that exploitation plays only a small role at the beginning of the attack chain is a very risky approach when deciding what needs to be tested across the entire control set.

Looking at these two points in isolation suggests that breach and attack simulation tools should perform real exploitation to properly test the existing controls. But apart from the concern of disrupting production systems, there are other challenges with incorporating exploits into the automated tests. The vendor, or the organization using the system, now needs the ability to incorporate new exploits as new vulnerabilities come up and to check whether each of them is safe or could damage the ability of the real systems to protect themselves after the test is completed (some exploits disable security controls permanently, so using them during a test could actually reduce the security of the environment). The approach of avoiding exploitation eliminates those concerns.

If both approaches are valid, it is important for the organization to understand the limitations of the tests and what still needs to be tested manually or through alternative means (such as the good old checklist). This also brings up another question we should look at during this research: how do we integrate the findings of all these testing approaches into a single view of the state of the security controls? That’s something for another post.

The post Threat Simulation – How real does it have to be? appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2CZj4dv
via IFTTT