Wednesday, February 14, 2018

From my Gartner Blog - BAS and Red Teams Will Kill The Pentest

With our research on testing security methods and Breach and Attack Simulation (BAS) tools, we ended up in an interesting discussion about the role of the pentest. I think we can risk saying that pentesting, as it is today, will cease to exist (I’ll avoid the trap of saying “pentesting is dead”, ok? :-)).

Let me clarify things here before everyone starts to scream! Simple pentesting, for pure vulnerability finding goals and with no intent to replicate threat behavior, will vanish. This is different from the pentest that many people will prefer to call “red team exercises”, those very high quality exercises where you really try to replicate the approach and methods of real threats. That approach is in fact growing, and that growth is one of the factors that will kill the vanilla pentest.

But to kill the pentest we need pressure from two sides. The red team is replacing the pentest on the high maturity side, but what about the low maturity side? Well, that’s where vulnerability assessments and BAS come into play.

If you look at how pentests are performed today, discounting the red team style of exercises, you’ll see that they are not very different from a good vulnerability assessment. Still, there is a difference: a pentest involves exploiting vulnerabilities, and that exploitation can move the assessor to another point in the network, which can then be used for another round of scanning and exploitation. And that’s where BAS tools come into play.

BAS automates the simple pentest, performing the basic cycle of scan/exploit/repeat-until-everything-is-owned. If you can do that with the simple click of a button, why would you use a human for it? The tool can ensure consistency, provide better reporting and do it faster. Not to mention requiring less skill (you don’t even need to know how to use Metasploit!). So, with BAS, you either go for human tests because you want a red team, or you use the tool for the simple style of testing.
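The core loop described above can be sketched in a few lines. Everything here (hosts, vulnerability names, the pivot map) is a hypothetical stand-in for what a real BAS tool does with live scanners and exploit modules:

```python
# Hypothetical sketch of the BAS core loop: scan, exploit, pivot, repeat.
# All functions and data are illustrative stand-ins, not a real tool's API.

def scan(host):
    """Pretend scanner: returns known-exploitable issues per host."""
    topology = {
        "dmz-web": ["cve-2017-5638"],   # exploitable, pivots to app tier
        "app-01": ["weak-ssh-creds"],   # exploitable, pivots to db tier
        "db-01": [],                    # nothing found
    }
    return topology.get(host, [])

def exploit(host, vuln):
    """Pretend exploitation: returns newly reachable hosts after a pivot."""
    pivots = {
        ("dmz-web", "cve-2017-5638"): ["app-01"],
        ("app-01", "weak-ssh-creds"): ["db-01"],
    }
    return pivots.get((host, vuln), [])

def simulate(entry_point):
    """Scan/exploit/repeat until no new host can be 'owned'."""
    owned, frontier = set(), [entry_point]
    while frontier:
        host = frontier.pop()
        if host in owned:
            continue
        owned.add(host)
        for vuln in scan(host):
            frontier.extend(exploit(host, vuln))
    return owned

print(sorted(simulate("dmz-web")))   # → ['app-01', 'db-01', 'dmz-web']
```

The point is that the whole cycle is mechanical: given a scanner and an exploit library, the loop needs no human judgment, which is exactly why it can be automated.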

But, you may argue, not everyone will buy and deploy those tools, so there’s still room for the service providers selling basic pentesting. Well…no! BAS will not be offered only as something you can buy and deploy on your environment. It will also, like all the other security tools, be offered as SaaS. With that, you don’t need to buy and deploy it anymore, you can “rent it” for a single exercise. This is simpler than hiring pentesters, and provides better results (again, I’m starting to sound repetitive, but excluding the really great pentests…). So, why would you hire people to do it?


In the future, your options for testing your security will be vulnerability scanning, BAS or red teaming. Each has its specific objectives, advantages and disadvantages, but there’s no need for people running basic pentests anymore.

If you currently use those simple pentests, do you see your organization eventually moving to this new scenario? If not, I’d love to know why!

 

The post BAS and Red Teams Will Kill The Pentest appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2EtszTs
via IFTTT

Tuesday, January 30, 2018

From my Gartner Blog - The “working with an MSSP” Tome Is Here

As Anton just posted, the new version of the famous “How to Work With an MSSP to Improve Security” has just been published. I’m very happy to become a co-author (together with Anton and Mike Wonham) on this document, as it is one of the documents I most frequently refer clients to during inquiry calls. After all, it’s very common to start a call about SIEM, UEBA, NTA or EDR and end it talking about MSS, after the client realizes that using those tools requires people – a lot of people – on their side.

Among lots of exciting new content (this is indeed a looooong document :-)), there is a new guidance framework for those looking for (and eventually hiring) an MSSP.

You’ll notice that we added “joint content development” as part of the Operating phase. This is something we also added to the recently updated Use Cases document. After all, there’s no reason to believe the MSSP knows everything you want them to detect for you; so, how do you tell them that? If you hired an MSSP, do you know if you still have people on your side capable of working with them to develop content?

There is also an important reminder for organizations expecting to have the entire security monitoring process managed by the service provider:

“When customers perform triage, they will often find cases of false positives. Many organizations don’t report these back to the MSSP, only to complain later that they keep receiving the same false-positive alerts repeatedly! Although the MSSP is responsible for tuning the detection systems it manages, such tuning typically requires feedback from the customer. This feedback goes beyond a simple statement like, “This alert is a false positive.” Adding context about why the alert is a false positive will allow the MSSP to perform the appropriate tuning. It will also avoid cases where entire classes of alerts are disabled due to uncertainty around what type of activity is causing them”

I’ve had countless conversations with organizations complaining about the false positives sent by the MSSP. But it’s impressive how many of them are not prepared to report those events back to the provider in a way that would allow it to tune its systems and avoid similar occurrences in the future. This is a recurring theme in this document: you MUST WORK WITH THE MSSP, not expect them to figure everything out alone.
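For illustration, a false-positive report that carries enough context for tuning might look like the structure below. Every field name here is hypothetical, not any MSSP’s actual schema:

```python
# Hypothetical example of a false-positive report that gives the MSSP the
# "why" behind the verdict, instead of a bare "this is an FP".
fp_report = {
    "alert_id": "MSSP-2018-000123",
    "verdict": "false_positive",
    # The context: what the flagged activity actually was.
    "explanation": "Traffic to 203.0.113.20 is our backup replication job",
    "evidence": {
        "source_host": "backup-01",
        "schedule": "daily 02:00 UTC",
        "owner": "infrastructure team",
    },
    # What tuning we'd accept, so whole alert classes are not just disabled.
    "suggested_tuning": "suppress this destination for backup-01 only",
}

print(fp_report["suggested_tuning"])
```

With this level of detail, the provider can suppress the one noisy combination instead of disabling an entire class of alerts out of uncertainty.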

We have this and far more in the doc; please read it and don’t forget to provide feedback: http://surveys.gartner.com/s/gtppaperfeedback

 

 

The post The “working with an MSSP” Tome Is Here appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2EmY5iN
via IFTTT

Wednesday, January 17, 2018

From my Gartner Blog - Security Monitoring Use Cases, the UPDATE!

Posting about updated documents is often boring, but this time I’m talking about my favorite Gartner document, as usual co-authored with Anton: “How to Develop and Maintain Security Monitoring Use Cases”!

This document describes an approach to identify, prioritize, implement and manage security monitoring use cases. Of course, it has a lot on SIEM, as that’s usually the tool chosen to implement those use cases, but we revised it to ensure we also cover technologies such as UEBA, EDR and even SOAR. If we consider that detection can often be implemented as a multi-stage process, that’s a natural evolution!

The major changes are:

  • Revamping the main graphic of the document to better illustrate how the process works (below)
  • Putting more emphasis on some of the artifacts generated by the process, such as use case lists
  • Evolving the language about doing use case development as software development to say “doing it as AGILE software development”
  • Reinforcing the types of use cases that are usually managed by this process: threat, controls and asset oriented
  • Including tips for use case management when working with an MSSP (we are writing more about this in our upcoming MSSP doc, BTW)

The summary diagram for the framework can be seen below.

Again, we are always looking for feedback on our research. If you have anything to say about this document, please use this page to do it.

The post Security Monitoring Use Cases, the UPDATE! appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2mOTUoo
via IFTTT

Friday, January 12, 2018

From my Gartner Blog - Automation – Why Only Now?

As we ramp up our research on SOAR and start looking at some interesting tools for automated security testing, something crossed my mind: Why are we only seeing security operations automation and security testing automation technologies come to market now? I mean, automating workflows is not new technology, so why are these specific workflows only being automated now?

I believe the answer includes multiple reasons, but I see two as key:

The need: Of course it would have been great to automate SOC tasks back in 2005, but at that time environments were more stable and the volume of threat activity was lower. Because virtualization was not yet everywhere, the number of running systems was also smaller. The slower pace of change, the smaller technology environments and a less aggressive threat landscape were still compatible with mostly manual security operations. With cloud, DevOps, crazy state-sponsored threats and very short breach-to-impact scenarios like ransomware, it is imperative for organizations to be able to adapt and react faster. At the required scale, that’s only possible with more automation.

The tools: Yes, the ability to write an automated workflow was already there, but integration was still painful! Only some of the different security (or even general IT) tools offered APIs, and most of the time they were not standardized and not as simple as the abundant REST APIs we see today. In the past, if you wanted to fully automate a SOC playbook, you would probably need to include all required capabilities in a single tool, without the option to orchestrate the work of multiple independent solutions. So, it’s not that automation tools weren’t available; the broad menu of tools that could be centrally orchestrated didn’t exist.
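As a sketch of that shift: with REST, one small generic helper can talk to almost any modern tool. The endpoints and the `/api/v1/` path below are hypothetical, not any vendor’s real API:

```python
# Sketch of why ubiquitous REST APIs changed the game: one generic HTTP
# pattern now covers very different security tools. All endpoints here
# are hypothetical stand-ins.
import json
import urllib.request

def build_request(base_url, action, payload, api_key):
    """Build an authenticated JSON POST for a tool's REST API. The same few
    lines work whether the tool is an EDR, a firewall manager, or a sandbox."""
    return urllib.request.Request(
        f"{base_url}/api/v1/{action}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

# A playbook step then becomes just a sequence of such calls, e.g.:
#   urllib.request.urlopen(build_request(
#       "https://edr.example", "hosts/isolate", {"host": "ws-042"}, KEY))
#   urllib.request.urlopen(build_request(
#       "https://fw.example", "rules/block-ip", {"ip": "203.0.113.9"}, KEY))
```

Compare that with the pre-REST era of proprietary SDKs and ad hoc protocols, where every integration was a project of its own.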

 

The increase in need is independent of how the security industry evolves, but I see the second reason in a very positive way. We are constantly bashing the vendor community for releasing new tools based on exaggerated marketing claims, but we should also acknowledge this movement toward making tools friendlier to integration as a positive evolution of the industry. There have been many standards and attempts to create common languages and protocols to integrate tools, but apparently opening them up for integration via REST APIs has provided far more benefits than initiatives like IF-MAP, DXL, CIF, IODEF and IDMEF.

What else do you think is driving this automation trend in security?

 

The post Automation – Why Only Now? appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2mwWa3o
via IFTTT

Tuesday, January 9, 2018

From my Gartner Blog - Threat Simulation – How real does it have to be?

We are starting our research on “Testing Security”. So far we’ve been working with a fairly broad scope, as Anton’s post on the topic explained. One of the things we are looking at is the group of tools that has been called “breach and attack simulation tools”.

Tools that automate exploitation have been around for years; we can mention things like Metasploit, Core Impact, CANVAS and others. Those are tools used by pentesters so they don’t need to rewrite their exploits for each specific condition they find during their test. So what’s different in the new wave of tools?

The idea of using these tools is to have a consistent way to continuously test your controls, from prevention to detection (and even response). They are not focused on making exploitation easier, but on running an entire intrusion scenario, end to end, to check how the controls in the attacked environment react to each step. They go beyond exploitation and include automation of the many steps in the attack chain, including command and control, lateral movement, resource access and exfiltration. They also add a layer of reporting and visualization that allows users to see how each attack is performed and what the tool was (or was not) able to accomplish.

We are just starting to talk to some of the vendors in this space, but I noticed there’s one point they seem to argue about: how real should these tests be? Some of the vendors noticed there is strong resistance from many organizations to running automated exploitation tools in their production environments, so they built their tools to only simulate the more aggressive steps of the attacks. Some of these tools even take the approach of “assuming compromise”, bypassing the exploitation phase and focusing on the later stages of the attack chain.

It is an interesting strategy to provide something acceptable to the more conservative organizations, but there are some limitations that come with that approach. In fact, I see two:

First, many prevention and detection technologies are focused on the exploitation actions. If there are no exploitation actions, they just won’t be tested. So, if the tool replicates a “lateral movement scenario” on an endpoint using an arbitrary fake credential that mimics the outcome of successfully running mimikatz, no tool or approach that looks for signs of that attack technique being used (or prevents it, like this) will be tested. If the organization uses deception breadcrumbs, for example, they wouldn’t be touched, so there’s no way to check whether they would actually be extracted and used during an attack. The same goes for monitoring signs of the exploit, or even preventing it from working using exploitation mitigation technologies. So, the testing scenarios would be, in a certain way, incomplete.

Second, exploitation is not necessarily something that happens only at the beginning of the attack chain. It is often used as one of the first steps to get code running in the target environment, but many exploitation actions can come later in the attack chain, for privilege elevation, lateral movement and resource access. So, assuming that exploitation plays only a small role at the beginning of the attack chain is a very risky approach when you are deciding what needs to be tested across the entire control set.

Looking at these two points in isolation suggests that breach and attack simulation tools should perform real exploitation to properly test the existing controls. But apart from the concerns about disrupting production systems, there are other challenges with incorporating exploits in the automated tests. The vendor or the organization using the system now needs the ability to incorporate new exploits as new vulnerabilities come up, and to check whether each one of them is safe or could damage the ability of the real systems to protect themselves after the test is completed (some exploits disable security controls permanently, so using them during a test could actually reduce the security of the environment). The approach of avoiding exploitation eliminates those concerns.

Since both approaches are valid, it is important for the organization to understand the limitations of the tests and what still needs to be tested manually or through alternative means (such as the good old checklist?). This also brings up another question we should look at during this research: how to integrate the findings of all these testing approaches to provide a unified view of the state of the security controls? That’s something for another post.

The post Threat Simulation – How real does it have to be? appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2CZj4dv
via IFTTT

Monday, December 4, 2017

From my Gartner Blog - Threat Detection Is A Multi-Stage Process

We are currently working on our SOAR research, as Anton has extensively blogged about. SOAR tools have been used to help organizations triage and respond to the deluge of alerts coming from tools such as SIEM and UEBA. Although this is sometimes seen as the early stage of incident response, I’ve been increasingly seeing it as a way to implement “multi-stage threat detection”.

Let’s look at a basic use case for SOAR tools. Before the tool comes into play, there could be a playbook like this:

The SIEM performs basic correlation between a threat intelligence feed and firewall logs, generating an alert for every match (I know, many will argue it’s a bad use case example, but many orgs are actually doing exactly that). The SOC analyst would triage each of those alerts by identifying the internal workstation responsible for the traffic, checking it with an EDR tool, extracting additional indicators related to that network traffic (the binary that initiated the connection request, for example) and submitting them to external validation services or sandboxes. If the result is positive, they would use the EDR tool to kill the process, remove the files from the endpoint and also search for the same indicators on other systems.

With the SOAR tool in place, the organization can automate almost everything performed by the analyst, effectively moving from minutes to seconds to execute all the actions above. The tool starts the playbook when an alert from the SIEM arrives, integrating with the EDR tool and the validation services. We could expand it even further to make it add the new identified indicators to blacklists and firewall rules. Of course, corrective measures would be executed only after the analyst authorizes them.
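A hedged sketch of that automated playbook, with stub classes standing in for the real EDR and validation-service integrations (all names and data are hypothetical):

```python
# Sketch of the playbook above as SOAR-style automation. StubEDR and StubTI
# stand in for real vendor integrations; a real SOAR tool calls their APIs.

def triage_alert(alert, edr, ti_service):
    """Automate the analyst's enrichment steps; corrective actions are only
    *proposed*, since the analyst still has to authorize them."""
    host = alert["internal_ip"]
    # 1. Ask the EDR which process/binary initiated the flagged connection.
    proc = edr.process_for_connection(host, alert["dest_ip"])
    # 2. Validate the binary's hash with an external reputation service.
    if ti_service.lookup(proc["sha256"]) != "malicious":
        return {"action": "close", "reason": "hash not confirmed malicious"}
    # 3. Queue corrective actions, pending analyst approval.
    return {"action": "await_approval",
            "proposed": [("kill_process", host, proc["pid"]),
                         ("sweep_ioc", proc["sha256"])]}  # fleet-wide search

class StubEDR:
    def process_for_connection(self, host, dest_ip):
        return {"pid": 4242, "sha256": "ab" * 32}

class StubTI:
    def lookup(self, sha256):
        return "malicious"

alert = {"internal_ip": "10.0.0.7", "dest_ip": "203.0.113.9"}
result = triage_alert(alert, StubEDR(), StubTI())
print(result["action"])   # → await_approval
```

The minutes-to-seconds gain comes from steps 1 and 2 running with no human in the loop; only the final corrective step waits for a person.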

Now, let’s think about an alternative, hypothetical world:

Your SIEM is immensely powerful and fast. So, you send all the detailed endpoint telemetry collected by the EDR tool to it. You also download all the databases of the external validation services into it. Then, you build a monster correlation rule that crosses the TI feed and the EDR data (linking connection requests to processes and binaries) against that huge database of known malicious processes and binaries. Now you’re doing almost everything from the playbook above in the SIEM, in just one shot (ok, I’m cheating, the sandbox validation still needs a separate step…although the SIEM could have sandbox capabilities embedded; it is immensely powerful, remember?). No need for the playbook, or the SOAR tool, at all!

Unfortunately, there’s no such thing as a SIEM like that. That’s why we end up having this single detection use case implemented in multiple steps. If you think about it this way, you’ll see that the SIEM alert is not meant to be a final detection, subject to “false positives”. It’s just the first part of a multi-stage process, with each stage looking at a smaller universe of “threat candidates”.

Thinking about detection as a multi-stage process unlocks interesting use cases that couldn’t be implemented as an “atomic decision model”. Any interesting detection use case that would be discarded because of a high false positive rate could be a good fit for a multi-stage process.

But multi-stage detection is not effective if done manually. Score-based correlation, as done by UEBA and some SIEM tools, can help link multiple atomic detection items, but those situations where you need to query external systems (such as sandboxes), external services or big reference sets are still problematic. But SOAR comes to the rescue! Now you can have an automated pipeline that takes those initial detection cases (or even entities that hit a certain score threshold) and puts them through whatever validation and ad hoc queries you might need to turn them into “confirmed detections”: fully contextualized alerts.

Most of us would think of advanced automated response use cases, dynamically patching or removing things from the network, as the main way to get value from SOAR. Not necessarily. Making detection smarter is probably where most organizations will find the value of those tools.

 

 

 

 

The post Threat Detection Is A Multi-Stage Process appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2BHFT1K
via IFTTT

Tuesday, November 28, 2017

From my Gartner Blog - Machine Learning or AI?

We may sound pedantic when pointing out that we should be talking about Machine Learning, and not AI, for security threat detection use cases. But there is a strong reason why: to deflate the hype around it. Let me quickly mention a real-world situation where the indiscriminate use of those terms caused confusion and frustration:

One of our clients was complaining about the “real Machine Learning” capabilities of a UEBA solution. According to them, “it was just rule based”. What do you mean by rule based? Well, for them, having to tell the tool that it needs to detect behavior deviations in the authentication events of each individual user, based on the location (source IP) and the time of the event, is not really ML, but rule-based detection. I would say it’s both.

Yes, it really is a rule, as you have to define what type of anomaly (down to the data field – or “feature” – level) it should be looking for. You need to know enough about the malicious activity you are looking for to specify the type of behavior anomaly it will present.

But within this “rule”, how do you define what “an anomaly” is? That’s where the Machine Learning comes in. The tool has to automatically profile each individual user’s authentication behavior, focusing on the data fields specified from the authentication events. You just can’t do that with, let’s say, a “standard SIEM rule”. There is real Machine Learning being used there.
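As a toy illustration of that rule/learning split (far simpler than real UEBA models, which build statistical or probabilistic baselines): the “rule” is the decision to flag per-user deviations on source IP and hour; the “learned” part is the baseline built automatically from observed events:

```python
# Toy sketch only: the "rule" is which features to watch (source IP, hour);
# the automatically built per-user baseline is the learned part. Real UEBA
# uses statistical models, not plain set membership.
from collections import defaultdict

class AuthBaseline:
    def __init__(self):
        # One profile per user, built automatically from observed logins.
        self.seen = defaultdict(lambda: {"ips": set(), "hours": set()})

    def learn(self, user, src_ip, hour):
        profile = self.seen[user]
        profile["ips"].add(src_ip)
        profile["hours"].add(hour)

    def is_anomalous(self, user, src_ip, hour):
        profile = self.seen[user]
        # Deviation on either watched feature triggers the "rule".
        return src_ip not in profile["ips"] or hour not in profile["hours"]

baseline = AuthBaseline()
for hour in (9, 10, 17):                        # alice's normal office logins
    baseline.learn("alice", "10.0.0.5", hour)

print(baseline.is_anomalous("alice", "10.0.0.5", 9))      # → False
print(baseline.is_anomalous("alice", "198.51.100.1", 3))  # → True
```

Notice that nobody hand-writes alice’s profile; it falls out of the data. What a person still had to decide is which features matter, and that is exactly the “rule” the client was complaining about.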

But what about AI – Artificial Intelligence? ML is a small subset of the field of knowledge known as AI. The problem is that AI encompasses much more than just ML, and that’s what that client was expecting when they complained about the “rules”. We still need people to figure out those rules and write the ML models that implement them. There’s no machine capable of doing that – yet.

There have been some attempts based on “deep learning” (another piece of the AI domain), but nothing concrete exists. You can always point ML systems at all the data collected from your environment so they can point out anomalies, but you’ll soon find out there are far more anomalies unrelated to security incidents than some pixie-dust vendors would lead you to believe. Broad network-based anomaly detection has been around for years, but it hasn’t been able to deliver efficient threat detection without a lot of human work to figure out which anomalies are worth investigating.

Some UEBA vendors have decent ML capabilities, but they are not good at defining good rules/models/use cases to apply them to. So you may end up with good ML technology but mediocre threat detection capabilities if you don’t have good people writing the detection content. For those going down the “build your own” path, this is even more challenging, as you need the magical combination of people who understand threats and the type of anomalies they would create, and people who understand ML well enough to write the content to find them.

Isn’t that just like SIEM? Indeed, it is. People bought SIEM in the past expecting to avoid the IDS signature development problem. Now they are repeating the same mistake buying UEBA to avoid the SIEM rules development problem. Do you think it’s going to work this time?

 

 

 

The post Machine Learning or AI? appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2BlLxpn
via IFTTT