Wednesday, January 17, 2018

From my Gartner Blog - Security Monitoring Use Cases, the UPDATE!

Posting about updated documents is often boring, but this time I’m talking about my favorite Gartner document, co-authored, as usual, with Anton: “How to Develop and Maintain Security Monitoring Use Cases”!

This document describes an approach to identify, prioritize, implement and manage security monitoring use cases. Of course, it has a lot on SIEM, as that’s usually the tool chosen to implement those use cases, but we revised it to ensure we are also covering technologies such as UEBA, EDR and even SOAR. If we consider that detection can often be implemented as a multi-stage process, that’s a natural evolution!

The major changes are:

  • Revamping the main graphic of the document to better illustrate how the process works (below)
  • Putting more emphasis on some of the artifacts generated by the process, such as use case lists
  • Evolving the language about doing use case development as software development to say “doing it as AGILE software development”
  • Reinforcing the types of use cases that are usually managed by this process: threat-, control- and asset-oriented
  • Including tips for use case management when working with an MSSP (we are writing more about this in our upcoming MSSP doc, BTW)

The summary diagram for the framework can be seen below:

Again, we are always looking for feedback on our research. If you have anything to say about this document, please use this page to do it.


Friday, January 12, 2018

From my Gartner Blog - Automation – Why Only Now?

As we ramp up our research on SOAR and start looking at some interesting tools for automated security testing, something crossed my mind: Why are we only seeing security operations automation and security testing automation technologies come to market now? I mean, automating workflows is not new technology, so why are these specific workflows only being automated now?

I believe the answer includes multiple reasons, but I see two as key:

The need: Of course it would have been great to automate SOC tasks back in 2005, but at that time environments were more stable and the volume of threat activity was lower. Because virtualization was not yet everywhere, the number of systems running was also smaller. The slower pace of change, the smaller technology environments and a less aggressive threat landscape were still compatible with mostly manual security operations. With cloud, DevOps, crazy state-sponsored threats and very short breach-to-impact scenarios like ransomware, it is imperative for organizations to be able to adapt and react faster. At the required scale, that’s only possible with more automation.

The tools: Yes, the ability to write an automated workflow was already there, but integration was still painful! Only some of the security (or even general IT) tools offered APIs, and most of the time they were not standardized and not as simple as the abundant REST APIs we see today. In the past, if you wanted to fully automate a SOC playbook you would probably need to include all the required capabilities in a single tool, without the option to orchestrate the work of multiple independent solutions. So, it is not that automation tools were not available; the broad menu of tools that could be centrally orchestrated didn’t exist.
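
To make the contrast concrete, here’s a minimal sketch of the kind of glue code that today’s REST APIs make almost trivial. The endpoint, token and “isolate” action below are hypothetical placeholders, not the API of any real product:

```python
import requests

# Hypothetical EDR REST endpoint and token -- placeholders for illustration only.
EDR_API = "https://edr.example.com/api/v1"
API_TOKEN = "replace-with-a-real-token"


def isolate_host(hostname: str) -> bool:
    """Ask the (hypothetical) EDR to network-isolate a host as one step of a playbook."""
    resp = requests.post(
        f"{EDR_API}/hosts/{hostname}/isolate",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    return resp.status_code == 200
```

A decade ago, that same step would likely have meant a proprietary SDK, a vendor-specific protocol or no integration point at all.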

 

The increase in need is independent of how the security industry evolves, but I see the second reason in a very positive way. We are constantly bashing the vendor community for releasing new tools based on exaggerated marketing claims, but we should also acknowledge this movement of making tools friendlier to integration as a positive evolution of the industry. There have been many standards and attempts to create common languages and protocols to integrate tools, but apparently opening them up for integration via REST APIs has provided far more benefits than initiatives like IF-MAP, DXL, CIF, IODEF and IDMEF.

What else do you think is driving this automation trend in security?

 


Tuesday, January 9, 2018

From my Gartner Blog - Threat Simulation – How real does it have to be?

We are starting our research on “Testing Security”. So far we’ve been working with a fairly broad scope, as Anton’s post on the topic explained. One of the things we are looking at is the group of tools that has been called “breach and attack simulation tools”.

Tools that automate exploitation have been around for years; we can mention things like Metasploit, Core Impact, CANVAS and others. Those are tools used by pentesters so they don’t need to rewrite their exploits for each specific condition they find during their test. So what’s different in the new wave of tools?

The idea of using these tools is to have a consistent way to continuously test your controls, from prevention to detection (and even response). They are not focused on making exploitation easier, but on running an entire intrusion scenario, end to end, to check how the controls in the attacked environment react to each step. They go beyond exploitation and include automation of the many steps in the attack chain, including command and control, lateral movement, resource access and exfiltration. They also add a layer of reporting and visualization that allows the users to see how each attack is performed and what the tool was (or was not) able to accomplish.
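
As a rough illustration of that “entire scenario, end to end” idea, here is a minimal sketch of how such a tool could model an attack chain as ordered, reportable steps. The structure and names are my own assumptions, not how any particular vendor implements it:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class AttackStep:
    name: str                    # e.g. "command and control", "lateral movement", "exfiltration"
    execute: Callable[[], bool]  # returns True if the simulated action went through


def run_scenario(steps: List[AttackStep]) -> List[str]:
    """Run each step of a simulated attack chain and record what was blocked."""
    report = []
    for step in steps:
        outcome = "succeeded (not prevented)" if step.execute() else "blocked"
        report.append(f"{step.name}: {outcome}")
    return report
```

The reporting and visualization layer these vendors add is essentially a richer version of that report, mapped against the controls that should have prevented or detected each step.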

We are just starting to talk to some of the vendors in this space, but I noticed there’s one point they seem to argue about: how real should these tests be? Some of the vendors in this space noticed there is strong resistance from many organizations to running automated exploitation tools in their production environments, so they built their tools to only simulate the more aggressive steps of the attacks. Some of these tools even take the approach of “assuming compromise”, bypassing the exploitation phase and focusing on the later stages of the attack chain.

It is an interesting strategy to provide something acceptable to the more conservative organizations, but there are some limitations that come with that approach. In fact, I see two:

First, many prevention and detection technologies are focused on the exploitation actions. If there are no exploitation actions, they just won’t be tested. So, if the tool replicates a “lateral movement scenario” on an endpoint using an arbitrary fake credential that mimics the outcome of successfully running mimikatz, nothing that looks for (or prevents) signs of that attack technique being used (like this) will be tested. If the organization uses deception breadcrumbs, for example, they wouldn’t be touched, so there’s no way to check if they would actually be extracted and used during an attack. The same goes for monitoring for signs of the exploits, or even preventing them from working with exploit mitigation technologies. So, the testing scenarios would be, in a certain way, incomplete.

Second, exploitation is not necessarily something that happens only at the beginning of the attack chain. It is often one of the first steps, used to get code running in the target environment, but many exploitation actions can come later in the attack chain, for privilege escalation, lateral movement and resource access. So, assuming that exploitation has only a small role, at the beginning of the attack chain, is a very risky approach when you are deciding what needs to be tested across the entire control set.

Looking at these two points in isolation suggests that breach and attack simulation tools should perform real exploitation to properly test the existing controls. But apart from the concerns about disrupting production systems, there are other challenges with incorporating exploits into the automated tests. The vendor or the organization using these systems now needs the ability to incorporate new exploits as new vulnerabilities come up, and to check whether each one of those is safe or whether it could damage the ability of the real systems to protect themselves after the test is completed (some exploits disable security controls permanently, so using them during a test could actually reduce the security of the environment). The approach of avoiding exploitation eliminates those concerns.

If both approaches are valid, it is important for the organization to understand the limitations of the tests and what still needs to be tested manually or through alternative means (such as the good old checklist?). This also brings up another question we should look at during this research: how to integrate the findings of all these testing approaches to provide a single view of the state of the security controls? That’s something for another post.


Monday, December 4, 2017

From my Gartner Blog - Threat Detection Is A Multi-Stage Process

We are currently working on our SOAR research, as Anton has extensively blogged about. SOAR tools have been used to help organizations triage and respond to the deluge of alerts coming from tools such as SIEM and UEBA. Although this is sometimes seen as the earlier stages of incident response, I’ve been increasingly seeing it as a way to implement “multi-stage threat detection”.

Let’s look at a basic use case for SOAR tools. Before the tool comes into play, there could be a playbook like this:

The SIEM performs basic correlation between a threat intelligence feed and firewall logs, generating an alert for every match (I know, many will argue it’s a bad use case example, but many orgs are actually doing it exactly like that). The SOC analyst would triage each of those events by identifying the internal workstation responsible for that traffic, checking it with an EDR tool, extracting some additional indicators related to that network traffic (the binary file that initiated the connection request, for example) and submitting them to external validation services or sandboxes. If the result is positive, they would use the EDR tool to kill the process, remove the files from the endpoint and also search for the existence of the same indicators on other systems.

With the SOAR tool in place, the organization can automate almost everything performed by the analyst, effectively moving from minutes to seconds to execute all the actions above. The tool starts the playbook when an alert from the SIEM arrives, integrating with the EDR tool and the validation services. We could expand it even further to make it add the newly identified indicators to blacklists and firewall rules. Of course, corrective measures would be executed only after the analyst authorizes them.
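
A minimal sketch of what that automated playbook could look like follows. The integrations are passed in as callables because every SOAR tool wires them up differently; none of the names here correspond to a real product API, and the alert field names are assumptions:

```python
from typing import Callable, List


def ti_match_playbook(
    alert: dict,
    find_host: Callable[[str], str],                  # source IP -> workstation (via EDR/asset data)
    get_indicators: Callable[[str, str], List[str]],  # host + destination IP -> related indicators
    is_malicious: Callable[[str], bool],              # reputation/sandbox verdict for one indicator
    approve: Callable[[str, List[str]], bool],        # analyst authorization for corrective actions
    contain: Callable[[str, List[str]], None],        # kill process, remove files, hunt other systems
) -> None:
    """Automated version of the triage playbook described above (illustrative only)."""
    host = find_host(alert["source_ip"])
    indicators = get_indicators(host, alert["dest_ip"])
    if any(is_malicious(i) for i in indicators):
        # Corrective measures are still gated on the analyst's authorization
        if approve(host, indicators):
            contain(host, indicators)
```

The point is not the specific calls, but that each of them used to be a manual, minutes-long task for the analyst.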

Now, let’s think about an alternative, hypothetical world:

Your SIEM is immensely powerful and fast. So, you send it all the detailed endpoint telemetry collected by the EDR tool. You also download all the databases of the external validation services into it. Then, you build a monster correlation rule that crosses the TI feed and the EDR data (linking connection requests to processes and binaries) with that huge database of known malicious processes and binaries. Now you’re doing almost everything from the playbook above on the SIEM, in just one shot (ok, I’m cheating, the sandbox validation still needs a separate step… although the SIEM could have sandbox capabilities embedded; it is immensely powerful, remember?). No need for the playbook, or the SOAR tool, at all!
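
If it helps to see why nobody does this in one shot, that “monster rule” is essentially a gigantic join across data sets of wildly different sizes and refresh rates. A toy pandas sketch of that single correlation, with made-up column names:

```python
import pandas as pd

# Empty frames standing in for the real (and enormous) data sets; columns are made up.
ti_feed = pd.DataFrame(columns=["bad_ip"])
edr_telemetry = pd.DataFrame(columns=["host", "process_hash", "dest_ip"])
known_bad_binaries = pd.DataFrame(columns=["process_hash"])

# The hypothetical one-shot correlation: connections to TI-listed IPs, joined with
# the database of known malicious binaries behind those connections.
hits = (
    edr_telemetry
    .merge(ti_feed, left_on="dest_ip", right_on="bad_ip")
    .merge(known_bad_binaries, on="process_hash")
)
```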

Unfortunately, there’s no such thing as a SIEM like that. That’s why we end up having this single detection use case implemented in multiple steps. If you think about it this way, you’ll see that the SIEM alert is not meant to be a final detection, subject to “false positives”. It’s just the first part of a multi-stage process, each stage looking at a smaller universe of “threat candidates”.

Thinking about detection as a multi-stage process unlocks interesting use cases that couldn’t be implemented as an “atomic decision model”. Any interesting detection use case that would otherwise be discarded because of a high false positive rate could be a good fit for a multi-stage process.

But multi-stage detection is not effective if done manually. Score-based correlation, as done by UEBA and some SIEM tools, can help link multiple atomic detection items, but those situations where you need to query external systems (such as sandboxes), external services or big reference sets are still problematic. But SOAR comes to the rescue! Now you can have an automated pipeline that takes those initial detection cases (or even entities that hit a certain score threshold) and puts them through whatever validation and ad hoc queries you might need to turn them into “confirmed detections”: fully contextualized alerts.

Most of us would think of advanced automated response use cases, dynamically patching or removing things from the network, as the main way to get value from SOAR. Not necessarily. Making detection smarter is probably where most organizations will find the value of those tools.

 

 

 

 


Tuesday, November 28, 2017

From my Gartner Blog - Machine Learning or AI?

We may sound pedantic when pointing out that we should be talking about Machine Learning, and not AI, for security threat detection use cases. But there is a strong reason why: to deflate the hype around it. Let me quickly mention a real-world situation where the indiscriminate use of those terms caused confusion and frustration:

One of our clients was complaining about the “real Machine Learning” capabilities of a UEBA solution. According to them, “it was just rule based”. What do you mean by rule based? Well, for them, having to tell the tool that it needs to detect behavior deviations in the authentication events of each individual user, based on the location (source IP) and on the time of the event, is not really ML, but rule-based detection. I would say it’s both.

Yes, it is really a rule, as you have to define what type of anomaly (down to the data field – or ‘feature’ – level) it should be looking for. So, you need to know enough about the malicious activity you are looking for to specify the type of behavior anomaly it will present.

But within this “rule”, how do you define what “an anomaly” is? That’s where the Machine Learning comes in. The tools have to automatically profile each individual user’s authentication behavior, focusing on the data fields specified from the authentication events. You just can’t do that with, let’s say, a “standard SIEM rule”. There is real Machine Learning being used there.
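
To make that distinction concrete, here’s a minimal sketch of what “profiling each user’s authentication behavior” can mean, using a deliberately simplistic frequency baseline. Real UEBA models are far more sophisticated, and the event field names are assumptions:

```python
from collections import defaultdict

# Count how often each (user, source IP, hour-of-day) combination has been seen.
baseline = defaultdict(int)


def learn(event: dict) -> None:
    """Feed a historical authentication event into the per-user profile."""
    baseline[(event["user"], event["src_ip"], event["hour"])] += 1


def is_anomalous(event: dict, min_seen: int = 3) -> bool:
    """Flag an authentication pattern this user has rarely (or never) shown before."""
    key = (event["user"], event["src_ip"], event["hour"])
    return baseline.get(key, 0) < min_seen
```

The “rule” part is choosing which fields to profile; the ML part is building and maintaining the baseline itself, which here is a toy counter but in real products involves proper statistical or machine learning models.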

But what about AI – Artificial Intelligence? ML is a small subset of a field of knowledge known as AI. But the problem is that AI covers much more than just ML. And that’s what that client was expecting when they complained about the “rules”. We still need people to figure out those rules and write the ML models to implement them. There’s no machine capable of doing that – yet.

There have been some attempts based on “deep learning” (another piece of the AI domain), but nothing concrete exists. You can always point an ML system at all the data collected from your environment so it can point out anomalies, but you’ll soon find out there are far more anomalies unrelated to security incidents than some pixie dust vendors would lead you to believe. Broad network-based anomaly detection has been around for years, but it hasn’t been able to deliver efficient threat detection without a lot of human work to figure out which anomalies are worth investigating.

Some UEBA vendors have decent ML capabilities, but they are not as good at defining the rules/models/use cases to apply them to. So, you may end up with good ML technology but mediocre threat detection capabilities if you don’t have good people writing the detection content. For those going down the “build your own” path, this is even more challenging, as you need the magical combination of people who understand threats and the types of anomalies they would create, and people who understand ML well enough to write the content to find them.

Isn’t that just like SIEM? Indeed, it is. People bought SIEM in the past expecting to avoid the IDS signature development problem. Now they are repeating the same mistake buying UEBA to avoid the SIEM rules development problem. Do you think it’s going to work this time?

 

 

 


Sunday, October 15, 2017

From my Gartner Blog - Our SIEM Assessment paper update is out!

The results of our “summer of SIEM” are starting to come out; our assessment document on SIEM (basically, a “what” and “why” paper that sits beside our big “how” doc on the same topic) has been updated. It has some quite cool new stuff aligned with some of our most recent research on security analytics, UEBA, SOC and other things that often touch or are directly related to SIEM.

Some cool bits from the doc:

“Organizations considering SIEM should realize that using an SIEM tool is not about procuring an appliance or software, but about tying an SIEM product to an organization’s security operations. Such an operation may be a distinct SOC or simply a team (for smaller organizations, a team of one) involved with using the tool. Purchasing the tool will also be affected by the structure and size of an organization security operation: While some SIEM tools excel in a full enterprise SOC, others enable a smaller team to do security monitoring better.”

“While some question SIEM threat detection value, Gartner views SIEM as the best compromise technology for a broad set of threat detection use cases. Definitely, EDR works better for detecting threats on the endpoints, while NTA promises superior detection performance on network traffic metadata. However, network- and endpoint-heavy approaches (compared to logs) suffer from major weaknesses and are inadequate unless you also do log monitoring. For example, many organizations dislike endpoint agents (hence making EDR unpalatable), and growing use of Secure Sockets Layer and other network encryption generally ruins Layer 7 traffic analysis.”

“UEBA vendors have been frequently mentioned as interesting alternatives due to their different license models. While most SIEM vendors base their price on data volumes (such as by events per second or gigabytes of data indexed), these solutions focus on the number of users being monitored irrespective of the amount of data processed. This model has been seen as a more attractive model for organizations trying to expand their data collection without necessarily changing the number of users currently being monitored. (Note that UEBA vendors offer user-based pricing even for tools addressing traditional SIEM use cases.) UEBA products have also been offered as solutions with lower content development and tuning requirements due to their promised use of analytics instead of expert-written rules. This makes them attractive to organizations looking for an SIEM tool but concerned with the resource requirements associated with its operation. The delivery of that promise will, however, strongly depend on the use cases to be deployed.”

As usual, please don’t forget to provide us feedback about the papers!

 

 

Next wave of research: SOAR, MSS and Security Monitoring use cases! Here we go :-)

 


From my Gartner Blog - Speaking at the Gartner Security Summit Dubai

I have a few sessions at the Gartner Security and Risk Management Summit in Dubai, October 16th and 17th. This is the wrap-up of the Security Summit season for me; I’ll be presenting some content that I already presented in DC and in São Paulo earlier this year. I also have a session on SOC that was originally presented by Anton at the other events. It’s my first time in Dubai and I’m excited to see the different perspectives the audience there may have on the problems we cover. My sessions there:

Workshop: Developing, Implementing and Optimizing Security Monitoring Use Cases
Mon, 16 Oct 2017 11:00 – 12:30
An extra reason to be excited about the use cases workshop: we’ll be updating our paper from 2016 on that topic! I’m expecting to get the attendees’ impressions of our framework and potential points to improve or expand.

Endpoint Detection and Response (EDR) Tool Architecture and Operations Practices
Mon, 16 Oct 2017 14:30 – 15:15

Industry Networking: FSI Sector: Responding to Changes in the Threat Landscape and the Risk Environment
Mon, 16 Oct 2017 16:30 – 17:30

How to Build and Operate a Modern SOC
Tue, 17 Oct 2017 10:30 – 11:15

Magic Quadrant: Security Information and Event Management
Tue, 17 Oct 2017 12:40 – 13:00
