Monday, December 4, 2017

From my Gartner Blog - Threat Detection Is A Multi-Stage Process

We are currently working on our SOAR research, as Anton has extensively blogged about. SOAR tools have been used to help organizations triage and respond to the deluge of alerts coming from tools such as SIEM and UEBA. Although this is sometimes seen as just the early stages of incident response, I’ve been increasingly seeing it as a way to implement “multi-stage threat detection”.

Let’s look at a basic use case of SOAR tools. Before the tool comes into play, the playbook could look like this:

The SIEM performs basic correlation between a threat intelligence feed and firewall logs, generating an alert for every match (I know, many will argue it’s a bad use case example, but many orgs are actually doing it exactly like that). The SOC analyst would triage each of those events by identifying the internal workstation responsible for that traffic, checking it with an EDR tool, extracting some additional indicators related to that network traffic (the binary file that initiated the connection request, for example) and submitting them to external validation services or sandboxes. If the result is positive, they would use the EDR tool to kill the process, remove the files from the endpoint and also search for the existence of the same indicators on other systems.

With the SOAR tool in place, the organization can automate almost everything performed by the analyst, effectively moving from minutes to seconds to execute all the actions above. The tool starts the playbook when an alert from the SIEM arrives, integrating with the EDR tool and the validation services. We could expand it even further to make it add the new identified indicators to blacklists and firewall rules. Of course, corrective measures would be executed only after the analyst authorizes them.
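As a rough illustration, the automated version of that playbook could be sketched like this. The EDR and sandbox integrations below are hypothetical stubs standing in for real product APIs, not any vendor’s actual interface:

```python
# Sketch of the automated triage playbook. The enrichment calls are
# illustrative stubs; a real SOAR tool would call actual EDR/sandbox APIs.

def edr_lookup(workstation, dest_ip):
    """Stub: ask the EDR tool which process/binary initiated the traffic."""
    return {"process": "update.exe", "sha256": "ab12cd"}

def sandbox_verdict(sha256):
    """Stub: submit the indicator to an external validation service."""
    return "malicious"

def run_playbook(alert, authorize=lambda action: False):
    """Triage a SIEM alert: enrich, validate, and (if authorized) respond."""
    indicators = edr_lookup(alert["workstation"], alert["dest_ip"])
    verdict = sandbox_verdict(indicators["sha256"])
    actions = []
    if verdict == "malicious":
        # Corrective measures still wait for analyst authorization.
        for action in ("kill_process", "remove_file", "sweep_other_hosts"):
            if authorize(action):
                actions.append(action)
    return {"verdict": verdict, "actions": actions}

result = run_playbook(
    {"workstation": "ws-042", "dest_ip": "203.0.113.7"},
    authorize=lambda action: True,  # the analyst clicked "approve"
)
print(result["verdict"], result["actions"])
```

Note that the enrichment and validation steps run automatically, while the destructive actions remain gated behind the `authorize` callback, mirroring the approval step described above.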

Now, let’s think about an alternative, hypothetical world:

Your SIEM is immensely powerful and fast. So, you send all the detailed endpoint telemetry collected by the EDR tool to it. You also download all the databases of the external validation services into it. Then, you build a monster correlation rule that crosses the TI feed and the EDR data (linking connection requests to processes and binaries) against that huge database of known malicious processes and binaries. Now you’re doing almost everything from that playbook above on the SIEM, in just one shot (ok, I’m cheating, the sandbox validation still needs a separate step…although the SIEM could have sandbox capabilities embedded; it is immensely powerful, remember?). No need for the playbook, or the SOAR tool, at all!

Unfortunately, there’s no such thing as a SIEM like that. That’s why we end up having this single detection use case implemented in multiple steps. If you think about it this way, you’ll see that the SIEM alert is not meant to be a final detection, subject to “false positives”. It’s just the first part of a multi-stage process, each stage looking at a smaller universe of “threat candidates”.

Thinking about detection as a multi-stage process unlocks interesting use cases that couldn’t be implemented as an “atomic decision model”. Any interesting detection use case that would otherwise be discarded because of a high false positive rate could be a good fit for a multi-stage process.

But multi-stage detection is not effective if done manually. Score-based correlation, as done by UEBA and some SIEM tools, can help link multiple atomic detection items, but situations where you need to query external systems (such as sandboxes), external services or big reference sets are still problematic. This is where SOAR comes to the rescue! Now you can have an automated pipeline that takes those initial detection cases (or even entities that hit a certain score threshold) and puts them through whatever validation and ad hoc queries you might need to turn them into “confirmed detections”: fully contextualized alerts.
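A minimal sketch of such a multi-stage pipeline, with made-up stage functions and data, shows how each stage looks at a smaller universe of candidates:

```python
# Each stage filters the candidate set further; only the survivors of the
# cheap, broad first stage pay the cost of the expensive later stages.

def stage_siem(events):
    # Stage 1: broad SIEM correlation (e.g. TI feed vs. firewall logs).
    return [e for e in events if e["ti_hit"]]

def stage_score(candidates, threshold=70):
    # Stage 2: score-based correlation, as UEBA tools do.
    return [c for c in candidates if c["risk_score"] >= threshold]

def stage_validate(candidates):
    # Stage 3: expensive external validation (sandbox, reputation service).
    return [c for c in candidates if c["sandbox_verdict"] == "malicious"]

events = [
    {"id": 1, "ti_hit": True,  "risk_score": 90, "sandbox_verdict": "malicious"},
    {"id": 2, "ti_hit": True,  "risk_score": 40, "sandbox_verdict": "benign"},
    {"id": 3, "ti_hit": False, "risk_score": 95, "sandbox_verdict": "malicious"},
]

confirmed = stage_validate(stage_score(stage_siem(events)))
print([c["id"] for c in confirmed])  # only event 1 survives every stage
```

The point of the sketch: event 3 never reaches the expensive stages because it failed the cheap first filter, which is exactly why the first-stage alert doesn’t need to be a final, false-positive-free detection.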

Most of us would think of advanced automated response use cases, dynamically patching or removing things from the network, as the main way to get value from SOAR. Not necessarily. Making detection smarter is probably where most organizations will find the value of these tools.


The post Threat Detection Is A Multi-Stage Process appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2BHFT1K
via IFTTT

Tuesday, November 28, 2017

From my Gartner Blog - Machine Learning or AI?

We may sound pedantic when pointing out that we should be talking about Machine Learning, not AI, for security threat detection use cases. But there is a strong reason why: to deflate the hype around it. Let me quickly mention a real-world situation where the indiscriminate use of those terms caused confusion and frustration:

One of our clients was complaining about the “real Machine Learning” capabilities of a UEBA solution. According to them, “it was just rule based”. What do you mean by rule based? Well, for them, having to tell the tool that it needs to detect behavior deviations in the authentication events for each individual user, based on the location (source IP) and on the time of the event, is not really ML, but rule-based detection. I would say it’s both.

Yes, it is really a rule, as you have to define what type of anomaly (down to the data field – or ‘feature’ – level) it should be looking for. You need to know enough about the malicious activity you are looking for to specify the type of behavior anomaly it will present.

But within this “rule”, how do you define what “an anomaly” is? That’s where the Machine Learning comes in. The tool has to automatically profile each individual user’s authentication behavior, focusing on the data fields specified from the authentication events. You just can’t do that with, let’s say, a “standard SIEM rule”. There is real Machine Learning being used there.
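To illustrate the split with a deliberately toy example (real UEBA products use far more sophisticated statistical models than set membership), the “rule” names the features to watch, while the baseline itself is learned from each user’s own history:

```python
# Toy per-user baseline. The *rule* part: flag authentication events that
# deviate in source IP or hour for that user. The *learned* part: the
# per-user profile, built automatically from historical events.
from collections import defaultdict

class AuthBaseline:
    def __init__(self):
        self.seen = defaultdict(lambda: {"ips": set(), "hours": set()})

    def train(self, user, source_ip, hour):
        profile = self.seen[user]
        profile["ips"].add(source_ip)
        profile["hours"].add(hour)

    def is_anomalous(self, user, source_ip, hour):
        profile = self.seen[user]
        return source_ip not in profile["ips"] or hour not in profile["hours"]

baseline = AuthBaseline()
for h in (8, 9, 10):                        # alice normally logs in at 8-10
    baseline.train("alice", "10.0.0.5", h)  # from her usual workstation

print(baseline.is_anomalous("alice", "10.0.0.5", 9))      # normal behavior
print(baseline.is_anomalous("alice", "198.51.100.9", 3))  # new IP, odd hour
```

Notice that no analyst ever wrote “alice logs in from 10.0.0.5 between 8 and 10”; that part was learned. The analyst only chose *which* features matter, which is exactly the “rule” the client was complaining about.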

But what about AI – Artificial Intelligence? ML is a small subset of the field of knowledge known as AI. But the problem is that AI covers much more than just ML. And that’s what that client was expecting when they complained about the “rules”. We still need people to figure out those rules and write the ML models to implement them. There’s no machine capable of doing that – yet.

There have been some attempts based on “deep learning” (another piece of the AI domain), but nothing concrete exists. You can always point ML systems at all the data collected from your environment so they can point out anomalies, but you’ll soon find out there are far more anomalies unrelated to security incidents than some pixie-dust vendors would lead you to believe. Broad network-based anomaly detection has been around for years, but it hasn’t been able to deliver efficient threat detection without a lot of human work to figure out which anomalies are worth investigating.

Some UEBA vendors have decent ML capabilities, but they are not good at defining good rules/models/use cases to apply them. So, you may end up with good ML technology but mediocre threat detection capabilities if you don’t have good people writing the detection content. For those going down the “build your own” path, this is even more challenging, as you need the magical combination of people who understand threats and what type of anomalies they would create, and people who understand ML well enough to write the content to find them.

Isn’t that just like SIEM? Indeed, it is. People bought SIEM in the past expecting to avoid the IDS signature development problem. Now they are repeating the same mistake buying UEBA to avoid the SIEM rules development problem. Do you think it’s going to work this time?


The post Machine Learning or AI? appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2BlLxpn
via IFTTT

Sunday, October 15, 2017

From my Gartner Blog - Our SIEM Assessment paper update is out!

The results of our “summer of SIEM” are starting to come out; our assessment document on SIEM (basically, a “what” and “why” paper that sits beside our big “how” doc on the same topic) has been updated. It has some quite cool new stuff aligned with some of our most recent research on security analytics, UEBA, SOC and other things that often touch on or are directly related to SIEM.

Some cool bits from the doc:

“Organizations considering SIEM should realize that using an SIEM tool is not about procuring an appliance or software, but about tying an SIEM product to an organization’s security operations. Such an operation may be a distinct SOC or simply a team (for smaller organizations, a team of one) involved with using the tool. Purchasing the tool will also be affected by the structure and size of an organization security operation: While some SIEM tools excel in a full enterprise SOC, others enable a smaller team to do security monitoring better.”

“While some question SIEM threat detection value, Gartner views SIEM as the best compromise technology for a broad set of threat detection use cases. Definitely, EDR works better for detecting threats on the endpoints, while NTA promises superior detection performance on network traffic metadata. However, network- and endpoint-heavy approaches (compared to logs) suffer from major weaknesses and are inadequate unless you also do log monitoring. For example, many organizations dislike endpoint agents (hence making EDR unpalatable), and growing use of Secure Sockets Layer and other network encryption generally ruins Layer 7 traffic analysis.”

“UEBA vendors have been frequently mentioned as interesting alternatives due to their different license models. While most SIEM vendors base their price on data volumes (such as by events per second or gigabytes of data indexed), these solutions focus on the number of users being monitored irrespective of the amount of data processed. This model has been seen as a more attractive model for organizations trying to expand their data collection without necessarily changing the number of users currently being monitored. (Note that UEBA vendors offer user-based pricing even for tools addressing traditional SIEM use cases.) UEBA products have also been offered as solutions with lower content development and tuning requirements due to their promised use of analytics instead of expert-written rules. This makes them attractive to organizations looking for an SIEM tool but concerned with the resource requirements associated with its operation. The delivery of that promise will, however, strongly depend on the use cases to be deployed.”

As usual, please don’t forget to provide us feedback about the papers!


Next wave of research: SOAR, MSS and Security Monitoring use cases! Here we go :-)


The post Our SIEM Assessment paper update is out! appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2ylZAL6
via IFTTT

From my Gartner Blog - Speaking at the Gartner Security Summit Dubai

I have a few sessions at the Gartner Security and Risk Management Summit in Dubai, October 16th and 17th. This is the wrap-up of the Security Summit season for me; I’ll be presenting some content that I already presented in DC and in São Paulo earlier this year. I also have a session on SOC that was originally presented by Anton at the other events. It’s my first time in Dubai and I’m excited to see the different perspectives the audience there may have on the problems we cover. My sessions:

Workshop: Developing, Implementing and Optimizing Security Monitoring Use Cases
Mon, 16 Oct 2017 11:00 – 12:30
An extra reason to be excited about the use cases workshop: we’ll be updating our paper from 2016 on that topic! I’m expecting to get the attendees’ impressions of our framework and potential points to improve or expand.

Endpoint Detection and Response (EDR) Tool Architecture and Operations Practices
Mon, 16 Oct 2017 14:30 – 15:15

Industry Networking: FSI Sector: Responding to Changes in the Threat Landscape and the Risk Environment
Mon, 16 Oct 2017 16:30 – 17:30

How to Build and Operate a Modern SOC
Tue, 17 Oct 2017 10:30 – 11:15

Magic Quadrant: Security Information and Event Management
Tue, 17 Oct 2017 12:40 – 13:00

The post Speaking at the Gartner Security Summit Dubai appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2yhUjqh
via IFTTT

Wednesday, September 13, 2017

From my Gartner Blog - SOAR research is coming!

As Anton anticipated in this post, we’ll be writing about SOAR – Security Orchestration, Automation and Response – tools. Of course many people, seeing this coming from Gartner, will think: “oh great, here are those guys creating new fancy acronyms for silly markets with a bunch of VC-powered startups”. Yes, I agree that’s usually the feeling. But let’s consider a few FACTS:

  • Some of these new vendors have already been acquired by big players such as FireEye (Invotas), Microsoft (Hexadite) and Rapid7 (Komand). So, it seems that what they are offering is interesting enough to be integrated into other security technologies out there.
  • We often complain about the lack of skilled manpower in security. It is a very common issue to put together SOC teams. And whenever lack of manpower becomes an issue, AUTOMATION is a potential solution.
  • We also like to complain about the ever growing number of security tools being used by organizations. How can you properly integrate them so you can actually get the full value from them? You have tools to detect threats on the network, but you need to investigate those alerts on the affected endpoints using your EDR tool; with so many moving parts in place, some ORCHESTRATION is definitely required.
  • Finally, we also keep saying organizations are not reacting fast enough to incidents. Again, one of the most common ways to do things faster is streamlining processes (WORKFLOW) and leveraging AUTOMATION.

So, the need for the capabilities is there. We may argue that they should be embedded in current tools, or that they are not complex enough to require a new product, just a bunch of Python or PowerShell scripts. On the first point, yes, that could definitely help the integration, but if you use the automation capabilities of each tool individually you may end up with “automated spaghetti workflows”, which would become a nightmare to support, troubleshoot and maintain. A hub-and-spoke approach can help keep the complexity manageable. What is that hub? SOAR! Can it be done purely with scripts? Well, I bet you can replicate a lot of these products’ capabilities with some clever scripting, but how many organizations have people to do that, and want to have more code to support, troubleshoot and maintain?
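A minimal sketch of the hub-and-spoke idea: each tool gets one connector to the hub, and playbooks compose connectors, instead of every tool scripting directly against every other tool. All connector names and playbook steps below are illustrative, not any vendor’s API:

```python
# Hub-and-spoke orchestration sketch: N tools need N connectors to the hub,
# instead of up to N*(N-1) point-to-point script integrations.

class SoarHub:
    def __init__(self):
        self.connectors = {}

    def register(self, name, action):
        """Plug a tool into the hub as a named connector (spoke)."""
        self.connectors[name] = action

    def run_playbook(self, steps, context):
        """Each step is (connector_name, input_key, output_key)."""
        for name, in_key, out_key in steps:
            context[out_key] = self.connectors[name](context[in_key])
        return context

hub = SoarHub()
# Illustrative stubs standing in for real EDR / sandbox integrations.
hub.register("edr_enrich", lambda ip: {"sha256": "ab12cd", "host": ip})
hub.register("sandbox", lambda ind: "malicious" if ind["sha256"] else "benign")

ctx = hub.run_playbook(
    [("edr_enrich", "dest_ip", "indicators"),
     ("sandbox", "indicators", "verdict")],
    {"dest_ip": "203.0.113.7"},
)
print(ctx["verdict"])
```

The design point: swapping the EDR vendor means rewriting one connector, not every playbook that touches the endpoint, which is precisely what the spaghetti of ad hoc scripts fails to give you.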

There are other interesting things related to SOAR that we want to explore: is this the new “single pane of glass” for the SOC? Does it make sense to leverage Machine Learning for these use cases? Are organizations looking for the glue only, or for content (playbooks)? These are some of the things we have in mind for this upcoming and exciting research project.

So, if you are a SOAR vendor, don’t forget to schedule a Vendor Briefing with us! You can find more details here.

The post SOAR research is coming! appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2wqEF6V
via IFTTT

Wednesday, August 2, 2017

From my Gartner Blog - Our new Vulnerability Assessment Tools Comparison is out!

Vulnerability assessment is usually seen as a boring topic, and most people think the scanners are all equal – having reached “commodity” status. Well, for basic scanning capabilities, that’s certainly true. But vulnerability scanners need to stay current with the evolution of IT environments; think of all the changes in corporate networks in the past 20 years due to virtualization, mobility, cloud, containers and others. Those things certainly affect vulnerability management programs and how we scan for vulnerabilities. These IT changes force scanners to adapt, and we end up seeing some interesting differences at the fringes. Our new document, “A Comparison of Vulnerability and Security Configuration Assessment Solutions”, compares the five leaders of this space (BeyondTrust, Qualys, Rapid7, Tenable and Tripwire) and shows how and where they differ.

Some of the capabilities where we found interesting differences are:

  • Agent-based scanning
  • Integration with virtualization platforms
  • Integration with IaaS cloud providers
  • Mobile device vulnerability assessment capabilities
  • VA on containers
  • Delivery models (on-premises, SaaS)


As we’ve been doing, please consider providing feedback on the paper; this helps us improve our research :-)

The post Our new Vulnerability Assessment Tools Comparison is out! appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2f8bFho
via IFTTT

Thursday, July 27, 2017

From my Gartner Blog - SIEM, Detection & Response: Build or Buy?

As Anton has already blogged (many times) and tweeted about, we are working to refresh some of our SIEM research and also on a new document about SaaS SIEM. This specific one has triggered some interesting conversations about who buys services and who buys products, and how that decision is usually made.

There are usually some shortcuts to find out whether the organization should look, for example, for an MDR service or for a SIEM (and the related processes and team to manage/use it). They are usually related to the organization’s preference for relying on external parties or doing things internally, the availability of resources to manage and operate technology, or some weird accounting strategy that moves the needle toward capital investments or operational expenses. But what if there’s no shortcut? If there’s really no preference for either path, how should an organization decide whether to rely on services for threat detection and response, or to build those capabilities internally? Making things more complicated, what if the answer is a bit of each: how do you define the right mix?

Initially I can see a few factors as key points for that decision:

  • Cost – What option would be cheaper?
  • Flexibility – Which option would give me more freedom to change direction and put fewer restrictions on how things could/should be done?
  • Control – Which option gives me more control over the outcome and results?
  • Effectiveness – Which option will provide me, for lack of a better word, “better” threat detection / response capabilities?
  • Time to value – Which option can be implemented and provide value faster?

(Yes, there are other factors, including the security of your own data, but many times those factors end up in the “shortcuts” category above. Stuff like “we don’t put our stuff in the cloud”; makes the decision really easy, but that’s not the point here.)

Some of these factors have clear winners: time to value is almost always better with services, while doing everything yourself will obviously give you more control than any type of service.

Flexibility is more contentious. Services will be less flexible, as no service provider (apart from pure staff augmentation) will give you the option to define how every piece of the puzzle should work. However, building things and hiring people will often freeze your resources more than just paying a monthly services bill. If you build everything in a certain way and then decide to change everything, you’ll probably have to pay for some things twice. Moving from one service provider to another can be easier when contracts are written with flexibility in mind.

And what about the last point: which model will provide the best results? If you are a Fortune 100 company, you’ll probably be in a position, in terms of resources, context and requirements, to build something better than any service provider could do for you. But if you’re not in that category, the best service providers will probably be able to give you better capabilities than you would be able to build AND maintain; just think about the challenge of keeping a very good and motivated team for more than a few months!

A simple framework for deciding between outsourcing or building in house could just look at those 5 factors, but you didn’t think the problem was that easy, right? Because the decision IS NOT BINARY! Today you can fully outsource your security operations, outsource some processes or even keep processes and people and rely on tools provided in a SaaS model. The number of questions to ask yourself and factors to consider grows exponentially.
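Just to illustrate what such a five-factor framework could look like in its most naive form, here is a toy weighted-score sketch. The weights and 1-to-5 ratings are made-up placeholders; every organization would set its own, and a real decision would also have to handle the non-binary mixes discussed above:

```python
# Toy weighted scoring over the five factors listed earlier. Weights sum
# to 1.0; ratings are 1 (worst) to 5 (best) for each option. All numbers
# here are illustrative placeholders, not recommendations.

FACTORS = {"cost": 0.25, "flexibility": 0.15, "control": 0.15,
           "effectiveness": 0.30, "time_to_value": 0.15}

def score(ratings):
    """Weighted sum of an option's factor ratings."""
    return sum(FACTORS[f] * ratings[f] for f in FACTORS)

build = {"cost": 2, "flexibility": 4, "control": 5,
         "effectiveness": 3, "time_to_value": 2}
buy   = {"cost": 3, "flexibility": 3, "control": 2,
         "effectiveness": 4, "time_to_value": 5}

print(round(score(build), 2), round(score(buy), 2))
```

Even this toy version makes the real difficulty visible: the outcome is entirely driven by the weights, and agreeing on those weights is where the organizational preferences (the “shortcuts” mentioned earlier) sneak back in.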

For now, we are just looking at a very specific outsourcing point: the SIEM as a tool. We hope to build some type of decision framework as one of the outcomes of our current research, but I’d like to revisit the broader problem in the future. And you, how did you decide between building or buying your detection and response capabilities?

The post SIEM, Detection & Response: Build or Buy? appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2w4FzpU
via IFTTT