Monday, November 28, 2016

From my Gartner Blog - Comparing UEBA Solutions

As Anton anticipated, we’ve started working on our next research cycle, now with the intent of producing a comparison of UEBA (User and Entity Behavior Analytics) solutions. We produced a paper comparing EDR solutions a few months ago, but so far the discussion on how to compare UEBA solutions has been far more complex (and interesting!).

First, while for EDR we focused on comparing how the tools fare against five key use cases, for UEBA the use cases are basically all the same: detecting threats. The difference is not only in which threats should be detected, but also in how to detect the same threats. Many of these tools have some focus on internal threats (if you count “pseudo-internal” too, ALL of them focus on internal threats), and there are many ways you could detect those. A common example across these tools: detecting an abnormal pattern of resource access by a user. That could indicate that the user is accessing data he/she is not supposed to access, or even that credentials were compromised and are being used by an attacker to access data.

But things are even more complicated.

Did you notice that “abnormal pattern of resource access” there?

What does it mean? That’s where tools can do things in very different ways, arriving at the same (or at vastly different) results. You can build a dynamic profile of the things the user usually accesses and alert when something outside that list is touched. You can also do that while considering additional variables for context, such as time, source (e.g., desktop or mobile), application and others. And why should we stop at profiling only the individual user? Should access still be considered anomalous if the user’s peers usually access that resource? OK, but who are the user’s peers? How do you build a peer list? Point to an OU in Active Directory? Or learn it dynamically by grouping together people with similar behaviors?
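To make this concrete, here’s a minimal sketch (in Python) of what the profiling and peer-grouping ideas could look like. Everything in it (the event fields, the similarity cutoff, the alerting logic) is invented for illustration, not how any particular vendor implements it:

```python
from collections import defaultdict

# Toy access events: (user, resource). A real tool would also keep
# context such as time, source device and application.
EVENTS = [
    ("alice", "hr_share"), ("alice", "payroll_db"),
    ("bob", "hr_share"), ("bob", "payroll_db"),
    ("carol", "eng_wiki"), ("carol", "build_server"),
]

def build_profiles(events):
    """Dynamic profile: the set of resources each user normally touches."""
    profiles = defaultdict(set)
    for user, resource in events:
        profiles[user].add(resource)
    return profiles

def peers_of(user, profiles, min_similarity=0.5):
    """Learn peers dynamically: users whose access sets overlap enough
    (Jaccard similarity), instead of pointing at an OU in AD."""
    mine = profiles[user]
    peers = set()
    for other, theirs in profiles.items():
        if other == user or not (mine | theirs):
            continue
        if len(mine & theirs) / len(mine | theirs) >= min_similarity:
            peers.add(other)
    return peers

def is_anomalous(user, resource, profiles):
    """Flag access outside the user's own profile, unless the user's
    learned peers routinely touch that resource."""
    if resource in profiles[user]:
        return False
    return not any(resource in profiles[p] for p in peers_of(user, profiles))

profiles = build_profiles(EVENTS)
print(is_anomalous("alice", "eng_wiki", profiles))   # True: outside alice's profile and her peer bob's
print(is_anomalous("alice", "payroll_db", profiles)) # False: part of her own profile
```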

(while dreaming about how we can achieve our goal with this cool “Machine Learning” stuff, let’s not forget that you could do some of this with SIEM rules alone…)
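For contrast, the SIEM-rules-only version of the same idea is just a static condition that someone has to write and maintain by hand. Again a sketch, with made-up field names, resources and thresholds:

```python
SENSITIVE_RESOURCES = {"payroll_db", "hr_share"}  # maintained by hand, not learned
BUSINESS_HOURS = range(8, 19)                     # 08:00 to 18:59

def siem_rule(event):
    """Static rule: alert on sensitive-resource access outside business
    hours. No profiling, no peer groups, just a fixed condition."""
    return (event["resource"] in SENSITIVE_RESOURCES
            and event["hour"] not in BUSINESS_HOURS)

print(siem_rule({"user": "alice", "resource": "payroll_db", "hour": 23}))  # True
```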

So, we can see how differently a single use case can be implemented by the different solutions. How do we define what is “better”? This is pretty hard, especially because there’s nothing like AV-TEST available to test these different methods (models, algorithms, rules… the taxonomy alone is crazy enough).

So what can we do about it? We need to talk to users of all these solutions and gather data from the field about how they perform in real environments. That’s OK. But after that, we need to figure out, for both the good and the bad feedback, how those results map to each solution’s feature set. If clients of solution X are happy about how great it is at detecting meaningful anomalies (by the way, that’s another thing we’ll discuss in a future blog post: which anomalies are just that, and which ones are meaningful from a threat detection perspective), we need to figure out what in X makes it good for that use case, so we can find which features and capabilities matter (and which are just noise and unnecessary fluff). Do I need to say we’ll be extremely busy over the next couple of months?

Of course, we could also use some help here: if you’ve been through a bake-off or a comparison between UEBA tools, let us know how you did it; we’d love to hear about it!


Friday, November 18, 2016

From my Gartner Blog - Deception Technologies – The Paper

After some very fun research, we’re finally publishing our paper on deception technologies:

Applying Deception Technologies and Techniques to Improve Threat Detection and Response
18 November 2016 | ID: G00314562
Augusto Barros | Anton Chuvakin

Summary: Deception is a viable option to improve threat detection and response capabilities. Technical professionals focused on security should evaluate deception as a “low-friction” method to detect lateral threat movement, and as an alternative or a complement to other detection technologies.

It was a very fun paper to write. We’ve been using and talking about honeypots and other deception techniques and technologies for ages, but it seems it’s finally time to use them in enterprise environments as part of a comprehensive security architecture and strategy. Here are some fun bits from the paper:

  • Many organizations report low-friction deployment, management and operation as the primary advantages of deception tools over other threat detection tools (such as SIEM, UEBA and NTA).
  • Improved detection capabilities are the main motivation of those who adopt deception technologies. Most have no motivation to actively engage with attackers, and cut access or interaction as soon as detection happens.
  • Test the effectiveness of deception tools by running a POC or a pilot in a production environment. Utilize threat simulation tools, or perform a quality penetration test without informing the testers about the deceptions in place.

[Figure: Overview of deception technologies – Gartner (2016)]

The corporate world has invested in many different technologies for threat detection. Yet, it is still hard to find organizations actively using deception techniques and technologies as part of their detection and response strategies, or for risk reduction outcomes.

However, with recent advances in technologies such as virtualization and software-defined networking (SDN), it has become easier to deploy, manage and monitor “honeypots,” the basic components of network-based deception, making deception techniques viable alternatives for regular organizations. At the same time, the limitations of existing security technologies have become more obvious, requiring a rebalance of focus from preventative approaches to detection and response.

[…]

Although a direct, fact-based comparison between the effectiveness of deception techniques and the effectiveness of other detection approaches does not exist, enough success reports do exist to justify including deception as part of a threat detection strategy.
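A side note on why honeypots make for such low-friction detection: a decoy has no legitimate users, so anything that touches it is worth an alert. Here’s a minimal sketch of a single-port network decoy in Python; the port, banner and alert destination are placeholder choices, and a real deception platform does far more than this:

```python
import socket
from datetime import datetime, timezone

LISTEN_PORT = 2222                   # placeholder: pose as an SSH service
BANNER = b"SSH-2.0-OpenSSH_7.2\r\n"  # fake banner to look plausible

def run_honeypot():
    """Accept connections and treat every one as a high-quality alert,
    since no legitimate user should ever talk to a decoy."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen()
        while True:
            conn, (ip, port) = srv.accept()
            with conn:
                conn.sendall(BANNER)
                # In practice this would be shipped to a SIEM; print stands in.
                print(f"{datetime.now(timezone.utc).isoformat()} "
                      f"ALERT: honeypot touched from {ip}:{port}")

if __name__ == "__main__":
    run_honeypot()
```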


Monday, October 17, 2016

From my Gartner Blog - So You Want To Build A SOC?

Now you can! But should you do it?

As anticipated here and here, our new paper about how to plan, design, operate and evolve a Security Operations Center is out!

This is a big doc with guidance for organizations that intend to build their own SOC (or for those that have one and want to make it better :-)). One of the things we gave special attention to was the first question to be answered: do you need a SOC? It’s not as simple as it sounds, as the commitment of resources and the prerequisites, which the paper describes in detail, are quite significant. There are alternatives (namely service providers) out there that should really be considered before embarking on that journey.

Also, even if you are certain you want (and need) to do it, you most certainly won’t do it alone. One of our main findings in this paper is that most SOCs are in fact hybrid SOCs, with service providers filling competency gaps and providing resources that are usually not cost-effective to keep in-house unless you are a very particular (and rare) type of organization.

Here are a few interesting pieces from the paper:

“Although most existing security operations centers (SOCs) are modeled as alert pipelines, a good SOC includes threat intelligence (TI) consumption and generation practices tied closely to incident response (IR) and hunting activities.”

“Modern SOCs should move beyond SIEM and include additional technologies (such as NFT, EDR, TIP, UEBA, and SIRP) to improve visibility, threat detection and IR capabilities.”

“Any organization establishing a SOC should have a plan for staff retention from the outset. Security skills are rare, and attrition from the intense operational work that is natural for a SOC makes hiring and retention key issues for keeping a SOC functional.”

“There is no such thing as a list of “tools a SOC must have.” Many SOCs make do with serious tool limitations by compensating for the deficiencies with process, additional people, alternative technologies (think SharePoint instead of SOAR tools) or scripts. However, the chances of success of a SOC greatly improve when tools providing visibility, analysis, and action and management are present. Most SOCs (at a basic maturity level) operate with, at minimum, a SIEM for analysis and VA tools for visibility. As the maturity of the SOC increases, the need for additional tools becomes stronger. A basic SOC, for example, can simply detect some malicious activity on the SIEM and send an email to the CSIRT or even to the help desk for action. That might be enough for organizations that just remove infected computers from the network and reimage them. But if the intent is to learn about the real extent of an incident (and whether other computers and assets have been compromised) and extract data to be used to improve preventive and detective controls, additional visibility (e.g., EDR and NFT) and management (e.g., workflow and case management) tools will be necessary.”

The paper is available for Gartner GTP clients. However, I’d like to point out that Anton recently did a webinar based on this same research, which is available for free on Gartner’s website. Have fun watching it and don’t forget to provide us feedback 😉


Friday, September 30, 2016

From my Gartner Blog - Deception as a Feature

One of the things we are also covering as part of our research on deception technologies is the inclusion of deception techniques as features in other security products. There are many solutions that could benefit from honeypots and honeytokens to increase their effectiveness: SIEM, UEBA, EDR, WAF, and others. We’ve been tracking a few cases where vendors added those features to their products and you can expect to see a few examples in our upcoming research.
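To illustrate how small the “deception as a feature” increment can be, here’s a sketch of a honeytoken check that a SIEM-like product could embed. The token format and the sample log line are invented for the example:

```python
import secrets

def make_honeytoken():
    """Generate a fake credential to plant in a config file, wiki page or
    database row. It grants nothing; its only job is to be stolen."""
    return f"AKIA{secrets.token_hex(8).upper()}"  # shaped like a cloud API key, but fake

HONEYTOKENS = {make_honeytoken()}

def check_log_line(line):
    """SIEM-side check: a honeytoken showing up in real traffic means
    someone found the bait and tried to use it."""
    return any(token in line for token in HONEYTOKENS)

token = next(iter(HONEYTOKENS))
print(check_log_line(f"auth attempt with key {token} from 203.0.113.9"))  # True
```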

Now, let’s explore this a bit further. The “pure deception” technologies market is still very incipient and not large in terms of revenue. The average ticket for this new pack of vendors is still small compared to the cost of other security technologies, which makes me wonder if it is a viable market for more than a couple of niche players. I don’t doubt there is a market, but it might not become big enough to accommodate all the vendors that are popping up every week.

Lawrence Pingree recently said that “deception is a new strategy that security programs can use for both detection and response,” and I certainly agree with him. My question then is: if deception keeps growing as an important component of security programs, will we see organizations adopting it via additional features of broader-scope security solutions, or will they necessarily have to buy (or build) dedicated platforms for it?

In the future, will we see organizations buying “deception products” or adding deception questions to their security product RFPs?


Tuesday, September 27, 2016

From my Gartner Blog - Building a Business Case for Deception

So we’ve been working on our deception technologies research (have we mentioned we want to hear YOUR story about how YOU are using those?), and one of the things we are trying to understand is how organizations are building business cases for deception tools. As Anton said, most of the time deception will be seen as a “nice to have,” not a “must have.” With so many organizations struggling to get money for the must-haves, how would they get money for a nice-to-have?

Anton mentioned two main lines to justify the investment:

  1. Better threat detection
  2. Better (higher quality) alerts

In general, most arguments will support one of the two points above. However, I think we can add some more:

– More “business-aligned” detection: with all these vendors building things such as SCADA and SWIFT decoys, it looks like one of the key ideas used to justify deception tools is the ability to align them closely with attacker motivations. However, in the end, isn’t that just one way of supporting #1 above?

– Cheap (OK, “less expensive”) detection: most of the products out there are not as expensive as other detection technologies, and they are certainly cheaper when you consider the total cost of ownership (TCO). They usually cost less from a pure product price point of view and also require less gear and staff to operate. This is, IMO, #3 on the list above, but it could also be seen as an expansion of #2 (higher-quality alerts -> fewer resources used for response -> less expensive).

– Less friction or reduced risk of issues: Some security technologies can be problematic to implement, but it’s hard to break anything with deception tools; organizations that are too sensitive about messing with production environments might see deception as a good way to avoid unnecessary risks of disruption. I can see this as an interesting argument for IoT/OT (sensitive healthcare systems, for example). Do we have a #4?

– Acting as an alternative control: This is very similar to the point above. Some organizations will have issues where detection tools relying on sniffing networks, receiving logs or installing agents just cannot be implemented. Think of situations where no SPAN ports or taps are available (or desirable), legacy systems don’t generate events, or performance bottlenecks prevent the generation of log events or the installation of agents. When you have all those challenges and still want to improve detection, what do you do? Deception can be the alternative to doing nothing. This looks like a strong #5 to me.

– Diversity of approaches: This is a bit weak, but it makes some sense. You might have many detection systems at the network and endpoint level, but you’re still looking for malicious activity among all the noise of normal operations. Doesn’t it just make sense to have something that approaches the problem differently? I know it’s quite a weak argument, but surprisingly, I believe many attempts to deploy deception tools start from this idea. At least for me it’s worth a place on the list.

With all these, we have a total of six points that could be used to justify an investment in deception technologies. What else do you see as a compelling argument? Also, how would you compare these tools to other security technologies if you only have the resources or budget to deploy one of them? When does deception win?

Again, let us hear your stories!


Tuesday, September 13, 2016

From my Gartner Blog - New Research: Deception Technologies!

With the work on our upcoming SOC paper and on the TI paper refresh winding down, we are preparing to start some exciting research in our new project: Deception Technologies!

We’ve been blogging about this for some time, but the time to do some structured research on the topic has finally come. There are many vendors offering interesting technology based on deception techniques, and we can see increased interest from our clients on the topic. Our intent is to write an assessment of the technologies and how they are being applied by organizations.

An interesting question to ponder is when an organization should adopt deception techniques. I briefly touched on this in my last post on the topic, but I need to expand on it as part of this research. For instance, when should an organization start deploying deception techniques? How do you decide, for example, when to invest in a distributed deception platform (DDP) instead of another security technology? Also, when does it make sense to divert resources and effort to deception from other initiatives? It’s clear that an organization shouldn’t, for example, start deploying a DDP before doing a decent job on vulnerability management; but when you consider more recent technologies, or things deployed by more mature organizations, such as UBA: does it make sense to do deception before that? How should we answer that question? Those are some of the questions we’ll try to answer with this research.

Of course, the vendors have been very responsive and willing to brief us on their products, but it’s also important for us to see things from the end-user perspective. So, if you are using deception technologies, let us know!


Monday, August 8, 2016

From my Gartner Blog - Arriving at a Modern SOC Model

While writing our new (and exciting) research on “how to build a SOC,” we came to the conclusion that a modern SOC has some interesting differences from the old vanilla SOC that most organizations have in place. In essence, the difference is the inclusion of threat intelligence and hunting/continuous IR activities. A traditional SOC operates more or less like this:

[Figure: traditional SOC model – an alert pipeline]

While the “newer” model is something like:

[Figure: modern SOC model – adds threat intelligence and hunting/continuous IR]

So far, this is not surprising or particularly exciting; that’s just plain evolution. It becomes more interesting when you start to work on guidance for organizations that are planning to build their (new) SOC right now. Should they plan to build it as a modern SOC from day one, or should they build it as a traditional SOC and then move to the modern model as it matures?

So far we haven’t seen substantial evidence to back either of those options. I can see how “building it the right way” would make sense: you don’t want to waste resources planning and writing processes twice, and there is no point in building a less effective model when you know there is a better way to do things. But the modern model also requires more resources (people and tools), and some of its newer processes are usually found in organizations with mature security operations. Can they be performed by those that are not as mature? Do those processes actually work in immature organizations? This is a “do it right the first time” versus a “walk, then run” discussion.

Do you happen to have experience with a mature, modern SOC? If so, how did you get there? Was it built like that, or did it evolve from the traditional model? It would be even more interesting to hear from people with FAIL stories from either of those approaches. Don’t be shy, let us hear your stories :-)
