Monday, October 17, 2016

From my Gartner Blog - So You Want To Build A SOC?

Now you can! But should you do it?

As anticipated here and here, our new paper about how to plan, design, operate and evolve a Security Operations Center is out!

This is a big document with guidance for organizations that intend to build a SOC (or that already have one and want to make it better :-)). One of the things we gave special attention to was the first question to be answered: do you need a SOC? It’s not as simple as it sounds, since the commitment of resources and the prerequisites, as the paper describes in detail, are substantial. There are alternatives out there (namely service providers) that should really be considered before embarking on that journey.

Also, even if you are certain you want (and need) to do it, you most certainly won’t do it alone. One of our main findings in this paper is that most SOCs are in fact hybrid SOCs, with service providers filling competency gaps and providing resources that are usually not cost effective to have in house unless you are a very particular (and rare) type of organization.

Here are a few interesting pieces from the paper:

“Although most existing security operations centers (SOCs) are modeled as alert pipelines, a good SOC includes threat intelligence (TI) consumption and generation practices tied closely to incident response (IR) and hunting activities.”

“Modern SOCs should move beyond SIEM and include additional technologies (such as NFT, EDR, TIP, UEBA, and SIRP) to improve visibility, threat detection and IR capabilities.”

“Any organization establishing a SOC should have a plan for staff retention from the outset. Security skills are rare, and attrition from the intense operational work that is natural for a SOC makes hiring and retention key issues for keeping a SOC functional.”

“There is no such thing as a list of “tools a SOC must have.” Many SOCs make do with serious tool limitations by compensating the deficiencies with process, additional people, alternative technologies (think SharePoint instead of SOAR tools) or scripts. However, the chances of success of a SOC greatly improve when tools providing visibility, analysis, and action and management are present. Most SOCs (at a basic maturity level) operate with, at minimum, a SIEM for analysis and VA tools for visibility. As the maturity of the SOC increases, the need for additional tools becomes stronger. A basic SOC, for example, can simply detect some malicious activity on the SIEM and send an email to the CSIRT or even to the help desk for action. That might be enough for organizations that just remove infected computers from the network and reimage them. But if the intent is to learn about the real extent of an incident (and whether other computers and assets have been compromised) and extract data to be used to improve preventive and detective controls, additional visibility (e.g., EDR and NFT) and management (e.g., workflow and case management) tools will be necessary.”

The paper is available for Gartner GTP clients. However, I’d like to point out that Anton recently did a webinar based on this same research, which is available for free on Gartner’s website. Have fun watching it and don’t forget to provide us feedback 😉

The post So You Want To Build A SOC? appeared first on Augusto Barros.

from Augusto Barros

Friday, September 30, 2016

From my Gartner Blog - Deception as a Feature

One of the things we are also covering as part of our research on deception technologies is the inclusion of deception techniques as features in other security products. There are many solutions that could benefit from honeypots and honeytokens to increase their effectiveness: SIEM, UEBA, EDR, WAF, and others. We’ve been tracking a few cases where vendors added those features to their products and you can expect to see a few examples in our upcoming research.

Now, let’s explore this a bit further. The “pure deception” technologies market is still nascent and not large in terms of revenue. The average ticket for this new pack of vendors is still small compared with the cost of other security technologies, which makes me wonder whether it is a viable market for more than a couple of niche players. I don’t doubt there is a market, but it might not become big enough to accommodate all the vendors that are popping up every week.

Lawrence Pingree recently said, “deception is a new strategy that security programs can use for both detection and response”, and I certainly agree with him. My question then is: as deception keeps growing as an important component of security programs, will we see organizations adopting it via additional features of broader-scope security solutions, or will they necessarily have to buy (or build) dedicated platforms for it?

In the future, will we see organizations buying “deception products” or adding deception questions to their security products RFPs?

The post Deception as a Feature appeared first on Augusto Barros.

from Augusto Barros

Tuesday, September 27, 2016

From my Gartner Blog - Building a Business Case for Deception

So we’ve been working on our deception technologies research (have we mentioned we want to hear YOUR story about how YOU are using those?), and one of the things we are trying to understand is how organizations are building business cases for deception tools. As Anton said, most of the time deception will be seen as a “nice to have”, not a “must have”. With so many organizations struggling to get money for the must haves, how would they get money for a nice to have?

Anton mentioned two main lines to justify the investment:

  1. Better threat detection
  2. Better (higher quality) alerts

In general, most arguments will support one of the two points above. However, I think we can add some more:

– More “business aligned” detection: with all these vendors building things such as SCADA and SWIFT decoys, it looks like one of the key ideas used to justify deception tools is the ability to align them closely with attacker motivations. In the end, though, isn’t that just one way of supporting #1 above?

– Cheap (OK, “less expensive”) detection: most of the products out there are not as expensive as other detection technologies, and they are certainly cheaper when you consider the total cost of ownership (TCO). They usually cost less from a pure product price point of view and also require less gear and staff to operate. This is, IMO, #3 on the list above, but it could also be seen as an expansion of #2 (higher quality alerts -> fewer resources used for response -> less expensive).

– Less friction, or reduced risk of issues: some security technologies can be problematic to implement, but it’s hard to break anything with deception tools; organizations that are very sensitive about messing with production environments might see deception as a good way to avoid unnecessary risk of disruption. I can see this as an interesting argument for IoT/OT (sensitive healthcare systems, for example). Do we have a #4?

– Acting as an alternative control: this is very similar to the point above. Some organizations will face situations where detection tools relying on sniffing networks, receiving logs or installing agents simply cannot be implemented. Think of situations like no SPAN ports or taps available (or desirable), legacy systems that don’t generate events, performance bottlenecks preventing the generation of log events or the installation of agents, etc. When you have all those challenges and still want to improve detection, what do you do? Deception can be the alternative to doing nothing. This looks like a strong #5 to me.

– Diversity of approaches: this is a bit weak, but it makes some sense. You might have many detection systems at the network and endpoint level, but you’re still looking for malicious activity among all the noise of normal operations. Doesn’t it make sense to have something that approaches the problem differently? I know it’s quite a weak argument, but surprisingly, I believe many attempts to deploy deception tools start from this idea. At least for me it is worth a place on the list.

With all these we have a total of 6 points that could be used to justify an investment in deception technologies. What else do you see as a compelling argument for that? Also, how would you compare these tools to other security technologies if you only have resources or budget to deploy one of them? When does deception win?

Again, let us hear your stories!

The post Building a Business Case for Deception appeared first on Augusto Barros.

from Augusto Barros

Tuesday, September 13, 2016

From my Gartner Blog - New Research: Deception Technologies!

With the work on our upcoming SOC paper and on the TI paper refresh winding down, we are preparing to start some exciting research in our new project: Deception Technologies!

We’ve been blogging about this for some time, but the time to do some structured research on the topic has finally come. There are many vendors offering interesting technology based on deception techniques, and we can see increased interest from our clients on the topic. Our intent is to write an assessment of the technologies and how they are being applied by organizations.

An interesting question to ponder is when an organization should adopt deception techniques. I briefly touched on this in my last post on the topic, but I need to expand on it as part of this research. For instance, when should an organization start deploying deception techniques? How should it decide, for example, to invest in a distributed deception platform (DDP) instead of another security technology? Also, when does it make sense to divert resources and effort to deception from other initiatives? It’s clear that an organization shouldn’t, for example, start deploying a DDP before doing a decent job on vulnerability management; but when you consider more recent technologies, or things deployed by more mature organizations, such as UBA: does it make sense to do deception before that? How should we answer that question? Those are some of the questions we’ll try to answer with this research.

Of course, the vendors have been very responsive and willing to brief us on their products, but it’s also important for us to see things from the end user perspective. So, if you are using deception technologies, let us know!

The post New Research: Deception Technologies! appeared first on Augusto Barros.

from Augusto Barros

Monday, August 8, 2016

From my Gartner Blog - Arriving at a Modern SOC Model

While writing our new (and exciting) research on “how to build a SOC”, we came to the conclusion that a modern SOC has some interesting differences from the old vanilla SOC that most organizations have in place. In essence, the difference is the inclusion of threat intelligence and hunting/continuous IR activities. A traditional SOC operates more or less like this:


While the “newer” model is something like:


So far, this is not surprising or particularly exciting. That’s just plain evolution. Now, this becomes more interesting when you start to work on guidance for organizations that are planning to build their (new) SOC right now. Should they plan to build it as a modern SOC from the start, or should they build it as a traditional SOC and then move to the modern model as it matures?

So far we haven’t seen substantial evidence to back either of those two options. I can see how “building it the right way” would make sense, as you don’t want to waste resources planning and writing processes twice, and there is no point in building a less effective model when you know there is a better way to do things. But the modern model also requires more resources (people and tools). Some of those newer processes are also usually seen in organizations with mature security operations. Can they be performed by those that are not as mature? Do those processes actually work in immature organizations? This is a “do it right the first time” versus a “walk, then run” discussion.

Do you happen to have experience with a mature modern SOC? If so, how did you arrive there? Was it built like that or did it evolve from the traditional model? It would be even more interesting to hear from people with FAIL stories from one of those two approaches. Don’t be shy, let us hear your stories :-)

The post Arriving at a Modern SOC Model appeared first on Augusto Barros.

from Augusto Barros

Friday, July 8, 2016

From my Gartner Blog - Are Security Monitoring Alerts Becoming Obsolete?

If I ask anyone working in a SOC for a high-level description of their monitoring process, the answer will most likely look like this:

“The SIEM generates an alert, the first level analyst validates it and sends it to the second level. Then…”

Most SOCs today work by putting their first level analysts – the most junior analysts, usually assigned to be the 24×7 eyes on the console – to parse the alerts generated by their security monitoring infrastructure and decide whether something needs action by the more experienced/skilled second level. There is usually some prioritization of the alerts through the assignment of severity levels, reminiscent of old syslog severity labels such as CRITICAL, WARNING, INFORMATIONAL, DEBUG.

Most SOCs will have far more alerts being generated than staff to address them all, so they usually put rules in place such as “address all HIGHs immediately, address as many of the MEDIUMs as we can, and don’t touch the LOWs”. That is certainly prioritization 101, but what happens when there are too many critical/high alerts? Should they prioritize inside that group as well? Also, if many medium or low severity alerts are being generated about the same entity (an IP, or a user), isn’t that something that should be bumped up in the prioritization queue? Many teams and tools try to address those concerns in one way or another, but I have the impression that this entire model is showing signs of decline.

If we take a careful look at the interfaces of the newest generation of security tools, we will notice that alerts are no longer the entities listed on the primary screens; what most tools do now is consolidate the many different generated alerts into a numeric score for each entity, most commonly users and endpoints. Most tools call these “risk scores” (which is awful and confusing, as they usually have little to do with actual risk). The idea is to show on the main screen the entities with the top scores – those with the most signs of security issues linked to them – so the analyst can click on one and see all the reasons, or alerts, behind the high score. This automatically addresses both the issue of prioritizing among the most critical alerts and the concern about multiple alerts on a single entity.

For a SOC using a score-based view, the triage process could be adapted in two different ways. In the first, the highest scores are addressed directly by the second level, removing the first level pre-assessment and allowing a faster response to what is more likely to be a serious issue, while the first level works on a second tier of scores. In the second, the first level does the initial parsing as before, but with the basic difference that it keeps picking entities from the top of the list and works as far into it as it can, sending the cases that require further action to the second level (which can apply the same approach to the cases forwarded by the L1).

This may look like a simple change (or, for the cynics, no change at all), but using scores can really be a good way to improve the prioritization of SOC efforts. And scores are not only useful for that: they are also a mechanism to improve the correlation of security events, which usually come from different security monitoring systems or even from SIEM correlation rules.

What we normally see as security event correlation is something like “if you see X and Y, alert”, or “if you see n times X, then Y, alert”. Recently, many correlation rules have been created trying to reflect the “attack chain”: “if you find a malware infection event, followed by a payload download and an established C&C, alert”. The issue with that is that you need very good normalization of the existing events in order to keep the number of rules at an acceptable level (you don’t want to write a rule for every combination that includes an event related to C&C detection, for example). You could also miss attacks where the observed events don’t follow the expected attack chain.
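To make that brittleness concrete, here is a toy chain rule (the event names are invented for illustration, not taken from any product): it only fires when the events appear in the expected order, so any variant of the chain needs yet another rule.

```python
def chain_rule(events):
    """Alert only if the hard-coded chain is seen in order:
    malware infection -> payload download -> C&C established."""
    expected = ["malware_detected", "payload_download", "c2_beacon"]
    it = iter(events)
    # Subsequence check: each expected step must appear, in order,
    # somewhere in the observed event stream.
    return all(any(e == step for e in it) for step in expected)

print(chain_rule(["malware_detected", "payload_download", "c2_beacon"]))  # True
print(chain_rule(["c2_beacon", "malware_detected"]))  # False: out-of-order chain is missed
```

The second call shows the problem: the same two suspicious events, observed in a different order, produce no alert at all.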

The improvement in prioritization comes from the fact that in this new model every event increments the scores of the associated entities by a certain discrete amount. Any time a new event or event type is defined within the system, the number of points to be added to the score is determined. Smarter systems could even define those points dynamically according to attributes of the event (more points for a data exfiltration event when more transferred data is detected). The beauty of the score model is that scores go up (and eventually hit a point where the entity becomes a target for further investigation) through any combination of events, with no need to envision the full attack chain in advance and describe it in a correlation rule. This is how most modern UEBA (User and Entity Behavior Analytics) tools work today: a set of interesting anomalies is defined within the system (either through pre-packaged content or by the users), and every time they are observed the scores of the affected entities are incremented.
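As a rough sketch of the idea (the event types and point values below are invented, not taken from any specific product), accumulating per-entity scores might look like this:

```python
from collections import defaultdict

# Hypothetical points per event type; real products tune these,
# sometimes dynamically based on event attributes.
EVENT_POINTS = {
    "malware_detected": 30,
    "payload_download": 20,
    "c2_beacon": 40,
    "data_exfiltration": 50,
}

def score_events(events):
    """Accumulate a score per entity. Any combination of events can
    push an entity over the triage threshold -- no need to describe
    the full attack chain in a correlation rule beforehand."""
    scores = defaultdict(int)
    for entity, event_type in events:
        scores[entity] += EVENT_POINTS.get(event_type, 10)
    # Highest-scoring entities surface first for the analyst.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

events = [
    ("host-17", "malware_detected"),
    ("host-17", "c2_beacon"),
    ("host-42", "payload_download"),
]
print(score_events(events))  # host-17 ranks first with 70 points
```

Note that "host-17" tops the list regardless of the order its events arrived in, which is exactly what the chain-style correlation rules struggle with.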

Here is a nice example of a UEBA tool interface using scores:


Score-based monitoring systems can improve even further. Feedback from the analysts could be used to dynamically adapt the score contribution of each event or event type, using something like Naive Bayes, for example. We’ve been doing that for spam filtering for ages.
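A minimal sketch of such a feedback loop, with invented event types and a simple log-likelihood weighting in the spirit of Naive Bayes spam filters (this is an illustration of the concept, not how any particular product implements it):

```python
from collections import defaultdict
from math import log

# Count how often each event type appears in cases the analysts
# confirmed as real incidents vs. cases they dismissed.
confirmed = defaultdict(int)
dismissed = defaultdict(int)

def record_feedback(event_types, was_incident):
    """Analyst closes a case; remember which events were involved."""
    for et in event_types:
        (confirmed if was_incident else dismissed)[et] += 1

def weight(event_type):
    """Log-likelihood ratio: event types that mostly show up in
    confirmed incidents earn a higher score contribution.
    Add-one smoothing avoids division by zero for unseen types."""
    good = confirmed[event_type] + 1
    bad = dismissed[event_type] + 1
    return log(good / bad)

record_feedback(["c2_beacon", "port_scan"], was_incident=True)
record_feedback(["port_scan"], was_incident=False)
record_feedback(["c2_beacon"], was_incident=True)

# c2_beacon now outweighs the noisier port_scan signal
print(weight("c2_beacon") > weight("port_scan"))  # True
```

Over time, event types that analysts keep dismissing contribute less and less to entity scores, without anyone having to retune the point values by hand.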

The score trend is already clear on the technology side; SOC managers and analysts should review their processes and training to get the full benefits of this approach. How do you see your organization adopting it? Feasible? Just a distant dream? Maybe you think it doesn’t make sense?

Of course, if your SOC is already working on a score based approach, I’d also love to hear about that experience!

The post Are Security Monitoring Alerts Becoming Obsolete? appeared first on Augusto Barros.

from Augusto Barros

Wednesday, July 6, 2016

From my Gartner Blog - What It’s Like to Use Non-MRTI Threat Intelligence

We often hear clients asking about threat intelligence related processes: how to collect, refine and utilize it (by the way, this document is being updated; let us know if you have feedback about it!). It’s very easy to explain and visualize when we are talking about machine readable TI (MRTI for short); your tools ingest the data feed and look for the IOCs in your environment. But what about the other type of threat intelligence, the “Non-MRTI” type?

Here’s a simple example. Take a look at this post from the McAfee Labs Blog. It is a nice explanation of a somewhat new exploitation technique used by malware they recently analyzed. This is a typical “TTP” (tactics, techniques and procedures) piece of TI (and by the way… did you notice it’s FREELY AVAILABLE?). It describes threat behavior. Of course, it would be more valuable if there were more information linking it to threat actors, campaigns, etc., but it is valuable nevertheless. But coming back to the point of this post: why am I talking about it?

Because you can use it to check where you are in terms of processes to leverage this kind of TI. Try to answer, for example, some of these questions:

  • Do I have people looking for and reading this type of information?
  • Do I have a process that takes this type of information and turns it into actionable advice for my security operations?

With that you can see if the basic processes are in place; you can extend this small self-assessment further with more detailed questions such as:

  • Would this technique work in my environment?
  • Am I currently prepared (in terms of tools and monitoring use cases) to detect this?
  • If not, what changes do I need to make to my environment and tools to detect it?

Some people expect some ethereal process or method when we talk about consuming TI; there’s nothing special about it. If you can answer “yes” to all, or even some, of the questions above, you’re already doing it. Of course, there are different maturity levels, types of TI and sources of information, but all of that can evolve over time. So, if you are thinking about your capabilities to consume TI, take a look at the example above. It might give you some interesting insights.



The post What It’s Like to Use Non-MRTI Threat Intelligence appeared first on Augusto Barros.

from Augusto Barros