Monday, November 28, 2016

From my Gartner Blog - Comparing UEBA Solutions

As Anton anticipated, we’ve started working on our next research cycle, now with the intent of producing a comparison of UEBA (User and Entity Behavior Analytics) solutions. We produced a paper comparing EDR solutions a few months ago, but so far the discussion on how to compare UEBA solutions has been far more complex (and interesting!).

First, while for EDR we focused on comparing how the tools fare against five key use cases, for UEBA the use cases are basically all the same: detecting threats. The difference is not only in which threats should be detected, but also in how to detect the same threats. Many of these tools have some focus on internal threats (if you consider “pseudo-internal” too, ALL of them focus on internal threats), and there are many ways you could detect those. A common example across these tools: detecting an abnormal pattern of resource access by a user. That could indicate that the user is accessing data he/she is not supposed to access, or even that credentials were compromised and are being used by an attacker to access data.

But things are even more complicated.

Have you noticed that “abnormal pattern of resource access” there?

What does it mean? That’s where tools can do things in very different ways, arriving at the same (or vastly different) results. You can build a dynamic profile of the things a user usually accesses and alert when something outside that list is touched. You can also do that considering additional variables for context, such as time, source (e.g. desktop or mobile), application and others. And why should we stop at profiling only the individual user? Would the access still be considered anomalous if the user’s peers usually access that resource? OK, but who are the user’s peers? How do you build a peer list? Point to an OU in AD? Or learn it dynamically by grouping people with similar behaviors?

(while dreaming about how we can achieve our goal with this cool “Machine Learning” stuff, let’s not forget you could do some of this with SIEM rules only…)
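To make that concrete, here is a toy sketch in Python of a per-user profile with a peer-group check. The user names, resources, and static peer lists are invented for illustration; real UEBA tools build far richer statistical models, but even this simple logic resembles what a SIEM rule plus a lookup table can achieve:

```python
from collections import defaultdict

class AccessProfiler:
    """Toy per-user resource profile with a peer-group check."""

    def __init__(self, peers):
        # peers: dict mapping user -> set of peer users (e.g., from an AD OU)
        self.peers = peers
        self.seen = defaultdict(set)  # user -> resources accessed before

    def observe(self, user, resource):
        self.seen[user].add(resource)

    def is_anomalous(self, user, resource):
        # Not anomalous if the user has touched the resource before...
        if resource in self.seen[user]:
            return False
        # ...or if one of the user's peers routinely accesses it.
        if any(resource in self.seen[p] for p in self.peers.get(user, ())):
            return False
        return True

profiler = AccessProfiler(peers={"alice": {"bob"}, "bob": {"alice"}})
profiler.observe("alice", "hr-share")
profiler.observe("bob", "finance-db")

print(profiler.is_anomalous("alice", "finance-db"))  # False: a peer uses it
print(profiler.is_anomalous("alice", "secret-repo"))  # True: new to user and peers
```

The interesting design questions from the paragraph above all live in the inputs to this sketch: how the `peers` map is built (static OU versus learned clusters) and which context attributes are folded into `observe`.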

So, we can see how one single use case can be implemented in very different ways by the different solutions. How do we define what is “better”? This is pretty hard, especially because there’s nothing like AV-TEST available to test these different methods (models, algorithms, rules… the taxonomy alone is crazy enough).

So what can we do about it? We need to talk to users of all these solutions and get data from the field about how they are performing in real environments. That’s OK. But after that we need to figure out, for both good and bad feedback, how those things map to each solution’s feature set. If clients of solution X are happy about how great it is at detecting meaningful anomalies (oh, by the way, this is another thing we’ll discuss in another blog post – which anomalies are just that, and which ones are meaningful from a threat detection perspective), we need to figure out what in X makes it good for that use case, so we can find which features and capabilities matter (and which are just noise and unnecessary fluff). Do I need to say we’ll be extremely busy in the next couple of months?

Of course, we could also use some help here; if you’ve been through a bake-off or a comparison between UEBA tools, let us know how you’ve done it; we’d love to hear that!

The post Comparing UEBA Solutions appeared first on Augusto Barros.

from Augusto Barros

Friday, November 18, 2016

From my Gartner Blog - Deception Technologies – The Paper

After some very fun research, we’re finally publishing our paper on deception technologies:

Applying Deception Technologies and Techniques to Improve Threat Detection and Response
18 November 2016 | ID: G00314562
Augusto Barros | Anton Chuvakin

Summary: Deception is a viable option to improve threat detection and response capabilities. Technical professionals focused on security should evaluate deception as a “low-friction” method to detect lateral threat movement, and as an alternative or a complement to other detection technologies.

It was a very fun paper to write. We’ve been using and talking about honeypots and other deception techniques and technologies for ages, but it seems it’s finally time to use them in enterprise environments as part of a comprehensive security architecture and strategy. Here are some fun bits from the paper:

  • Many organizations report low-friction deployment, management and operation as the primary advantages of deception tools over other threat detection tools (such as SIEM, UEBA and NTA).
  • Improved detection capabilities are the main motivation of those who adopt deception technologies. Most have no motivation to actively engage with attackers, and cut access or interaction as soon as detection happens.
  • Test the effectiveness of deception tools by running a POC or a pilot in a production environment. Utilize threat simulation tools, or perform a quality penetration test without informing the testers about the deceptions in place.

(Overview of deception technologies – Gartner, 2016)

The corporate world has invested in many different technologies for threat detection. Yet, it is still hard to find organizations actively using deception techniques and technologies as part of their detection and response strategies, or for risk reduction outcomes.

However, with recent advances in technologies such as virtualization and software-defined networking (SDN), it has become easier to deploy, manage and monitor “honeypots,” the basic components of network-based deception, making deception techniques viable alternatives for regular organizations. At the same time, the limitations of existing security technologies have become more obvious, requiring a rebalance of focus from preventative approaches to detection and response.


Although a direct, fact-based comparison between the effectiveness of deception techniques and the effectiveness of other detection approaches does not exist, enough success reports do exist to justify including deception as part of a threat detection strategy.
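The “low-friction” claim is easy to see in miniature. A basic network honeypot is little more than a listener on a port nothing legitimate should ever touch, where any connection at all counts as a detection. This is an illustrative toy, nowhere near a real deception platform:

```python
import socket
import threading
import time

alerts = []  # in a real deployment this would feed the SIEM

def honeypot(sock):
    """Accept connections on a port no legitimate system should touch."""
    while True:
        try:
            conn, addr = sock.accept()
        except OSError:  # listening socket closed; stop serving
            return
        # Any connection here is suspicious by definition: record and drop.
        alerts.append({"src": addr[0], "port": sock.getsockname()[1]})
        conn.close()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
sock.listen(5)
threading.Thread(target=honeypot, args=(sock,), daemon=True).start()

# Simulate an attacker probing the decoy port.
probe = socket.create_connection(sock.getsockname())
probe.close()

time.sleep(0.2)  # give the listener thread a moment to log
print(alerts)
sock.close()
```

Real deception products add realistic service emulation, decoy content, centralized management and SIEM integration; the point here is only that the core detection logic carries almost no false-positive burden, which is where the low operational friction comes from.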

The post Deception Technologies – The Paper appeared first on Augusto Barros.


Monday, October 17, 2016

From my Gartner Blog - So You Want To Build A SOC?

Now you can! But should you do it?

As anticipated here and here, our new paper about how to plan, design, operate and evolve a Security Operations Center is out!

This is a big doc with guidance for organizations intending to build their SOC (or for those that have one and want to make it better :-)). One of the things we gave special attention to was the first question to be answered: do you need a SOC? It’s not as simple as it sounds, as the commitment of resources and the prerequisites, as the paper describes in detail, are quite big. There are alternatives (namely service providers) out there that should really be considered before embarking on that journey.

Also, even if you are certain you want (and need) to do it, you most certainly won’t do it alone. One of our main findings in this paper is that most SOCs are in fact hybrid SOCs, with service providers filling competency gaps and providing resources that are usually not cost effective to have in house unless you are a very particular (and rare) type of organization.

Here are a few interesting pieces from the paper:

“Although most existing security operations centers (SOCs) are modeled as alert pipelines, a good SOC includes threat intelligence (TI) consumption and generation practices tied closely to incident response (IR) and hunting activities.”

“Modern SOCs should move beyond SIEM and include additional technologies (such as NFT, EDR, TIP, UEBA, and SIRP) to improve visibility, threat detection and IR capabilities.”

“Any organization establishing a SOC should have a plan for staff retention from the outset. Security skills are rare, and attrition from the intense operational work that is natural for a SOC makes hiring and retention key issues for keeping a SOC functional.”

“There is no such thing as a list of “tools a SOC must have.” Many SOCs make do with serious tool limitations by compensating the deficiencies with process, additional people, alternative technologies (think SharePoint instead of SOAR tools) or scripts. However, the chances of success of a SOC greatly improve when tools providing visibility, analysis, and action and management are present. Most SOCs (at a basic maturity level) operate with, at minimum, a SIEM for analysis and VA tools for visibility. As the maturity of the SOC increases, the need for additional tools becomes stronger. A basic SOC, for example, can simply detect some malicious activity on the SIEM and send an email to the CSIRT or even to the help desk for action. That might be enough for organizations that just remove infected computers from the network and reimage them. But if the intent is to learn about the real extent of an incident (and whether other computers and assets have been compromised) and extract data to be used to improve preventive and detective controls, additional visibility (e.g., EDR and NFT) and management (e.g., workflow and case management) tools will be necessary.”

The paper is available for Gartner GTP clients. However, I’d like to point out that Anton recently did a webinar based on this same research, which is available for free on Gartner’s website. Have fun watching it and don’t forget to provide us feedback 😉

The post So You Want To Build A SOC? appeared first on Augusto Barros.


Friday, September 30, 2016

From my Gartner Blog - Deception as a Feature

One of the things we are also covering as part of our research on deception technologies is the inclusion of deception techniques as features in other security products. There are many solutions that could benefit from honeypots and honeytokens to increase their effectiveness: SIEM, UEBA, EDR, WAF, and others. We’ve been tracking a few cases where vendors added those features to their products and you can expect to see a few examples in our upcoming research.

Now, let’s explore this a bit further. The “pure deception” technologies market is still very incipient and not large in terms of revenue. The average ticket for this new pack of vendors is still small compared to the cost of other security technologies, which makes me wonder if it is a viable market for more than a couple of niche players. I don’t doubt there is a market, but it might not become big enough to accommodate all the vendors that are popping up every week.

Lawrence Pingree recently said, “deception is a new strategy that security programs can use for both detection and response”, and I certainly agree with him. My question then is: assuming deception keeps growing as an important component of security programs, will we see organizations adopting it via additional features of broader-scope security solutions, or will they necessarily have to buy (or build) exclusive platforms for it?

In the future, will we see organizations buying “deception products” or adding deception questions to their security products RFPs?

The post Deception as a Feature appeared first on Augusto Barros.


Tuesday, September 27, 2016

From my Gartner Blog - Building a Business Case for Deception

So we’ve been working on our deception technologies research (have we mentioned we want to hear YOUR story about how YOU are using those?) and one of the things we are trying to understand is how organizations are building business cases for deception tools. As Anton said, most of the time deception will be seen as a “nice to have”, not a “must have”. With so many organizations struggling to get money for the musts, how would they get money for a “should”?

Anton mentioned two main lines to justify the investment:

  1. Better threat detection
  2. Better (higher quality) alerts

In general, most arguments will support one of the two points above. However, I think we can add some more:

– More “business aligned” detection: with all these vendors doing things such as SCADA and SWIFT decoys, it looks like one of the key ideas used to justify deception tools is the ability to make them closely aligned to the attackers’ motivations. However, in the end, isn’t that just one way of supporting #1 above?

– Cheap (OK, “less expensive”) detection: most of the products out there are not as expensive as other detection technologies, and they are certainly cheaper when you consider the total cost of ownership (TCO). They usually cost less from a pure product price point of view and also require less gear/staff to operate. This is, IMO, the #3 on the list above, but it could also be seen as an expansion of #2 (high-quality alerts -> fewer resources used for response -> less expensive).

– Less friction or reduced risk of issues: Some security technologies can be problematic to implement, but it’s hard to break anything with deception tools; organizations that are too sensitive about messing with production environments might see deception as a good way to avoid unnecessary risks of disruption. I can see this as an interesting argument for IoT/OT (sensitive healthcare systems, for example). Do we have a #4?

– Acting as an alternative control: This is very similar to the point above. Some organizations will have situations where detection tools relying on network sniffing, log collection or agents just cannot be implemented. Think of cases such as no SPAN ports or taps being available/desirable, legacy systems that don’t generate events, or performance bottlenecks preventing the generation of log events or the installation of agents. When you have all those challenges and still want to improve detection, what do you do? Deception can be the alternative to doing nothing. This looks like a strong #5 to me.

– Diversity of approaches: This is a bit weak, but it makes some sense. You might have many detection systems at the network and endpoint level, but you’re still looking for malicious activity among all the noise of normal operations. Doesn’t it just make sense to have something that approaches the problem differently? I know it’s quite a weak argument, but surprisingly I believe many attempts to deploy deception tools start from this idea. At least for me it is worth a place on the list.

With all these we have a total of 6 points that could be used to justify an investment in deception technologies. What else do you see as a compelling argument for that? Also, how would you compare these tools to other security technologies if you only have resources or budget to deploy one of them? When does deception win?

Again, let us hear your stories!

The post Building a Business Case for Deception appeared first on Augusto Barros.


Tuesday, September 13, 2016

From my Gartner Blog - New Research: Deception Technologies!

With the work on our upcoming SOC paper and on the TI paper refresh winding down, we are preparing to start some exciting research in our new project: Deception Technologies!

We’ve been blogging about this for some time, but the time to do some structured research on the topic has finally come. There are many vendors offering interesting technology based on deception techniques, and we can see increased interest from our clients on the topic. Our intent is to write an assessment of the technologies and how they are being applied by organizations.

An interesting question to ponder is when an organization should adopt deception techniques. I briefly touched on this in my last post about the topic, but I need to expand on it as part of this research. For instance, when should an organization start deploying deception techniques? How to decide, for example, when to invest in a distributed deception platform (DDP) instead of another security technology? Also, when does it make sense to divert resources and effort to deception from other initiatives? It’s clear that an organization shouldn’t, for example, start deploying a DDP before doing a decent job on vulnerability management; but when you consider more recent technologies or things deployed by more mature organizations, such as UBA: does it make sense to do deception before that? How should we answer that question? Those are some of the questions we’ll try to answer with this research.

Of course, the vendors have been very responsive and willing to brief us on their products, but it’s also important for us to see things from the end-user perspective. So, if you are using deception technologies, let us know!

The post New Research: Deception Technologies! appeared first on Augusto Barros.


Monday, August 8, 2016

From my Gartner Blog - Arriving at a Modern SOC Model

While writing our new (and exciting) research on “how to build a SOC”, we came to the conclusion that a modern SOC has some interesting differences from the old vanilla SOC that most organizations have in place. In essence, the difference relates to the inclusion of threat intelligence and hunting/continuous IR activities. A traditional SOC operates more or less like this:


While the “newer” model is something like:


So far, this is not surprising or particularly exciting. That’s just plain evolution. Now, this becomes more interesting when you start to work on guidance for organizations that are right now planning to build their (new) SOC. Should they plan to build it as a modern SOC, or should they build it as a traditional SOC and then move to the modern model as it matures?

So far we haven’t seen substantial evidence to back either of those two options. I can see how “building it the right way” would make sense, as you don’t want to waste resources planning and writing processes twice, and there is no point in building a less effective model when you know there is a better way to do things. But the modern model also requires more resources (people and tools). Some of those newer processes are also usually seen only in organizations with mature security operations. Can they be performed by those that are not as mature? Do those processes actually work in immature organizations? This is a “do it right the first time” versus a “walk, then run” discussion.

Do you happen to have experience with a mature modern SOC? If so, how did you arrive there? Was it built like that or did it evolve from the traditional model? It would be even more interesting to hear from people with FAIL stories from one of those two approaches. Don’t be shy, let us hear your stories :-)

The post Arriving at a Modern SOC Model appeared first on Augusto Barros.


Friday, July 8, 2016

From my Gartner Blog - Are Security Monitoring Alerts Becoming Obsolete?

If I ask anyone working in a SOC for a high-level description of their monitoring process, the answer will most likely look like this:

“The SIEM generates an alert, the first-level analyst validates it and sends it to the second level. Then…”

Most SOCs today work by putting their first-level analysts – the most junior analysts, usually assigned to be the 24×7 eyes on the console – to parse the alerts generated by their security monitoring infrastructure and decide whether something needs action by the more experienced/skilled second level. There is usually some prioritization of the alerts through the assignment of severity levels, reminiscent of old syslog severity labels such as CRITICAL, WARNING, INFORMATIONAL, DEBUG.

Most SOCs will have far more alerts being generated than manpower resources to address all of them, so they usually put rules in place such as “let’s address all HIGHs immediately, address as much of the MEDIUMs as we can, don’t need to touch the LOWs”. It is certainly prioritization 101, but what happens when there are too many critical/high alerts? Should they prioritize inside that group as well? Also, what if many medium or low severity alerts are being generated about the same entity (an IP, or a user), isn’t that something that should be bumped up in the prioritization queue? Many teams and tools try to address those concerns in one way or another, but I have the impression that this entire model is showing signs of decline.

If we take a careful look at the interfaces of the newest generation of security tools, we will notice that alerts are no longer the entities listed on the primary screens; what most tools do now is consolidate the many different generated alerts into a numeric scoring mechanism for different entities, most commonly users and endpoints. Most tools call those scores “risk scores” (which is awful and confusing, as they usually have nothing to do with “risk”). The idea is to show on the main screen the entities with the top scores, those with the most signs of security issues linked to them, so the analyst can click on one of them and see all the reasons, or alerts, behind the high score. This automatically addresses both the issue of prioritizing among the most critical alerts and the concern about multiple alerts on a single entity.

For a SOC using a score-based view, the triage process could be adapted in two different ways. In the first, the highest scores are addressed directly by the second level, removing the first-level pre-assessment and allowing a faster response to what is more likely to be a serious issue, while the first level works on a second tier of scores. The second way would be to use the same method of initial parsing by the first level, but with the basic difference that analysts would keep picking entities from the top of the list and work as far into it as they can, sending the cases that require further action to the second level (which can apply the same approach to the cases being forwarded by L1).

This may look like a simple change (or, for the cynics, no change at all), but using scores can really be a good way to improve the prioritization of SOC efforts. And scores are not only useful for that. They are also a mechanism to improve the correlation of security events, usually coming from different security monitoring systems or even from SIEM correlation rules.

What we normally see as security event correlation is something like “if you see X and Y, alert”, or “if you see X n times, then Y, alert”. Recently, many correlation rules have been created trying to reflect the “attack chain”: “if you find a malware infection event, followed by a payload download and an established C&C, alert”. The issue with that is that you need very good normalization of the existing events in order to keep the number of rules at an acceptable level (you don’t want to write a rule for every combination where there is an event related to C&C detection, for example). You could also miss attacks where the observed events do not follow the expected attack chain.
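That attack-chain style of rule can be caricatured in a few lines of Python (the event names are hypothetical; real SIEM rules are written in each product’s own correlation language):

```python
def attack_chain_rule(events):
    """Alert only if the expected chain appears, in order, in the stream."""
    chain = ["malware_infection", "payload_download", "c2_established"]
    stream = iter(e["type"] for e in events)
    # Each stage must be found, in order, somewhere in the remaining stream.
    return all(stage in stream for stage in chain)

events = [
    {"type": "malware_infection"},
    {"type": "payload_download"},
    {"type": "c2_established"},
]
print(attack_chain_rule(events))        # True: full chain observed, in order
print(attack_chain_rule(events[::-1]))  # False: same events, wrong order
```

A chain observed out of order, or with one stage missing or mis-normalized, produces no alert at all, which is exactly the brittleness described above.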

The improvement in prioritization comes from the fact that in this new model every event increments the scores of the associated entities by a certain discrete amount. Any time a new event or event type is defined within the system, the number of points to be added to the score is determined. Smarter systems could even define those points dynamically according to attributes of the event (more points for a data exfiltration event when more transferred data is detected). The beauty of the score model is that scores go up (and eventually hit a point where the entity becomes a target for further investigation) through any combination of events, with no need to envision the full attack chain in advance and describe it in a correlation rule. This is how most modern UEBA (User and Entity Behavior Analytics) tools work today: a set of interesting anomalies is defined within the system (either by pre-packaged content or by the users), and every time they are observed the scores of the affected entities are incremented.
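A minimal sketch of that score model (the event types and point values are invented for illustration; real products ship tuned or learned weights):

```python
from collections import Counter

# Illustrative points per event type; real products tune or learn these.
WEIGHTS = {
    "abnormal_logon_time": 10,
    "rare_resource_access": 15,
    "large_upload": 25,
    "c2_beacon_pattern": 40,
}

def score_entities(events):
    """Sum points per entity; any mix of anomalies can push an entity up."""
    scores = Counter()
    for e in events:
        scores[e["entity"]] += WEIGHTS.get(e["type"], 5)  # unknown types: small default
    return scores

events = [
    {"entity": "alice", "type": "abnormal_logon_time"},
    {"entity": "alice", "type": "large_upload"},
    {"entity": "srv-42", "type": "c2_beacon_pattern"},
    {"entity": "bob", "type": "rare_resource_access"},
]
for entity, score in score_entities(events).most_common(2):
    print(entity, score)  # the analyst works from the top of this list
```

Note that no rule author had to anticipate the combination of “abnormal logon time” plus “large upload”; any mix of events can push an entity over an investigation threshold.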

Here is a nice example of a UEBA tool interface using scores:


Score-based monitoring systems can improve even further. Feedback from the analysts could be used to dynamically adapt the scores of each event or event type, using something like Naive Bayes, for example. We’ve been doing that for spam filtering for ages.
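That feedback loop might look something like the following sketch: keep per-event-type counts of analyst verdicts and weight each event type by a smoothed log-likelihood ratio, in the spirit of Bayesian spam filters. The event names and verdict counts are invented, and this is not any specific product’s method:

```python
import math
from collections import defaultdict

class AdaptiveWeights:
    """Learn event-type weights from analyst feedback, spam-filter style."""

    def __init__(self):
        # counts[event_type] = [seen in true positives, seen in false positives]
        self.counts = defaultdict(lambda: [1, 1])  # Laplace smoothing

    def feedback(self, event_type, was_true_positive):
        self.counts[event_type][0 if was_true_positive else 1] += 1

    def weight(self, event_type):
        tp, fp = self.counts[event_type]
        # Log-likelihood ratio: >0 means the event type predicts real incidents.
        return math.log(tp / fp)

w = AdaptiveWeights()
for _ in range(9):
    w.feedback("c2_beacon_pattern", True)     # analysts keep confirming these
for _ in range(9):
    w.feedback("abnormal_logon_time", False)  # these keep turning out benign

print(round(w.weight("c2_beacon_pattern"), 2))    # positive: score-worthy
print(round(w.weight("abnormal_logon_time"), 2))  # negative: down-weighted
```

Event types that analysts keep dismissing drift toward negative weights, so they stop inflating entity scores without anyone having to edit rules by hand.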

The score trend is already clear on the technology side; SOC managers and analysts should review their processes and training to get the full benefits of this approach. How do you see your organization adopting it? Feasible? Just a distant dream? Maybe you think it doesn’t make sense?

Of course, if your SOC is already working on a score based approach, I’d also love to hear about that experience!

The post Are Security Monitoring Alerts Becoming Obsolete? appeared first on Augusto Barros.


Wednesday, July 6, 2016

From my Gartner Blog - What It’s Like to Use Non-MRTI Threat Intelligence

We often hear clients asking about threat-intelligence-related processes: how to collect, refine and utilize TI (by the way, this document is being updated; let us know if you have feedback about it!). It’s very easy to explain and visualize when we are talking about machine-readable TI (MRTI for short): your tools ingest the data feed and look for the IOCs in your environment. But what about the other type of threat intelligence, the “non-MRTI” type?

Here’s a simple example. Take a look at this post from the McAfee Labs Blog. It is a nice explanation of a somewhat new exploitation technique used by malware they recently analyzed. This is a typical “TTP” (Tactics, Techniques and Procedures) piece of TI (and by the way… did you notice it’s FREELY AVAILABLE?). It describes threat behavior. Of course, it would be more valuable if there were more information linking it to threat actors, campaigns, etc., but it is valuable nevertheless. But coming back to the point of this post: why am I talking about it?

Because you can use it to check where you are in terms of processes to leverage this kind of TI. Try to answer, for example, some of these questions:

  • Do I have people looking for and reading this type of information?
  • Do I have a process that takes this type of information and turns it into actionable advice for my security operations?

With that you can see if the basic processes are in place; you can further extend this small self-assessment with more detailed questions such as:

  • Would this technique work in my environment?
  • Am I currently prepared (in terms of tools and monitoring use cases) to detect this?
  • If not, what changes do I need to make to my environment and tools to detect it?

Some people expect some ethereal process or method when we talk about consuming TI; there’s nothing special about it. If you can answer “yes” to all, or even some, of the questions above, you’re already doing it. Of course, there are different maturity levels, types of TI and sources of information, but all of that can evolve over time. So, if you are thinking about your capabilities to consume TI, take a look at the example above. It might give you some interesting insights.



The post What It’s Like to Use Non-MRTI Threat Intelligence appeared first on Augusto Barros.


From my Gartner Blog - Coming to Sao Paulo for the Gartner Security Summit

I’m very excited to come back to São Paulo for the Gartner Security and Risk Management Summit in August. On August 2nd and 3rd I’ll have a packed schedule there, including a shared keynote with analysts Claudio Neiva and Felix Gaehtgens. The other sessions I’ll be delivering during the event are (sessions to be delivered in Portuguese):

Tue, 2 Aug 13:45 – 14:30 – The World Is Changing: How Does That Affect My Vulnerability Management Program?
This session covers the effects of the latest technology trends, such as the expansion of the cloud and the use of mobility, on vulnerability management programs. It discusses the challenges these trends pose to VM (vulnerability management) processes, as well as strategies to deal with their implications and thereby adapt the program to keep risk under control. Key questions: How to perform vulnerability assessment in cloud, mobile and IoT environments? How to integrate vulnerability management with DevOps practices? How to deal with the continuous growth in the volume of vulnerability data generated by vulnerability assessments?
Wed, 3 Aug 09:15 – 10:45 – Development, Implementation and Optimization of Security Monitoring Use Cases
This workshop will focus, through peer collaboration, on the implementation and optimization of security monitoring use cases. Participants will be guided through Gartner’s framework to identify and refine their requirements in order to produce their own security monitoring use cases based on their current challenges and priorities. Key questions: How to select security monitoring use cases? How to prioritize use cases for implementation? How to optimize security monitoring use cases?
Wed, 3 Aug 13:45 – 14:30 – Roundtable: Building and Maintaining an Effective Vulnerability Management Program
Participants should bring their experiences with vulnerability management and the challenges they face in making it an effective security measure. The roundtable will discuss potential solutions and how effective vulnerability management programs can deal with these challenges and remain relevant as part of the overall security control framework.
Wed, 3 Aug 15:45 – 16:30 – Developing Security Monitoring Use Cases: How to Execute Well
Security monitoring systems, such as SIEM, are only effective when the appropriate content is implemented and optimized to deliver results aligned with the organization’s key risks and needs. This session presents Gartner’s framework to identify, prioritize, implement and optimize security monitoring use cases. Key questions: How to select security monitoring use cases? How to prioritize use cases for implementation? How to optimize security monitoring use cases?
If you are coming to any of these sessions, please come and say “Olá” :-)

The post Coming to Sao Paulo for the Gartner Security Summit appeared first on Augusto Barros.


Tuesday, July 5, 2016

From my Gartner Blog - The EDR Comparison Paper is Out!

This is old news, but the paper was published right before the maelstrom of the Gartner Security Summit. The paper compares EDR solutions from 10 vendors (those most visible to Gartner based on the number of inquiry calls specifically about EDR):

  • Carbon Black Enterprise Response
  • Cisco Advanced Malware Protection for Endpoints
  • Confer
  • CounterTack
  • CrowdStrike Falcon
  • Cybereason
  • FireEye Endpoint Security (HX Series)
  • Guidance Software’s EnCase Endpoint Security
  • RSA, The Security Division of EMC, Enterprise Compromise Assessment Tool (ECAT)
  • Tanium


The paper includes two major comparisons: a view of EDR tool capabilities based on our previous paper on the subject, and another on how well each of those tools supports the five EDR use cases (also identified in the previous paper):


  • Incident data search and investigation
  • Suspicious activity detection
  • Alert triage or suspicious activity validation
  • Threat hunting or data exploration
  • Stopping malicious activity

The details of the criteria used for the comparison, as well as the results, can be found in the paper (Gartner GTP subscription required). However, I can highlight a few of the key findings from our research:

  • Endpoint detection and response (EDR) vendors are often competing for the same budget used for endpoint protection platforms (EPPs) and other endpoint security tools, as well as for advanced threat and IR budgets, if available.
  • EDR is not a replacement for other endpoint security tools; it is often a detection and visibility complement to other tools providing endpoint security capabilities.
  • On end-user devices, Mac OS support is becoming more common, but some EDR solutions still don’t support it. Support for mobile devices is even more complicated and almost nonexistent.

You can also see Anton’s posts about our recent EDR research.


From my Gartner Blog - Notes From My First Security Summit

I’ve finally found some time to collect my notes and impressions from my first Gartner Security and Risk Management Summit, back in June. I delivered one full session on Vulnerability Management and a short debate session with Anton Chuvakin about outsourcing security operations. We also hosted a roundtable on Vulnerability Management and a workshop on developing security monitoring use cases. On top of that, there were many one-on-one meetings with attendees and vendor meetings. Yes, it was a very busy week!

For those who went to the event but couldn’t catch the sessions, they are available on Gartner Events on Demand. If you find time to watch them, feel free to provide feedback in this space too, ok?

Some of my notes from the summit pointed to a couple of trends that I thought would be interesting to share:

  • Many midsize organizations are still in “we’re just starting now” mode; yes, it’s 2016, but there are still organizations out there taking their first steps on a security program. It’s interesting to see some common trends among them: challenges in dealing with MSSPs, how to measure the results of their programs, and finding the appropriate skills for the team.


  • Vulnerability scan results are still showing too many inconsistencies: yes, it’s 2016 (again) and we’re still seeing many organizations complaining that the results of their VA tools are not reliable and often plagued with false positives. This is an interesting result of a “market for lemons” scenario: it’s too hard for organizations to compare the quality of the results from the scanners available on the market, so there’s no incentive for those vendors to improve on that front. If you are a VA tool vendor struggling to differentiate yourself from the pack, pay attention to this: find a good way to prove your results are more reliable; there are organizations out there that would see it as a big enough reason to switch from their current solution.

The next event I’ll be presenting is in early August, the security summit in São Paulo. It’ll be fun to meet some old friends there, and a chance to dust off the Portuguese. Hope to see some of you there.


Thursday, May 19, 2016

From my Gartner Blog - Our first EDR paper is OUT!

It’s almost impossible to get ahead of Dr. Chuvakin on blog posts and announcing new research, but I’m lucky enough he is driving at this precise moment and not able to do it before me :-)

Our first of two Endpoint Detection and Response papers, “Endpoint Detection and Response Tool Architecture and Practices”, is out.

This document should be the “starting point” to anyone trying to understand what EDR tools are, what they should be used for and what to consider before implementing this technology. Key EDR use cases are incident-related search and investigation, suspicious activity detection, alert triage and validation, threat hunting, and stopping malicious activity.

Things you can find on this paper:

  • EDR Definition
  • EDR Key Capabilities
  • Why did EDR tools appear?
  • Building a Business Case for EDR

And much more. I hope you enjoy it. The next one is a comparison of the most visible EDR tools out there; it’ll be out in a few days.


Wednesday, April 13, 2016

From my Gartner Blog - How to Plan and Execute Modern Security Incident Response – NEW

I had the opportunity to work with Anton on updating one of his best documents, “How to Plan and Execute Modern Security Incident Response”, which was published today (GTP access required). The document is a nice assessment of what organizations should be doing in terms of incident response today. It covers some of the basics, but also the changes we’ve been seeing in those practices in the past couple of years, especially the move to continuous IR. As we say there,

“The traditional route of detecting incidents using security monitoring technologies is not the whole answer to today’s threat landscape, which is laden with skilled and persistent threat actors. Leading organizations don’t just develop excellent security monitoring capabilities that operate in near-real time (such as mature SOC capabilities based on SIEM tools). They also seek to explore the data they collect in order to discover — rather than detect in real time — incidents that their own detection controls missed.”

This is just one of the juicy bits from the document. You can read more about it in Anton’s blog.


From my Gartner Blog - Gartner Security & Risk Management Summit – US

So, the great Security & Risk Management Summit is approaching (June 13-16), and I’m happy to be one of the speakers there. My sessions on the agenda are:

Please come and say hi, it’s always good to know who reads this blog :-)



Wednesday, March 16, 2016

From my Gartner Blog - RSA Conference 2016 observations

It’s a bit late to write about what I saw at RSA this year (it’s almost time for the Gartner Security & Risk Management Summit!), but I’ve finally defeated procrastination and managed to write down my thoughts. Here it is:


Keywords: isolation, visibility, “analytics”, deep/smart/machine learning: most booths would have at least one of these. A more careful analysis indicated that technologies such as SDN and microvirtualization are bringing a new wave of isolation and compartmentalization products. Also, the message that the attackers are already in has finally been absorbed, generating demand for visibility technologies. And, finally, analytics and machine learning, because they sound cool and someone needs to provide the Kool-Aid.


Crazy feature combinations: Anton mentioned this in his analysis too; many vendors are building odd combinations of features, making it very hard to define what their products are about. In essence, it seems that there is a general lack of vision for product roadmaps and a frantic attempt by the startups to meet the needs of their first big customers, in a kind of “roadmap by the biggest cash cow” mode.


The brains are moving: The “brains” of security monitoring environments used to be in the SIEM, the central point where all events and alerts would be correlated and prioritized. It seems that many organizations are giving up on that model, putting the brains in each monitoring component (EDR tools for endpoint monitoring, NFT tools for network monitoring, UBA tools for user activity monitoring, etc.) and using the SIEM just as a simple SOC interface, or even as a data source for those external “brains”. There are also those cases where vendors provide “brains as a service”, consuming data from the client environment, processing it in the cloud with proprietary engines (“analytics”, ML, very smart analysts, correlation with very exclusive TI, etc.) and delivering alerts or “badness scores” for entities. Some of those vendors believe they can provide an alternative to SIEM, which is very resource demanding, using this model.


For years we’ve been listening to peers criticizing the “RSA circus”. I understand their frustration and lack of tolerance for all the marketing and buzzwords, but for most organizations those vendors are the primary source of security technology and skills. They need to navigate through that craziness to find the pieces they need for their security strategies. Being there and assessing what is being offered is crucial to understanding how to translate common needs into product and service requirements that can actually be addressed by what is on the shelves. It could certainly be easier if there were less spin and fewer unreasonable marketing approaches, but with the amount of money being spent on security, that is just a utopian desire. We need to deal with the chaos and learn how to extract what we need from it.


Tuesday, February 16, 2016

From my Gartner Blog - The Security Monitoring Use Cases Paper is Here!

I’m very happy to announce that our paper on “How to Develop and Maintain Security Monitoring Use Cases” has just been published! This is the result of our work to provide a structured approach for organizations that need to operate their security monitoring infrastructure in an integrated and coordinated way, aligning their monitoring activities with the overall security planning efforts.

Some interesting pieces from the paper:

“Use cases can be created from three different sources: compliance, threat detection and asset oriented.”

“Monitoring use cases are generally seen as SIEM content, but also can be implemented with other technologies, including user and entity behavior analytics (UEBA), data loss prevention (DLP) and others.”

“An organization can have too much process overhead in this area — agility and predictability are both needed.”

“Many organizations focus on implementing canned vendor UC content, and that approach is workable, as long as the content is tuned and further steps are taken.”

“Given all those security problems to solve, which ones should the organization do first? For example, some security architects claim that SIEM use cases must always be selected by order of importance, but that is a big mistake. Gartner research indicates that organizations should not undertake a complex and hard to develop use case as a first phase, unless absolutely necessary and unless all precautions (such as moving in small steps) are taken. On the other hand, “do only what is easy” will not yield the desired results either. A much better order is a balance of importance with “feasibility” (that is, ease of implementation).”
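The “balance of importance with ‘feasibility’” idea in that excerpt can be sketched as a simple scoring exercise. This is a hypothetical illustration only: the use-case names, scores and the 50/50 weighting are all made-up assumptions, not anything from the paper:

```python
# Hypothetical sketch: rank candidate SIEM use cases by balancing
# importance against feasibility (ease of implementation), both scored 1-5.
use_cases = [
    # (name, importance, feasibility) -- illustrative values only
    ("Failed VPN logins from unusual countries", 4, 5),
    ("Lateral movement via admin shares", 5, 2),
    ("Privileged account creation outside change windows", 4, 4),
]

def priority(importance, feasibility, weight=0.5):
    """Blend the two scores; weight=0.5 values them equally."""
    return weight * importance + (1 - weight) * feasibility

# Highest blended score first: important AND easy beats important but hard.
ranked = sorted(use_cases, key=lambda uc: priority(uc[1], uc[2]), reverse=True)
for name, imp, feas in ranked:
    print(f"{priority(imp, feas):.1f}  {name}")
```

With these invented scores, the highly important but hard-to-build lateral movement use case drops below two easier wins, which is exactly the “balance, don’t just rank by importance” point.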

“The organization beginning its journey into security monitoring and use-case development should start implementing use cases one by one, using the experience to improve the processes and putting together the basic technology components that will form the core of the security monitoring infrastructure. In a “walk, then run” way, it can expand the cycles to implement multiple use cases simultaneously later, especially when the use cases share similarities on chosen technology, data sources and objectives. “

“Use cases almost never operate under static conditions; the IT and threat environments are very dynamic and could affect the use-case value, relevance and performance. Situations not identified by change management or security intelligence processes, or cases of undetected slow changes, could be identified during a periodic review of the use cases. These reviews can be built as general periodic cycles where all existing use cases are reviewed or based on a “use-case schedule” and each has its own review date based on when it was originally implemented or last reviewed. This approach requires more work on maintaining the review schedule, but also avoids accumulating too much review work on a single task. It also requires just a few reviews happening frequently instead of a big batch of work that ends up creating an audit like “use-case review season.” “
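The per-use-case “review date” approach described above can be sketched in a few lines. The six-month interval, class shape and use-case names here are assumptions for illustration, not anything prescribed by the paper:

```python
# Hypothetical sketch: each use case carries its own review date, so reviews
# trickle in a few at a time instead of piling up into one "review season".
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=180)  # assumed six-month review cycle

class UseCase:
    def __init__(self, name, implemented_on):
        self.name = name
        # The first review is counted from the implementation date.
        self.last_reviewed = implemented_on

    def due(self, today):
        return today - self.last_reviewed >= REVIEW_INTERVAL

catalog = [
    UseCase("Brute-force detection", date(2015, 3, 1)),
    UseCase("DLP policy alerts", date(2015, 11, 15)),
]

today = date(2016, 2, 1)
# Only use cases past their own interval come up for review on any given day.
due_now = [uc.name for uc in catalog if uc.due(today)]
print(due_now)
```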


Now, as mentioned before, we’re full speed ahead with EDR. Stay tuned!


Tuesday, February 9, 2016

From my Gartner Blog - The D in EDR

The research on EDR tools and practices renders some very interesting discussions on tool capabilities. While many EDR vendors focus on their fast searching and automated IOC checking capabilities, the “Detection” piece is always a fun one to talk about. Especially when you discard the basic “blacklist” approach, which, by the way, may not be as simple as we think (malware polymorphism makes it far more challenging than most people assume).

What would you expect from an EDR tool regarding “Detection”, considering we are not including basic IOC matching? Write down your answer, then look at it. Isn’t that something you would expect, for example, from your antivirus (or “Endpoint Protection Platform”, its grown-up name)? What kind of detection capabilities should we expect from an EDR tool but not from an EPP?

Most EDR tools trying to do something beyond EPP are taking a “behavior”-based approach. Identifying exactly what the vendors refer to as “behavior-based detection” is another interesting challenge. If you hard-code into your tool something considered a malicious behavior (something like “disabling AV”, “setting up hidden persistence”, “establishing contact with a C&C server” or “searching for data files or memory pages containing credit card numbers”), is it “behavior-based” detection or just a fancy signature (or “rule”)?

There are no strong definitions or descriptions for capabilities such as “behavior-based detection” and “anomaly detection” (isn’t it funny that some tools doing that define what an “anomaly” is just like a… signature?). Add to that the claims about machine learning, AI, etc., and we have the perfect storm of inflated claims and, unfortunately, inflated expectations. It also makes life a nightmare for anyone comparing solutions.

To be fair to all those tools, identifying malicious activity, or just malware (malware as the main vehicle for malicious activity is so prevalent now that we often forget it is not a requirement), is very hard. Computers can do anything, and it’s hard to tell when some instructions are part of malicious activity and when they are not. Some PowerShell use, for example, would be expected from system administrators and power users, but is often a good indication of malicious activity when done by a “regular” user. Only the context (which sometimes is only different from a human point of view) will tell if it’s good or bad. A malware dropper behaves almost exactly like an installer or the auto-update component of regular software.
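The PowerShell example above boils down to the same event scoring differently depending on context. A minimal sketch of that idea, where the admin list, process names and score values are all hypothetical assumptions, not a real product’s logic:

```python
# Hypothetical sketch: the same event (a powershell.exe launch) scores
# differently depending on who ran it and what spawned it.
ADMINS = {"j.ops", "svc_deploy"}  # assumed list of admins and power users

def score_powershell_launch(user, parent_process):
    """Return a rough suspicion score for a PowerShell execution."""
    score = 0
    if user not in ADMINS:
        score += 50  # regular users rarely have a reason to run PowerShell
    if parent_process.lower() in {"winword.exe", "excel.exe", "outlook.exe"}:
        score += 40  # an Office app spawning PowerShell is a classic dropper pattern
    return score

print(score_powershell_launch("j.ops", "cmd.exe"))        # admin activity: expected
print(score_powershell_launch("a.clerk", "winword.exe"))  # worth an analyst's look
```

Even this toy rule shows why “behavior-based” claims need scrutiny: the context that separates good from bad still has to be spelled out somewhere, whether by the vendor or by you.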

Removing the inflated claims, the existing detection capabilities are not that bad. If it’s so hard to identify what is malicious and what is not, we may need to keep explaining that to the tools. The real risk of not meeting expectations lies in believing that the tool doesn’t need to learn, or in not fully understanding who has the role of teacher. It might be primarily the vendor, but you still need to be able to assess whether they are doing that appropriately.

What does that mean? It means tools need to be tested before buying and constantly after implementation too. Understanding how existing and emerging threats behave and how the tools would react to them is crucial to ensuring they will keep detecting bad stuff. If you have resources that can obtain that information (here’s where that “other” Threat Intelligence comes into play) and translate it into the right questions (or test scenarios) for the vendors, you’ll be able to stay aware of your tools’ capabilities and limitations. And, of course, identify the snake oil when you see it in that booth at RSA 😉


Wednesday, February 3, 2016

From my Gartner Blog - SIEM Architecture and Operational Processes UPDATE!

My favorite Gartner GTP research document has just been updated:

Security Information and Event Management Architecture and Operational Processes

Using security information and event management requires more than just buying the right technology. Security architects must understand how to properly design and operate SIEM; this is critical to avoiding the costly mistake of an ineffective or failed deployment.

This document is a full guide to organizations planning to buy or implement a SIEM. It also has lots of content for those that have a SIEM in place but are struggling with getting the full value from it. It was published by Anton Chuvakin back in 2013, updated in 2014 and again now – with the addition of a co-author :-)



Wednesday, January 20, 2016

From my Gartner Blog - Security Market Madness

There has been a common feeling of confusion during vendor briefings these days about “what the product is about”. It’s crazy, but we’ve been spending a lot of time just trying to match products to existing definitions. It could be just a case of outdated definitions and the need to create new ones (Noooooooooo), but it’s deeper than that: we are seeing many different capabilities being packaged in completely different ways. So, you talk to a vendor known as an “Endpoint Detection and Response” vendor, who could also be seen as a regular (or NG) antivirus vendor or, wait for it, a behavior analytics tool vendor!

That’s not only confusing for us analysts; it also makes it harder for clients to assess and select products. We know it is happening when we talk to clients and vendors and see that tools presumably from different “categories” are competing against each other in the same initiatives. There are organizations out there comparing a UBA tool with EDR, or NFT with SIEM, etc. Why is this happening?

 I can see two possible explanations:

  • No one has a clue about what they need to buy, or even what they need: this is the cynic in me speaking. Organizations work in a crazy reactive mode under the pressure of “doing something”, converting that into “buying something” without necessarily knowing what is necessary and what should be bought. Of course, this is a very common and well-known path to failure.
  • Organizations are approaching the same problems in vastly different ways: there is that old saying about “many ways to skin a cat”. There are many ways of “doing security” too. Security organizations can be split into different roles and groups, using different sets of tools and building on top of different architectures. Of course, much of it will be very similar, but there’s room for different approaches. The diversity in product packaging could be explained by organizations approaching vendors with the same requirements grouped in different sets, according to how they choose to operate.

I believe the truth lies somewhere between those two. Is there anything else I’m missing here? Maybe the incentives for vendors to get VC funding are shaping how they present their offerings too? What do you think is behind this craziness?

Anyway, I believe the RSA Conference next month will give us a good opportunity to try to answer that. Let’s see what the Expo floor will look like and what people will be saying there.


Tuesday, January 19, 2016

From my Gartner Blog - Webinar on Security Monitoring Use Cases

As I mentioned (many times) before, our current research covers Security Monitoring Use Cases. We’ve been busy writing about that and the paper will be available soon to Gartner clients. However, I’m also delivering a webinar on the subject later this month. Good news: This one is open to everyone! Feel free to sign up on the link below, and please, bring your questions too :-)

Developing Security Monitoring Use Cases: How to Do It Right
January 28th, 9AM EST

Discussion Topics:

  • How to select security monitoring use cases
  • How to prioritize use cases for implementation
  • How to optimize security monitoring use cases

Security monitoring systems are only effective when the appropriate content is implemented and optimized to provide results. This webinar provides guidance on how to effectively identify, prioritize, implement and optimize security monitoring use cases.



Thursday, January 14, 2016

From my Gartner Blog - Yes, Give Deception a Chance!

So, Anton finally brought the deception subject up on his blog, leaving a small bait for me at the end of his post. Ok, that’s a great subject to return to my blogging activities in 2016.

A few years ago I jumped into a discussion about the evolution of honeypots and how to make them more useful for enterprises. The term “honeytoken” was born at that moment, but it was in fact an old concept (just check Cliff Stoll’s “Cuckoo’s Egg” book, where he applied the idea to catch the hacker playing with his systems back in the ’80s). The idea was widely discussed at the time but, like many other deception techniques, it never became mainstream, and the majority of organizations still don’t do anything similar. Why, we keep asking?


The main reason is that applying deception (I’m considering deception here as a detection mechanism only) is hardly ever seen as a requirement for decent security. With most organizations struggling to keep their heads above water, it wouldn’t make sense to invest time and resources in something that is not a “must”. Deception is certainly not a basic, fundamental security control, and it doesn’t make sense to invest in it when you’re still struggling with the basics. I admire the vendors that offer exclusively deception-based solutions: their sales job is far more difficult than that of those selling things considered required for a minimum level of security.


People would usually read what I just wrote and think, “ok, so I can forget about deception, as there’s still a lot to be done that is more important”. Not necessarily. The selection of tools and practices to apply is not a simple decision. It is mostly a resource (budget, people, time) allocation problem, but there are many additional factors that make it far more interesting than it seems. In fact, when planning detection capabilities, constraints and opportunities come in all different shapes and colors. Those will create situations where deception will make sense as the next step or measure to apply. You may have a strong monitoring infrastructure on the perimeter, for example, and not enough resources for big initiatives such as rolling out EDR or NFT for the internal network. Why not put some honeypots in place to minimize some of that gap? I believe this is not as simple as “only the most mature organizations should apply deception”. I believe there is a point on the maturity scale (not the highest!) where deception starts to be one of the things that could be useful for the organization. You know those video games where you “unlock” new weapons and items that you can use to keep going? Yes, deception is one of the items unlocked in the middle of the game.


We’ve been seeing a lot of guidance about how to look for threats inside the organization, working with red and blue teams, considering all phases of the attack chain, etc. We are far past the point where there was a generic recipe for doing security monitoring right. Your security monitoring capabilities should be a composition assembled according to your environment and the threats you are concerned about. In many cases, applying some deception will eventually make sense. The question is not whether organizations in general should be doing it, but whether they consider it part of what they can do.


Apart from organizations planning their own security, there are also the security tool vendors working on the evolution of their products. That’s also an opportunity for deception techniques to be applied. Tools that track the behavior of users and other entities for anomalies can benefit from deception techniques (with access to honeypots and honeytokens being the ultimate behavior anomaly), and some vendors are already adding that to their feature sets. An organization selecting detection products should consider those that can also apply deception techniques, as they expand the range of available detection capabilities.
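What makes honeytoken access the “ultimate behavior anomaly” is that it needs no baseline at all: nobody legitimate should ever touch the planted object. A minimal sketch of that check, where the token names, event fields and severity label are all invented for illustration:

```python
# Hypothetical sketch: any event touching a planted honeytoken is an alert,
# with no profiling or baselining required.
HONEYTOKENS = {
    "svc_backup_old",                        # fake account planted in the directory
    r"\\fileserver\hr\salaries_2016.xlsx",   # decoy file nobody should open
}

def check_event(actor, resource):
    """Return an alert dict if the event touches a honeytoken, else None."""
    if resource in HONEYTOKENS or actor in HONEYTOKENS:
        return {"severity": "high", "actor": actor, "resource": resource,
                "reason": "honeytoken access"}
    return None  # not a honeytoken; normal anomaly scoring would apply here

alert = check_event("a.clerk", r"\\fileserver\hr\salaries_2016.xlsx")
print(alert["reason"] if alert else "no alert")
```

The contrast with profiling-based detection is the point: a UBA tool has to learn what “normal” looks like, while a honeytoken hit is unambiguous by construction.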


As our own research indicates, deception use by organizations is increasing (“By 2018, 10% of enterprises will use deception tools and tactics, and actively participate in deception operations against attackers”). However, I doubt it will ever be considered a “must do” security control. Still, security practitioners should not discard it as a viable option for improving detection, and those keeping it in their toolbox will always have more options for building a good security monitoring environment.
