Tuesday, April 18, 2017

From my Gartner Blog - Speaking at Gartner Security and Risk Mgmt Summit 2017

Another year, another Gartner Security and Risk Management Summit! The event will be in DC, from June 12th to 15th. I’ll be presenting two sessions this year:

  • Endpoint Detection and Response (EDR) Tool Architecture and Operations Practices – June 12th, 10:30AM
    Increased complexity and frequency of attacks elevate the need for enterprise-scale incident response, broad investigations and endpoint threat detection that goes beyond malware. This presentation will cover how endpoint detection and response tools help organizations speedily investigate security incidents and detect malicious activities and behaviors. Key points covered in this session include:
      • What are the top EDR use cases?
      • How to use EDR for threat detection.
      • What security processes are helped by EDR?
  • Applying Deception for Threat Detection and Response – June 14th, 9:45AM
    Deception is emerging as a viable option to improve threat detection and response capabilities. This presentation focuses on using deception as a “low-friction” method to detect lateral threat movement, and as an alternative or a complement to other detection technologies. This session will cover:
      • Should your organization utilize threat deception?
      • What tools and techniques are available for threat deception?
      • How to use deception to improve your current threat detection effectiveness.
      • How to customize and tune the deception controls.
      • What are the emerging operational practices around deception?

I also have a workshop and a roundtable together with Anton (who will be speaking about UEBA and SOC):

  • Workshop: Developing, Implementing and Optimizing Security Monitoring Use Cases – June 12th, 2:45PM
    This workshop will, through peer collaboration, focus on developing, implementing and optimizing security monitoring use cases. The participants will be guided through the Gartner framework to identify and refine their requirements to produce their own security monitoring use cases based on their current challenges and priorities.
  • Roundtable: Lessons Learned From Security Analytics Adventures – June 14th, 2:45PM
    Many organizations have been venturing beyond SIEM and applying advanced analytics techniques and approaches to security. This roundtable is an opportunity for organizations with security analytics initiatives to share their findings and expose their current challenges on how to make it effective.

If you’re planning to attend any of these sessions, please drop by and say ‘hi’. It’s always nice to meet readers of the blog :-)


The post Speaking at Gartner Security and Risk Mgmt Summit 2017 appeared first on Augusto Barros.

from Augusto Barros http://ift.tt/2opT4g0

From my Gartner Blog - Paper on Pentesting and Red Teams is OUT!

As anticipated here, my short paper on pentesting and red teams is finally out. It was a fun paper to write, as it follows a new model for us GTP analysts: a faster research and writing cycle, producing a “to the point” paper. This one is about clarifying the roles of pentests, vulnerability assessments and red teams in a security program, including answers on when to use each and how to go about defining scope, selecting service providers, etc.

A few nice bits from the paper:

“Organizations still don’t have a clear understanding about the different security assessment types and when each one should be utilized. Penetration tests are often contracted by organizations expecting the type of results that would come from vulnerability assessments”

“The confusion about the different types of security assessments is the most common reason for dissatisfaction with test results. Assessments differ in many aspects, from objectives to methodologies and toolsets. Thus, understanding the differences between each type of assessment is crucial to properly select the most appropriate option for each case.”

On Vulnerability Assessments:

“Vulnerability assessments (VAs) are usually the best option for organizations looking to perform their first assessment. Performing a VA first allows an organization to find obvious technical issues, such as missing patches and poor configuration items, including default passwords.”

“A vulnerability assessment doesn’t involve exploiting vulnerabilities or trying to obtain sensitive data or privileges, so it shouldn’t be used to answer the “What could happen if someone tries to break in?” question (which is a typical question answered by a pentest).”

On Pentests:

“Pentests are mostly manual in nature because exploitation usually requires more human analysis. The test also involves moving from one asset to another while looking to achieve the test objectives, so identifying how to do it and which assets to attack is by nature a manual, creative and iterative activity. During some steps of the test, the assessor may rely on automated tools, but no penetration test can be completely automated from beginning to end.”

“Pentests are often requested by organizations to identify all vulnerabilities affecting a certain environment, with the intent to produce a list of “problems to be fixed.” This is a dangerous mistake because pentesters aren’t searching for a complete list of visible vulnerabilities. They are only looking for those that can be used toward their objective”

Red Teams:

“The real benefits from having a red team are primarily linked to its continuous operation. Apart from the findings of each exercise, a healthy competition with the red team can also be used to keep the blue team alert and engaged. Organizations planning to contract point-in-time exercises instead of a continuous service should keep in mind that the continuous planning, scenario and objectives definitions for the exercises will still have to be done internally. Otherwise, contracting a red team exercise will not be any different from procuring high-quality pentests.”

Which one to use? Go there and read the paper 😉

P.S. Don’t forget to provide your feedback here!

P.S.2. This is actually my first “solo” Gartner paper! Nevertheless, Dr. Chuvakin provided a lot of good insights and feedback too :-)


The post Paper on Pentesting and Red Teams is OUT! appeared first on Augusto Barros.

from Augusto Barros http://ift.tt/2o0bU1B

Friday, March 31, 2017

From my Gartner Blog - Pentesting and Red Teams

My current research is a quick clarification paper about penetration testing, which obviously includes a discussion about red teams. I noticed during my research that there are a few criteria generally used to differentiate red teams from regular penetration testing. They are:

  • Objective: Some will say penetration tests are for finding vulnerabilities, while red team exercises are for testing defense and response capabilities. I tend to disagree with this view, as I believe vulnerability assessments should be used if the primary goal is to find vulnerabilities, and I’ve seen (and have been part of) many pentests performed with the intent of testing defenses.
  • Scope and restrictions: Others will mention that pentests have well-defined scopes, while red teams can “do anything”. I also disagree with this notion, as I’ve seen some quite unrestricted pentests, and even red team exercises have some direction on focus and the methods to be used. The red team, in its continuing operation (more on this later, hold on), may have no restrictions or narrow scope, but each exercise is usually defined by a scope and an objective.
  • Point-in-time vs. continuing operation: Pentests are point-in-time exercises, while red teams run continuously through different exercises. OK, now I think we have something.

Of those three points, I think only the third, continuous operation, is a defining factor for a red team. The other two, IMO, can be seen as characteristics of specific ways to run a pentest, or even of just a “high-quality pentest”.

A red team should be a continuous operation that keeps the blue team on its toes. With continuous operations, the red team can pick the opportunities and scenarios that best fit the organization’s threat landscape at each moment, and also work together with the blue team to force it into a continuous improvement mode. This also answers a common question about when to implement a red team: continuous improvement is often a defining factor of the highest level in any maturity scale. So, it makes sense to assemble a red team (a continuous one, not a single “red team exercise”, which is just another pentest) when you are already at a reasonably high maturity level and want to move into continuous improvement territory.

So, does anyone out there strongly disagree with this definition? If so, why?


The post Pentesting and Red Teams appeared first on Augusto Barros.

from Augusto Barros http://ift.tt/2mW9MaD

From my Gartner Blog - SIEM Correlation is Overrated

During our research on UEBA tools, we noticed that these tools are gaining ground on SIEM solutions, with some organizations opting to focus their monitoring efforts on UEBA instead of SIEM. That raises the question: why?

The fact is, as much as we like to talk about it, event correlation on SIEM has always been overrated: weak and too simplistic. Most cases are basic boolean chaining of events: “if this AND this AND that OR that happens, alert”. There are not many cases where this type of correlation can be written in a way that isn’t checking for a very specific attack path, one of many thousands of possibilities. In other words, it is hard to generalize the use cases, so the organization needs to keep producing rules for specific cases, at the risk of drowning in false positives if it tries to make them more generic. In the end, SIEM use cases are mostly smarter filtering and aggregation.
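The boolean chaining described above can be sketched in a few lines. This is a hypothetical illustration, not any specific SIEM’s rule language; the event types and the five-failure threshold are made up for the example:

```python
# A sketch of boolean event chaining typical of SIEM correlation rules:
# the rule fires only for one very specific attack path.

def correlate(events):
    """Alert if the window shows a brute-force-then-exfiltration pattern:
    many failed logins AND a successful login AND an outbound transfer."""
    failed = sum(1 for e in events if e["type"] == "auth_failure")
    success = any(e["type"] == "auth_success" for e in events)
    exfil = any(e["type"] == "outbound_transfer" for e in events)
    # "if this AND this AND that happens, alert"
    return failed >= 5 and success and exfil

window = [
    *({"type": "auth_failure"} for _ in range(6)),
    {"type": "auth_success"},
    {"type": "outbound_transfer"},
]
print(correlate(window))  # True: this exact path fires the alert
```

Note how brittle this is: an attacker who succeeds on the second login attempt, or who exfiltrates through a channel the rule doesn’t name, slips through; covering each variation means writing yet another rule.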

Yes, more modern rule options are available. You can build rules with dynamic thresholds and some smarter anomaly detection, but it is still very simplistic when compared to the generalized models in UEBA tools. Those have fewer use cases, but with broader coverage of threats. If properly implemented, they are more effective.

Another key difference between UEBA tools and SIEM is that SIEM correlation is usually built to generate alerts for each use case. Potential threats are still looked at in isolation. Some SIEMs will aggregate things based on IP and time (think of the “offenses” concept from QRadar, for example), but the goal is aggregation and alert reduction, not correlation. UEBA tools, on the other hand, keep risk scores (I hate the term; there’s no “risk” there, but whatever) for entities such as endpoints and users, with each use case adding to the scores of the entities involved. The nice thing about scores is that they provide the ability to correlate things that may initially look unrelated. Different use cases involving a certain entity will raise its score to a level that makes the entity interesting and subject to investigation, without the need for an analyst to envision the possibility of those events being part of a single occurrence and implement that as a correlation rule.
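The entity-score idea can be sketched as follows. This is a minimal illustration of the concept, not any vendor’s implementation; the use-case names, weights and threshold are all invented for the example:

```python
# Entity scoring sketch: each use case adds weight to the score of the
# entities involved, instead of firing an isolated alert per rule.
from collections import defaultdict

scores = defaultdict(int)  # entity -> accumulated score
THRESHOLD = 50             # score at which an entity becomes interesting

def record_finding(entity, use_case, weight):
    """Accumulate a finding's weight on the entity's score."""
    scores[entity] += weight
    if scores[entity] >= THRESHOLD:
        print(f"{entity} crossed {THRESHOLD}; worth investigating")

# Three individually unremarkable findings, correlated only because they
# share an entity -- no analyst had to predict this combination upfront:
record_finding("user:jdoe", "rare_login_hour", 15)
record_finding("user:jdoe", "new_device", 15)
record_finding("user:jdoe", "unusual_data_volume", 25)
```

None of the three findings alone would justify an alert, but their accumulation on the same entity surfaces it for investigation, which is exactly what per-rule alerting misses.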

SIEM correlation is still useful, but we need to recognize its limitations and embrace the capabilities of newer tools such as UEBA to improve on it. As we’ve been saying, SIEM and UEBA are getting closer every day, so it’s just a matter of time before SIEMs move (or offer the option) to tracking issues based on entity scores. But if you want that now, you should look at UEBA tools.

A good start would be our “A Comparison of UEBA Technologies and Solutions”, which has just been published. If you read it, please don’t forget to provide feedback about it!

The post SIEM Correlation is Overrated appeared first on Augusto Barros.

from Augusto Barros http://ift.tt/2nrnOgD

Monday, November 28, 2016

From my Gartner Blog - Comparing UEBA Solutions

As Anton anticipated, we’ve started working on our next research cycle, now with the intent of producing a comparison of UEBA (User and Entity Behavior Analytics) solutions. We produced a paper comparing EDR solutions a few months ago, but so far the discussion on how to compare UEBA solutions has been far more complex (and interesting!).

First, while for EDR we focused on comparing how the tools fared against five key use cases, for UEBA the use case is basically always the same: detecting threats. The difference is not only in which threats should be detected, but also in how to detect the same threats. Many of these tools have some focus on internal threats (if you count “pseudo-internal” too, ALL of them focus on internal threats), and there are many ways you could detect those. A common example across these tools: detecting an abnormal pattern of resource access by a user. That could indicate that the user is accessing data he/she is not supposed to access, or even that credentials were compromised and are being used by an attacker to access data.

But things are even more complicated.

Did you notice that “abnormal pattern of resource access” there?

What does it mean? That’s where tools can do things in very different ways, arriving at the same (or at vastly different) results. You can build a dynamic profile of the things a user usually accesses and alert when something outside that list is touched. You can also do that considering additional variables for context, such as time, source (e.g., from desktop or from mobile), application and others. And why should we stop at profiling only the individual user? Should an access still be considered anomalous if the user’s peers usually access that resource? OK, but who are the user’s peers? How do you build a peer list? Point to an OU in AD? Or learn it dynamically by grouping together people with similar behaviors?
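To make the profiling idea concrete, here’s a toy sketch of the simplest variant: a per-user access history plus a static peer-group check, so an access is flagged only when it is new to the user AND unseen among peers. All the names, paths and the peer mapping are hypothetical, and a real tool would learn these profiles dynamically rather than hard-code them:

```python
# Toy resource-access profiling: per-user history plus a peer-group check.

user_history = {
    "alice": {"/finance/reports", "/hr/policies"},
    "bob": {"/finance/reports", "/finance/budget"},
}
peer_groups = {"alice": {"bob"}, "bob": {"alice"}}  # e.g., same AD OU

def is_anomalous(user, resource):
    if resource in user_history.get(user, set()):
        return False  # routine access for this user
    # New to this user -- but is it normal within the peer group?
    peers = peer_groups.get(user, set())
    peers_accessing = sum(
        1 for p in peers if resource in user_history.get(p, set())
    )
    return peers_accessing == 0  # anomalous only if no peer touches it

print(is_anomalous("alice", "/finance/budget"))      # False: a peer uses it
print(is_anomalous("alice", "/engineering/source"))  # True: new to all
```

Even this trivial version shows the design choices piling up: how the history window decays, how peers are learned rather than declared, and how much a single peer’s access should suppress an alert are all places where products diverge.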

(while dreaming about how we can achieve our goal with this cool “Machine Learning” stuff, let’s not forget you could do some of this with SIEM rules only…)

So, we can see how differently a single use case can be implemented by the various solutions. How do we define which is “better”? This is pretty hard, especially because there’s nothing like AV-TEST available to test these different methods (models, algorithms, rules… the taxonomy alone is crazy enough).

So what can we do about it? We need to talk to users of all these solutions and get data from the field about how they are performing in real environments. That’s OK. But after that, we need to figure out, for both good and bad feedback, how those results map to each solution’s feature set. If clients of solution X are happy about how great it is at detecting meaningful anomalies (oh, by the way, this is another thing we’ll discuss in a future blog post – which anomalies are just that, and which ones are meaningful from a threat detection perspective), we need to figure out what in X makes it good for that use case, so we can find which features and capabilities matter (and which are just noise and unnecessary fluff). Do I need to say we’ll be extremely busy in the next couple of months?

Of course, we could also use some help here; if you’ve been through a bake-off or a comparison between UEBA tools, let us know how you’ve done it; we’d love to hear that!

The post Comparing UEBA Solutions appeared first on Augusto Barros.

from Augusto Barros http://ift.tt/2fFIQDF

Friday, November 18, 2016

From my Gartner Blog - Deception Technologies – The Paper

After some very fun research, we’re finally publishing our paper on deception technologies:

Applying Deception Technologies and Techniques to Improve Threat Detection and Response
18 November 2016 | ID: G00314562
Augusto Barros | Anton Chuvakin

Summary: Deception is a viable option to improve threat detection and response capabilities. Technical professionals focused on security should evaluate deception as a “low-friction” method to detect lateral threat movement, and as an alternative or a complement to other detection technologies.

It was a very fun paper to write. We’ve been using and talking about honeypots and other deception techniques and technologies for ages, but it seems it’s finally time to use them in enterprise environments as part of a comprehensive security architecture and strategy. Here are some fun bits from the paper:

  • Many organizations report low-friction deployment, management and operation as the primary advantages of deception tools over other threat detection tools (such as SIEM, UEBA and NTA).
  • Improved detection capabilities are the main motivation of those who adopt deception technologies. Most have no motivation to actively engage with attackers, and cut access or interaction as soon as detection happens.
  • Test the effectiveness of deception tools by running a POC or a pilot on a production environment. Utilize threat simulation tools, or perform a quality penetration test without informing the testers about the deceptions in place.

(Overview of deception technologies – Gartner, 2016)

The corporate world has invested in many different technologies for threat detection. Yet, it is still hard to find organizations actively using deception techniques and technologies as part of their detection and response strategies, or for risk reduction outcomes.

However, with recent advances in technologies such as virtualization and software-defined networking (SDN), it has become easier to deploy, manage and monitor “honeypots,” the basic components of network-based deception, making deception techniques viable alternatives for regular organizations. At the same time, the limitations of existing security technologies have become more obvious, requiring a rebalance of focus from preventative approaches to detection and response.


Although a direct, fact-based comparison between the effectiveness of deception techniques and the effectiveness of other detection approaches does not exist, enough success reports do exist to justify including deception as part of a threat detection strategy.

The post Deception Technologies – The Paper appeared first on Augusto Barros.

from Augusto Barros http://ift.tt/2g3fjnR

Monday, October 17, 2016

From my Gartner Blog - So You Want To Build A SOC?

Now you can! But should you do it?

As anticipated here and here, our new paper about how to plan, design, operate and evolve a Security Operations Center is out!

This is a big doc with guidance for organizations intent on building their own SOC (or for those that have one and want to make it better :-)). One of the things we gave special attention to was the first question to be answered: do you need a SOC? It’s not as simple as it sounds, as the commitment of resources and the prerequisites, as the paper describes in detail, are quite significant. There are alternatives (namely service providers) out there that should really be considered before embarking on that journey.

Also, even if you are certain you want (and need) to do it, you most certainly won’t do it alone. One of our main findings in this paper is that most SOCs are in fact hybrid SOCs, with service providers filling competency gaps and providing resources that are usually not cost effective to have in house unless you are a very particular (and rare) type of organization.

Here are a few interesting pieces from the paper:

“Although most existing security operations centers (SOCs) are modeled as alert pipelines, a good SOC includes threat intelligence (TI) consumption and generation practices tied closely to incident response (IR) and hunting activities.”

“Modern SOCs should move beyond SIEM and include additional technologies (such as NFT, EDR, TIP, UEBA, and SIRP) to improve visibility, threat detection and IR capabilities.”

“Any organization establishing a SOC should have a plan for staff retention from the outset. Security skills are rare, and attrition from the intense operational work that is natural for a SOC makes hiring and retention key issues for keeping a SOC functional.”

“There is no such thing as a list of “tools a SOC must have.” Many SOCs make do with serious tool limitations by compensating the deficiencies with process, additional people, alternative technologies (think SharePoint instead of SOAR tools) or scripts. However, the chances of success of a SOC greatly improve when tools providing visibility, analysis, and action and management are present. Most SOCs (at a basic maturity level) operate with, at minimum, a SIEM for analysis and VA tools for visibility. As the maturity of the SOC increases, the need for additional tools becomes stronger. A basic SOC, for example, can simply detect some malicious activity on the SIEM and send an email to the CSIRT or even to the help desk for action. That might be enough for organizations that just remove infected computers from the network and reimage them. But if the intent is to learn about the real extent of an incident (and whether other computers and assets have been compromised) and extract data to be used to improve preventive and detective controls, additional visibility (e.g., EDR and NFT) and management (e.g., workflow and case management) tools will be necessary.”

The paper is available for Gartner GTP clients. However, I’d like to point out that Anton recently did a webinar based on this same research, which is available for free on Gartner’s website. Have fun watching it and don’t forget to provide us feedback 😉

The post So You Want To Build A SOC? appeared first on Augusto Barros.

from Augusto Barros http://ift.tt/2dW4uoD