Friday, March 31, 2017

From my Gartner Blog - Pentesting and Red Teams

My current research is a quick clarification paper about penetration testing, which will obviously include a discussion about red teams. During my research I noticed that a few items are generally used to differentiate red teams from regular penetration testing. They are:

  • Objective: Some will say penetration tests are for finding vulnerabilities, while red team exercises are for testing defense and response capabilities. I tend to disagree with this view, as I believe vulnerability assessments should be used if the primary goal is to find vulnerabilities, and I’ve seen (and have been part of) many pentests performed with the intent of testing defenses.
  • Scope and restrictions: Others will mention that pentests have well-defined scopes, while red teams can “do anything”. I also disagree with this notion, as I’ve seen some quite unrestricted pentests, and even red team exercises get some direction on focus and the methods to be used. The red team, in its continuing operation (more on this later, hold on), may have no restrictions or a narrow scope, but each individual exercise is usually defined by a scope and an objective.
  • Point in time vs. continuing operation: Pentests are point-in-time exercises, while red teams operate continuously and run different exercises over time. OK. Now I think we have something.

Of those three points, I think only the third, continuous operation, is a defining factor for a red team. The other two, IMO, can be seen as specific ways to run a pentest, or simply as the marks of a high-quality pentest.

A red team should be a continuous operation that keeps the blue team on its toes. With continuous operations, the red team can pick the opportunities and scenarios that best fit the organization’s threat landscape at each moment, and also work together with the blue team to force it into a continuous improvement mode. This also answers a common question about when to implement a red team: continuous improvement is often the defining factor of the highest level in any maturity scale. So, it makes sense to assemble a red team (a continuous one, not a single “red team exercise”, which is just another pentest) when you are already at a reasonably high maturity level and want to move into continuous improvement territory.

So, does anyone out there strongly disagree with this definition? If so, why?

 


From my Gartner Blog - SIEM Correlation is Overrated

During our research about UEBA tools, we noticed that these tools are gaining ground on SIEM solutions, with some organizations opting to focus their monitoring efforts on UEBA instead of SIEM. That raises the question: why?

The fact is, as much as we like to talk about it, event correlation on SIEM is overrated. SIEM correlation has always been weak and too simplistic. Most cases are basic boolean chaining of events: “if this AND this AND that OR that happens, alert”. There are not many cases where this type of correlation can be written so that it isn’t checking for one very specific attack path out of many thousands of possibilities. In other words, it is hard to generalize the use cases, so the organization needs to keep producing rules for specific cases, at the risk of drowning in false positives if it tries to make them more generic. In the end, SIEM use cases are mostly smarter filtering and aggregation.
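To make that concrete, here is a minimal sketch (hypothetical event fields and type names, not any particular SIEM’s rule language) of what such a boolean correlation rule boils down to:

```python
# A minimal sketch of "boolean chaining" correlation: alert when a failed
# login AND a new admin account AND (an outbound transfer OR a cleared log)
# are seen on the same host within a short window. Field and type names are
# made up for illustration.
from datetime import timedelta

WINDOW = timedelta(minutes=10)

def correlate(events):
    """events: list of dicts with 'host', 'time' (datetime) and 'type' keys."""
    alerts = []
    alerted_hosts = set()
    for e in events:
        # gather everything seen on the same host within the time window
        recent = [x for x in events
                  if x["host"] == e["host"] and abs(x["time"] - e["time"]) <= WINDOW]
        types = {x["type"] for x in recent}
        if (e["host"] not in alerted_hosts
                and {"failed_login", "admin_created"} <= types
                and {"outbound_transfer", "log_cleared"} & types):
            alerted_hosts.add(e["host"])
            alerts.append({"host": e["host"], "rule": "specific_attack_path"})
    return alerts
```

Note how the rule only matches that exact chain of events; any other path to the same outcome needs yet another hand-written rule.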

Yes, there are more modern rule options available. You can build rules with dynamic thresholds and some smarter anomaly detection, but they are still very simplistic compared to the generalized models in UEBA tools. Those tools have fewer use cases, but each one covers a broader set of threats. If properly implemented, they are more effective.
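As an illustration of how far a hand-built “dynamic threshold” rule goes, here is a minimal sketch (the daily event counts per entity and the three-standard-deviation cutoff are assumptions for illustration):

```python
# A minimal sketch of a dynamic-threshold rule: flag an entity whose event
# count today is far above its own recent baseline. Still one hand-built rule
# for one signal, not a generalized behavioral model.
from statistics import mean, stdev

def exceeds_dynamic_threshold(history, today_count, k=3.0):
    """history: past daily counts for one entity; alert if today's count is
    more than k standard deviations above the entity's own average."""
    if len(history) < 2:
        return False  # not enough baseline data to compare against
    baseline = mean(history)
    spread = stdev(history) or 1.0  # avoid a zero threshold on a flat history
    return today_count > baseline + k * spread

# Example: a user who normally generates ~100 events a day suddenly hits 400.
exceeds_dynamic_threshold([120, 98, 110, 105], 400)  # -> True
```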

Another key difference between UEBA tools and SIEM is that SIEM correlation is usually built to generate an alert for each use case. Potential threats are still looked at in isolation. Some SIEMs will aggregate things based on IP and time (think of the “offenses” concept from QRadar, for example), but the goal there is aggregation and alert reduction, not correlation. UEBAs, on the other hand, keep risk scores (I hate the term; there’s no “risk” there, but whatever) for entities such as endpoints and users, with each use case adding to the scores of the involved entities. The nice thing about scores is that they make it possible to correlate things that initially look unrelated. Different use cases involving a certain entity will raise its score to a level that makes the entity interesting and worth investigating, without an analyst having to envision the possibility of those events being part of a single occurrence and implement that as a correlation rule.
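To show what that looks like in practice, here is a minimal sketch of the entity-scoring idea (the use-case names, weights and review threshold are made up for illustration, not taken from any vendor):

```python
# A minimal sketch of entity risk scoring: each triggered use case adds its
# weight to the entity's running score, and only entities whose accumulated
# score crosses a threshold are surfaced for investigation.
from collections import defaultdict

USE_CASE_SCORES = {              # illustrative weights only
    "rare_logon_time": 10,
    "unusual_upload_volume": 25,
    "new_admin_privilege": 30,
}
REVIEW_THRESHOLD = 50

entity_scores = defaultdict(int)

def record_use_case(entity, use_case):
    """Add the use case's weight to the entity's (user or host) score."""
    entity_scores[entity] += USE_CASE_SCORES.get(use_case, 5)

def entities_to_investigate():
    """Entities whose accumulated score makes them worth an analyst's time."""
    return [e for e, s in entity_scores.items() if s >= REVIEW_THRESHOLD]

# Three individually unremarkable use cases on the same user add up:
for uc in ("rare_logon_time", "unusual_upload_volume", "new_admin_privilege"):
    record_use_case("user:jdoe", uc)
print(entities_to_investigate())  # -> ['user:jdoe']
```

None of these use cases would be worth an alert on its own, but the accumulated score makes the entity stand out.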

SIEM correlation is still useful, but we need to recognize its limitations and embrace the new capabilities of tools such as UEBA to improve on it. As we’ve been saying, SIEM and UEBA are getting closer every day, so it’s just a matter of time before SIEMs move (or offer the option) to tracking issues based on entity scores. But if you want that now, you should look at UEBA tools.

A good starting point is our “A Comparison of UEBA Technologies and Solutions”, which has just been published. If you read it, please don’t forget to provide feedback about it!
