Monday, December 4, 2017

From my Gartner Blog - Threat Detection Is A Multi-Stage Process

We are currently working on our SOAR research, as Anton has extensively blogged about. SOAR tools have been used to help organizations triage and respond to the deluge of alerts coming from tools such as SIEM and UEBA. Although this is sometimes seen as covering only the early stages of incident response, I’ve been increasingly seeing it as a way to implement “multi-stage threat detection”.

Let’s look at a basic use case for SOAR tools. Before the tool comes into play, there could be a playbook like this:

The SIEM performs basic correlation between a threat intelligence feed and firewall logs, generating an alert for every match (I know, many will argue it’s a bad use case example, but many orgs are actually doing it exactly like that). The SOC analyst would triage each of those alerts by identifying the internal workstation responsible for that traffic, checking it with an EDR tool, extracting additional indicators related to that network traffic (the binary file that initiated the connection request, for example) and submitting them to external validation services or sandboxes. If the result is positive, they would use the EDR tool to kill the process, remove the files from the endpoint and also search for the same indicators on other systems.

With the SOAR tool in place, the organization can automate almost everything performed by the analyst, effectively moving from minutes to seconds to execute all the actions above. The tool starts the playbook when an alert from the SIEM arrives, integrating with the EDR tool and the validation services. We could expand it even further to make it add the newly identified indicators to blacklists and firewall rules. Of course, corrective measures would be executed only after the analyst authorizes them.
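To make that concrete, here is a minimal sketch of what such an automated playbook could look like in Python. All the client objects and method names (edr.find_host_by_ip, validator.check and so on) are hypothetical stand-ins for whatever integrations a real SOAR product exposes, not actual product APIs:

```python
# Hypothetical SOAR playbook: triage a SIEM threat-intel match end to end.
# The edr, validator and case_queue objects are assumed integration clients.

def handle_siem_alert(alert, edr, validator, case_queue):
    # 1. Identify the internal workstation behind the suspicious traffic
    host = edr.find_host_by_ip(alert["source_ip"])

    # 2. Extract additional indicators, e.g. the binary that initiated the connection
    process = edr.process_for_connection(host, alert["dest_ip"], alert["dest_port"])
    binary_hash = process["binary_sha256"]

    # 3. Validate the indicator against external reputation services / a sandbox
    verdict = validator.check(binary_hash)

    if verdict["malicious"]:
        # 4. Queue corrective actions; they run only after analyst approval
        case_queue.put({
            "host": host["id"],
            "actions": [
                ("kill_process", process["pid"]),
                ("remove_file", process["image_path"]),
                ("sweep_fleet", binary_hash),  # hunt the same indicator elsewhere
            ],
            "needs_approval": True,
        })
```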

Now, let’s think about an alternative, hypothetical world:

Your SIEM is immensely powerful and fast. So, you send all the detailed endpoint telemetry collected by the EDR tool to it. You also download all the databases of the external validation services into it. Then, you build a monster correlation rule that crosses the TI feed and the EDR data (linking connection requests to processes and binaries) against that huge database of known malicious processes and binaries. Now you’re doing almost everything from the playbook above on the SIEM, in just one shot (ok, I’m cheating, the sandbox validation still needs a separate step… although the SIEM could have sandbox capabilities embedded; it is immensely powerful, remember?). No need for the playbook, or the SOAR tool, at all!

Unfortunately, there’s no such thing as a SIEM like that. That’s why we end up having this single detection use case implemented in multiple steps. If you think about it this way, you’ll see that the SIEM alert is not meant to be a final detection, subject to “false positives”. It’s just the first part of a multi-stage process, each stage looking at a smaller universe of “threat candidates”.

Thinking about detection as a multi-stage process unlocks interesting use cases that couldn’t be implemented as an “atomic decision model”. Detection use cases that would otherwise be discarded because of high false positive rates can be a good fit for a multi-stage process.

But multi-stage detection is not effective if done manually. Score-based correlation, as done by UEBA and some SIEM tools, can help link multiple atomic detection items, but situations where you need to query external systems (such as sandboxes), external services or big reference sets are still problematic. But SOAR comes to the rescue! Now you can have an automated pipeline that takes those initial detection cases (or even entities that hit a certain score threshold) and puts them through whatever validation and ad hoc queries you might need to turn them into “confirmed detections”: fully contextualized alerts.
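Here’s one way to picture that pipeline in code: a minimal, self-contained sketch where each stage either enriches a candidate or discards it, and each stage sees fewer candidates than the one before. The stage logic and thresholds are made up for illustration:

```python
# Multi-stage detection as a filter/enrich pipeline. Each stage returns an
# enriched candidate or None (discard); survivors are "confirmed detections".

def run_pipeline(candidates, stages):
    for stage in stages:
        candidates = [r for r in (stage(c) for c in candidates) if r is not None]
    return candidates

def ti_match(c):
    # Stage 1 (cheap, broad): the SIEM's TI feed vs. firewall log correlation
    return c if c.get("ti_hit") else None

def reputation_check(c):
    # Stage 2 (medium cost): external reputation lookup on survivors only
    return dict(c, reputation="bad") if c.get("score", 0) > 50 else None

def sandbox_validate(c):
    # Stage 3 (expensive): sandbox detonation, keep only confirmed candidates
    return dict(c, confirmed=True) if c.get("reputation") == "bad" else None

alerts = [{"id": 1, "ti_hit": True, "score": 80},
          {"id": 2, "ti_hit": True, "score": 10}]
print(run_pipeline(alerts, [ti_match, reputation_check, sandbox_validate]))
# -> only alert 1 survives, now fully contextualized
```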

Most of us would think of advanced automated response use cases, dynamically patching or removing things from the network, as the main way to get value from SOAR. Not necessarily. Making detection smarter is probably where most organizations will find the value of those tools.


Tuesday, November 28, 2017

From my Gartner Blog - Machine Learning or AI?

We may sound pedantic when we point out that we should be talking about Machine Learning, and not AI, for security threat detection use cases. But there is a strong reason why: to deflate the hype around it. Let me quickly mention a real-world situation where the indiscriminate use of those terms caused confusion and frustration:

One of our clients was complaining about the “real Machine Learning” capabilities of a UEBA solution. According to them, “it was just rule based”. What do you mean by rule based? Well, for them, having to tell the tool that it needs to detect behavior deviations in the authentication events of each individual user, based on the location (source IP) and on the time of the event, is not really ML but rule-based detection. I would say it’s both.

Yes, it is really a rule, as you have to define what type of anomaly (down to the data field – or “feature” – level) it should be looking for. So, you need to know enough about the malicious activity you are looking for to specify the type of behavior anomaly it will present.

But within this “rule”, how do you define what “an anomaly” is? That’s where the Machine Learning comes in. The tool has to automatically profile each individual user’s authentication behavior, focusing on the data fields specified from the authentication events. You just can’t do that with, let’s say, a “standard SIEM rule”. There is real Machine Learning being used there.
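A toy example helps show where the rule ends and the ML begins. The “rule” below is the decision to profile source IP and login hour per user; the “learning” is the baseline built from the events themselves. Real UEBA products use far richer statistical models; this frequency-based score is only an illustration:

```python
from collections import Counter, defaultdict

# Per-user baseline over the two features named by the "rule":
# source IP and hour of the authentication event.
profiles = defaultdict(lambda: {"ip": Counter(), "hour": Counter(), "n": 0})

def learn(user, source_ip, hour):
    p = profiles[user]
    p["ip"][source_ip] += 1
    p["hour"][hour] += 1
    p["n"] += 1

def anomaly_score(user, source_ip, hour):
    """0.0 = perfectly normal for this user, 1.0 = never seen before."""
    p = profiles[user]
    if p["n"] == 0:
        return 1.0  # no baseline yet
    ip_rarity = 1 - p["ip"][source_ip] / p["n"]
    hour_rarity = 1 - p["hour"][hour] / p["n"]
    return (ip_rarity + hour_rarity) / 2

# Baseline: alice always authenticates from 10.0.0.5 at around 9:00
for _ in range(100):
    learn("alice", "10.0.0.5", 9)

print(anomaly_score("alice", "10.0.0.5", 9))     # 0.0 -> normal behavior
print(anomaly_score("alice", "203.0.113.7", 3))  # 1.0 -> deviation worth flagging
```

Note that no static threshold like “alert on logins outside 9-to-5” was written anywhere; what counts as an anomaly falls out of each user’s own history.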

But what about AI – Artificial Intelligence? ML is a small subset of the field of knowledge known as AI. But the problem is that AI encompasses much more than just ML. And that’s what that client was expecting when they complained about the “rules”. We still need people to figure out those rules and write the ML models to implement them. There’s no machine capable of doing that – yet.

There have been some attempts based on “deep learning” (another piece of the AI domain), but nothing concrete exists. You can always point ML systems at all the data collected from your environment so they can point out anomalies, but you’ll soon find out there are far more anomalies unrelated to security incidents than some pixie-dust vendors would lead you to believe. Broad network-based anomaly detection has been around for years, but it hasn’t been able to deliver efficient threat detection without a lot of human work to figure out which anomalies are worth investigating.

Some UEBA vendors have decent ML capabilities, but they are not as good at defining the rules/models/use cases to apply them to. So, you may end up with good ML technology but mediocre threat detection capabilities if you don’t have good people writing the detection content. For those going down the “build your own” path, this is even more challenging, as you need the magical combination of people who understand threats and the type of anomalies they would create, and people who understand ML well enough to write the content to find them.

Isn’t that just like SIEM? Indeed, it is. People bought SIEM in the past expecting to avoid the IDS signature development problem. Now they are repeating the same mistake buying UEBA to avoid the SIEM rules development problem. Do you think it’s going to work this time?


Sunday, October 15, 2017

From my Gartner Blog - Our SIEM Assessment paper update is out!

The results of our “summer of SIEM” are starting to come out; our assessment document on SIEM (basically, a “what” and “why” paper that sits beside our big “how” doc on the same topic) has been updated. It has some quite cool new stuff aligned with some of our most recent research on security analytics, UEBA, SOC and other things that often touch or are directly related to SIEM.

Some cool bits from the doc:

“Organizations considering SIEM should realize that using an SIEM tool is not about procuring an appliance or software, but about tying an SIEM product to an organization’s security operations. Such an operation may be a distinct SOC or simply a team (for smaller organizations, a team of one) involved with using the tool. Purchasing the tool will also be affected by the structure and size of an organization security operation: While some SIEM tools excel in a full enterprise SOC, others enable a smaller team to do security monitoring better.”

“While some question SIEM threat detection value, Gartner views SIEM as the best compromise technology for a broad set of threat detection use cases. Definitely, EDR works better for detecting threats on the endpoints, while NTA promises superior detection performance on network traffic metadata. However, network- and endpoint-heavy approaches (compared to logs) suffer from major weaknesses and are inadequate unless you also do log monitoring. For example, many organizations dislike endpoint agents (hence making EDR unpalatable), and growing use of Secure Sockets Layer and other network encryption generally ruins Layer 7 traffic analysis.”

“UEBA vendors have been frequently mentioned as interesting alternatives due to their different license models. While most SIEM vendors base their price on data volumes (such as by events per second or gigabytes of data indexed), these solutions focus on the number of users being monitored irrespective of the amount of data processed. This model has been seen as a more attractive model for organizations trying to expand their data collection without necessarily changing the number of users currently being monitored. (Note that UEBA vendors offer user-based pricing even for tools addressing traditional SIEM use cases.) UEBA products have also been offered as solutions with lower content development and tuning requirements due to their promised use of analytics instead of expert-written rules. This makes them attractive to organizations looking for an SIEM tool but concerned with the resource requirements associated with its operation. The delivery of that promise will, however, strongly depend on the use cases to be deployed.”

As usual, please don’t forget to provide us feedback about the papers!

Next wave of research: SOAR, MSS and Security Monitoring use cases! Here we go :-)


From my Gartner Blog - Speaking at the Gartner Security Summit Dubai

I have a few sessions at the Gartner Security and Risk Management Summit in Dubai, October 16th and 17th. This is the wrap-up of the Security Summit season for me; I’ll be presenting some content that I already presented in DC and in São Paulo earlier this year. I also have a session on SOC that was originally presented by Anton at the other events. It’s my first time in Dubai and I’m excited to see the different perspectives the audience there may have on the problems we cover. My sessions there:

Workshop: Developing, Implementing and Optimizing Security Monitoring Use Cases
Mon, 16 Oct 2017 11:00 – 12:30
An extra reason to be excited about the use cases workshop: we’ll be updating our paper from 2016 on that topic! I’m expecting to get the attendees’ impressions of our framework and potential points to improve or expand.

Endpoint Detection and Response (EDR) Tool Architecture and Operations Practices
Mon, 16 Oct 2017 14:30 – 15:15

Industry Networking: FSI Sector: Responding to Changes in the Threat Landscape and the Risk Environment
Mon, 16 Oct 2017 16:30 – 17:30

How to Build and Operate a Modern SOC
Tue, 17 Oct 2017 10:30 – 11:15

Magic Quadrant: Security Information and Event Management
Tue, 17 Oct 2017 12:40 – 13:00


Wednesday, September 13, 2017

From my Gartner Blog - SOAR research is coming!

As Anton anticipated in this post, we’ll be writing about SOAR – Security Orchestration, Automation and Response – tools. Of course many people, seeing this coming from Gartner, will think: “oh great, here are those guys creating new fancy acronyms for silly markets with a bunch of VC-powered startups”. Yes, I agree that usually that’s the feeling. But let’s consider a few FACTS:

  • Some of these new vendors have already been acquired by big players such as FireEye (Invotas), Microsoft (Hexadite) and Rapid7 (Komand). So, it seems that what they are offering is interesting enough to be integrated into other security technologies out there.
  • We often complain about the lack of skilled manpower in security. Putting together SOC teams is a very common challenge. And whenever lack of manpower becomes an issue, AUTOMATION is a potential solution.
  • We also like to complain about the ever growing number of security tools being used by organizations. How can you properly integrate them so you can actually get the full value from them? You have tools to detect threats on the network, but you need to investigate those alerts on the affected endpoints using your EDR tool; with so many moving parts in place, some ORCHESTRATION is definitely required.
  • Finally, we also keep saying organizations are not reacting fast enough to incidents. Again, one of the most common ways to do things faster is streamlining processes (WORKFLOW) and leveraging AUTOMATION.

So, the need for the capabilities is there. We may argue that they should be embedded in current tools, or that they are not complex enough to require a new product, just a bunch of Python or PowerShell scripts. On the first point: yes, embedding could definitely help the integration, but if you use the automation capabilities of each tool individually you may end up with “automated spaghetti workflows”, which would become a nightmare to support, troubleshoot and maintain. A hub-and-spoke approach can help keep the complexity manageable, as the sketch below illustrates. What is that hub? SOAR! Can it be done purely with scripts? Well, I bet you can replicate a lot of these products’ capabilities with some clever scripting, but how many organizations have people to do that and want more code to support, troubleshoot and maintain?
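To illustrate the hub-and-spoke point, here’s a minimal sketch in Python. Every tool integration is a “spoke” registered with a single hub, so workflows reference capabilities rather than tools; the integrations shown are placeholders, not real product APIs:

```python
# Hub-and-spoke orchestration: one hub knows how capabilities map to tools,
# so workflows stay readable and tool swaps touch a single registration.

class OrchestrationHub:
    def __init__(self):
        self.spokes = {}  # capability name -> integration callable

    def register(self, capability, fn):
        self.spokes[capability] = fn

    def run(self, workflow, context):
        for step in workflow:  # a workflow is just a list of capability names
            context = self.spokes[step](context)
        return context

hub = OrchestrationHub()
hub.register("enrich_ip", lambda ctx: dict(ctx, ip_reputation="bad"))  # TI spoke
hub.register("isolate_host", lambda ctx: dict(ctx, isolated=True))     # EDR spoke

print(hub.run(["enrich_ip", "isolate_host"], {"host": "wks-042"}))
```

Replacing the EDR product then means re-registering one spoke, instead of hunting down every script that called the old tool directly.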

There are other interesting things related to SOAR that we want to explore: is this the new “single pane of glass” for the SOC? Does it make sense to leverage Machine Learning in these use cases? Are organizations looking for the glue only, or for content (playbooks) too? These are some of the things we have in mind for this exciting upcoming research project.

So, if you are a SOAR vendor, don’t forget to schedule a Vendor Briefing with us! You can find more details here.


Wednesday, August 2, 2017

From my Gartner Blog - Our new Vulnerability Assessment Tools Comparison is out!

Vulnerability assessment is usually seen as a boring topic, and most people think the scanners are all equal – having reached “commodity” status. Well, for basic scanning capabilities, that’s certainly true. But vulnerability scanners need to stay current with the evolution of IT environments; think of all the changes in corporate networks in the past 20 years due to virtualization, mobility, cloud, containers and others. Those things certainly affect vulnerability management programs and how we scan for vulnerabilities. These IT changes force scanners to adapt, and we end up seeing some interesting differences at the fringes. Our new document, “A Comparison of Vulnerability and Security Configuration Assessment Solutions”, compares the five leaders of this space (BeyondTrust, Qualys, Rapid7, Tenable and Tripwire) and shows how and where they differ.

Some of the capabilities where we found interesting differences are:

  • Agent-based scanning
  • Integration with virtualization platforms
  • Integration with IaaS cloud providers
  • Mobile device vulnerability assessment capabilities
  • VA on containers
  • Delivery models (on-prem, SaaS)

As we’ve been doing, please consider providing feedback on the paper; this helps us improve our research :-)


Thursday, July 27, 2017

From my Gartner Blog - SIEM, Detection & Response: Build or Buy?

As Anton has already blogged (many times) and tweeted about, we are working to refresh some of our SIEM research, and also on a new document about SaaS SIEM. This specific one has triggered some interesting conversations about who buys services and who buys products, and how that decision is usually made.

There are usually some shortcuts to find out if the organization should look, for example, for an MDR service or for a SIEM (and the related processes and team to manage/use it). They are usually related to the organization’s preference for relying on external parties or doing things internally, the availability of resources to manage and operate technology, or some weird accounting strategy that moves the needle towards capital investments or operational expenses. But what if there’s no shortcut? What if there’s really no preference for either path: how should an organization decide whether it should rely on services for threat detection and response, or build those capabilities internally? Making things more complicated, what if the answer is a bit of each? How do you define the right mix?

Initially I can see a few factors as key points for that decision:

  • Cost – Which option would be cheaper?
  • Flexibility – Which option would give me more freedom to change direction and put fewer restrictions on how things could/should be done?
  • Control – Which option gives me more control over the outcome and results?
  • Effectiveness – Which option will provide, for lack of a better word, “better” threat detection/response capabilities?
  • Time to value – Which option can be implemented and provide value faster?

(Yes, there are other factors, including the security of your own data, but many times those factors end up in the “shortcuts” category above. Stuff like “we don’t put our stuff in the cloud” makes the decision really easy, but that’s not the point here.)

Some of these factors have clear winners: time to value is almost always better with services, while doing everything yourself will obviously give you more control than any type of service.

Flexibility is more contentious. Services will be less flexible, as no service provider (apart from pure staff augmentation) will give you the option to define how every piece of the puzzle should work. However, building things and hiring people will often freeze your resources more than just paying a monthly services bill. If you build everything in a certain way and then decide to change everything, you’ll probably have to pay for some things twice. Moving from one service provider to another can be easier when contracts are made with flexibility in mind.

And what about the last point: which model will provide the best results? If you are a Fortune 100 company, you’ll probably be in a position, in terms of resources, context and requirements, to build something better than any service provider could do for you. But if you’re not in that category, the best service providers will probably be able to give you better capabilities than you would be able to build AND maintain; just think about the challenge of keeping a very good and motivated team together for more than a few months!

A simple framework for deciding between outsourcing and building in house could just look at those five factors, but you didn’t think the problem was that easy, right? Because the decision IS NOT BINARY! Today you can fully outsource your security operations, outsource some processes, or even keep processes and people and rely on tools provided in a SaaS model. The number of questions to ask yourself and factors to consider grows exponentially.
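If you do want to start from those five factors, a toy weighted scoring matrix shows the mechanics. Every weight and score below is invented for illustration, and real decisions obviously need more nuance than a single number:

```python
# Weighted scoring across the five factors; scores run 1 (worst) to 5 (best).
weights = {"cost": 0.25, "flexibility": 0.15, "control": 0.20,
           "effectiveness": 0.25, "time_to_value": 0.15}

options = {
    "build in house": {"cost": 2, "flexibility": 4, "control": 5,
                       "effectiveness": 3, "time_to_value": 1},
    "full MDR service": {"cost": 4, "flexibility": 2, "control": 2,
                         "effectiveness": 4, "time_to_value": 5},
    "SaaS SIEM + own team": {"cost": 3, "flexibility": 3, "control": 4,
                             "effectiveness": 3, "time_to_value": 4},
}

for name, scores in options.items():
    total = sum(weights[f] * s for f, s in scores.items())
    print(f"{name}: {total:.2f}")
```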

For now we are just looking at a very specific outsourcing point: the SIEM as a tool. We hope to build some type of decision framework as one of the outcomes of our current research, but I’d like to revisit the broader problem in the future. And you? How did you decide between building and buying your detection and response capabilities?


Wednesday, July 26, 2017

From my Gartner Blog - Presenting at the Gartner Security Summit Brasil 2017

(this post was originally in Portuguese…)

The Gartner Security & Risk Management Summit in São Paulo is almost here! I’m already in Brazil for the event, which takes place on August 8th and 9th. I have a few sessions across the two days of the event, including the opening keynote together with my colleagues Claudio Neiva and Felix Gaehtgens. Here they are:

Manage Risk, Build Trust and Embrace Change by Becoming Adaptive Everywhere
08/08/2017 – 09:15AM

Augusto Barros, Claudio Neiva, Felix Gaehtgens

In this opening keynote, Gartner will introduce a new chapter for information security, one that will transform every area of information security going forward. Building on Gartner’s adaptive security architecture vision, this keynote will extend the capability, and the need, to be continuously adaptive to all information security disciplines. This approach will be the only way information security can balance the rapidly changing demands of digital business with the need to protect the organization from advanced attacks, while maintaining acceptable levels of risk and compliance. We will explore this future vision and use real-world examples of how this mindset will apply to your information security and risk organization, processes and infrastructure.

Roundtable: Sharing Experiences with MSS and MDR Services
08/08/2017 – 13:45

Many organizations are relying on Managed Security Services (MSS) and Managed Detection and Response (MDR) to improve their security posture. The value of these services, however, is directly related to how the relationship with the provider is managed. This discussion will focus on best practices and potential pitfalls in contracting and using MSS and MDR services. Key questions:

• When does it make sense to rely on security service providers for threat detection and response?
• How do you decide between MSSP and in house?
• What are the common failure scenarios for each model?
• What are the best practices for managing the relationship with the service provider?

Applying Deception for Threat Detection and Response
08/08/2017 – 16:00

Deception is emerging as a viable option to improve threat detection and response capabilities. This presentation focuses on the use of deception as a “low-friction” method to detect lateral threat movement, and as an alternative or a complement to other detection technologies.

Workshop: Developing, Implementing and Optimizing Security Monitoring Use Cases
09/08/2017 – 09:15

This workshop will, through peer collaboration, focus on the implementation and optimization of security monitoring use cases. Participants will be guided through the Gartner framework to identify and refine their requirements in order to produce their own security monitoring use cases based on their current challenges and priorities.

Roundtable: Lessons Learned From Security Analytics Adventures
09/08/2017 – 13:45

Many organizations have ventured beyond SIEM and applied advanced analytics techniques and approaches to security. This roundtable is an opportunity for organizations with security analytics initiatives to share their findings and expose their current challenges in making it effective.
What are your current use cases?
What tools are being used?
What skills are involved (and needed)?


Thursday, June 22, 2017

From my Gartner Blog - Update to our Vulnerability Management Guidance Doc

Our updated Vulnerability Management Guidance document has just been published. It is a refinement of the guidance framework we created a couple of years ago. The focus of this update was to include additional information on the scope of VM programs, prioritization of vulnerabilities, and the use of mitigation actions when remediation cannot be applied. It is very pertinent considering the whole WannaCry thing that happened a few weeks ago.

Some interesting bits from the paper:

  • Scoping:

New technologies with a high number of devices being left out of the traditional VM processes may suggest that those processes are obsolete and about to be replaced by other approaches, such as mitigation and patch-independent controls (e.g., application whitelisting or isolation). It’s important to remember, however, that legacy IT and legacy approaches are here to stay. While cloud adoption, DevOps and other IT delivery disrupters are happening, IT inertia is a powerful force, and in many regards a large chunk of the future will look just like the past. Similarly, the “scan and patch” cycle is here to stay for a diminishing but still very large share of IT.

  • Prioritization:

The definition of a prioritization method for your organization depends on a few factors: from the size and complexity of the environment to the context data available. Prioritization must allow an organization to maximize the use of the available remediation and mitigation capacity and achieve maximum possible risk reduction. For example, if 1,000 vulnerabilities are found during the latest scan and there is IT operations bandwidth to fix 100 to 150 of them (depending on the specifics of the vulnerable systems), the main reason for prioritization would be to identify the set to be acted on to reduce the risk by aiming for reduced incident likelihood and reduced potential incident cost.

  • Mitigation actions:

Given that organizations today face multiple challenges with patching vulnerabilities in software and code running on various devices (ranging from printers to mobile phones to IoT devices), mitigation measures (also sometimes called “shielding”) are growing in importance.

[…]

 Mitigation measures are often defined as temporary solutions to be used until the vulnerability is remediated, but for some scenarios, they might end up being permanent solutions. For example, a web application developed by a contractor may have vulnerabilities that simply cannot be fixed by the organization, since the original contractor may not be available anymore. In this case, a web application firewall (WAF) may become a permanent mitigation measure. Some vendors even call this “virtual patching” to hint at a permanent nature for such “fixes” at some organizations.
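The prioritization excerpt above lends itself to a concrete illustration. Below is a minimal sketch of capacity-constrained prioritization: rank findings by a risk proxy and act only on what IT operations can absorb this cycle. The scoring formula is my own made-up example, not the method from the paper:

```python
# Rank vulnerabilities by a simple risk proxy, then take only as many as
# the remediation capacity allows. All numbers here are illustrative.

def risk_score(vuln):
    exploit_factor = 2.0 if vuln["exploit_available"] else 1.0
    return vuln["cvss"] * vuln["asset_criticality"] * exploit_factor

def prioritize(vulns, remediation_capacity):
    ranked = sorted(vulns, key=risk_score, reverse=True)
    return ranked[:remediation_capacity]  # the set to act on this cycle

scan_results = [
    {"id": "finding-1", "cvss": 9.8, "asset_criticality": 3, "exploit_available": True},
    {"id": "finding-2", "cvss": 7.5, "asset_criticality": 1, "exploit_available": False},
    {"id": "finding-3", "cvss": 6.1, "asset_criticality": 3, "exploit_available": True},
]

for v in prioritize(scan_results, remediation_capacity=2):
    print(v["id"], round(risk_score(v), 1))
```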

And as we’ve been doing for all our papers, please provide feedback with your thoughts/suggestions here.


Tuesday, April 18, 2017

From my Gartner Blog - Speaking at Gartner Security and Risk Mgmt Summit 2017

Another year, another Gartner Security and Risk Management Summit! The event will be in DC, June 12th to 15th. I’ll be presenting two sessions this year:

  • Endpoint Detection and Response (EDR) Tool Architecture and Operations Practices – June 12th, 10:30AM
    Increased complexity and frequency of attacks elevate the need for enterprise-scale incident response, broad investigations and endpoint threat detection that goes beyond malware. This presentation will cover how endpoint detection and response tools help organizations speedily investigate security incidents and detect malicious activities and behaviors. Key points covered in this session:
    • What are the top EDR use cases?
    • How to use EDR for threat detection.
    • What security processes are helped by EDR?
  • Applying Deception for Threat Detection and Response – June 14th, 9:45AM
    Deception is emerging as a viable option to improve threat detection and response capabilities. This presentation focuses on using deception as a “low-friction” method to detect lateral threat movement, and as an alternative or a complement to other detection technologies. This session will cover:
    • Should your organization utilize threat deception?
    • What tools and techniques are available for threat deception?
    • How to use deception to improve your current threat detection effectiveness.
    • How to customize and tune the deception controls.
    • What are the emerging operational practices around deception?

I also have a workshop and a roundtable together with Anton (who will be speaking about UEBA and SOC):

  • Workshop: Developing, Implementing and Optimizing Security Monitoring Use Cases – June 12th, 2:45PM
    This workshop will, through peer collaboration, focus on developing, implementing and optimizing security monitoring use cases. The participants will be guided through the Gartner framework to identify and refine their requirements to produce their own security monitoring use cases based on their current challenges and priorities.
  • Roundtable: Lessons Learned From Security Analytics Adventures – June 14th, 2:45PM
    Many organizations have been venturing beyond SIEM and applying advanced analytics techniques and approaches to security. This roundtable is an opportunity for organizations with security analytics initiatives to share their findings and expose their current challenges on how to make it effective.

If you’re planning to attend any of these sessions, please drop by and say ‘hi’. It’s always nice to meet the readers of the blog :-)


From my Gartner Blog - Paper on Pentesting and Red Teams is OUT!

As anticipated here, my short paper on pentesting and red teams is finally out. It was a fun paper to write, as it follows a new model for us GTP analysts: a faster cycle of research and writing, producing a “to the point” paper. This one is about clarifying the roles of pentests, vulnerability assessments and red teams in a security program, including answers on when to use each and how to work on defining scope, selecting service providers, etc.

A few nice bits from the paper:

“Organizations still don’t have a clear understanding about the different security assessment types and when each one should be utilized. Penetration tests are often contracted by organizations expecting the type of results that would come from vulnerability assessments”

“The confusion about the different types of security assessments is the most common reason for dissatisfaction with test results. Assessments differ in many aspects, from objectives to methodologies and toolsets. Thus, understanding the differences between each type of assessment is crucial to properly select the most appropriate option for each case.”

On Vulnerability Assessments:

“Vulnerability assessments (VAs) are usually the best option for organizations looking to perform their first assessment. Performing a VA first allows an organization to find obvious technical issues, such as missing patches and poor configuration items, including default passwords.”

“A vulnerability assessment doesn’t involve exploiting vulnerabilities or trying to obtain sensitive data or privileges, so it shouldn’t be used to answer the “What could happen if someone tries to break in?” question (which is a typical question answered by a pentest).”

On Pentests:

“Pentests are mostly manual in nature because exploitation usually requires more human analysis. The test also involves moving from one asset to another while looking to achieve the test objectives, so identifying how to do it and which assets to attack is by nature a manual, creative and iterative activity. During some steps of the test, the assessor may rely on automated tools, but no penetration test can be completely automated from beginning to end.”

“Pentests are often requested by organizations to identify all vulnerabilities affecting a certain environment, with the intent to produce a list of “problems to be fixed.” This is a dangerous mistake because pentesters aren’t searching for a complete list of visible vulnerabilities. They are only looking for those that can be used toward their objective”

Red Teams:

“The real benefits from having a red team are primarily linked to its continuous operation. Apart from the findings of each exercise, a healthy competition with the red team can also be used to keep the blue team alert and engaged. Organizations planning to contract point-in-time exercises instead of a continuous service should keep in mind that the continuous planning, scenario and objectives definitions for the exercises will still have to be done internally. Otherwise, contracting a red team exercise will not be any different from procuring high-quality pentests.”

Which one to use? Go there and read the paper 😉

P.S. Don’t forget to provide your feedback here!

P.S.2. This is actually my first “solo” Gartner paper! Nevertheless, Dr. Chuvakin provided a lot of good insights and feedback too :-)


Friday, March 31, 2017

From my Gartner Blog - Pentesting and Red Teams

My current research is a quick clarification paper about penetration testing, which will obviously include a discussion about red teams. I noticed during my research that there are a few items generally used to differentiate between red teams and regular penetration testing. They are:

  • Objective: Some will say penetration tests are for finding vulnerabilities, while red team exercises are to test defense and response capabilities. I tend to disagree with this view, as I believe vulnerability assessments should be used if the primary goal is to find vulnerabilities, and I’ve seen (and have been part of) many pentests performed with the intent of testing defenses.
  • Scope and restrictions: Others will mention that pentests have well-defined scopes, while red teams can “do anything”. I also disagree with this notion, as I’ve seen some quite unrestricted pentests, and even red team exercises have some direction on focus and the methods to be used. The red team, in its continuing operation (more on this later, hold on), may have no restrictions or narrow scope, but each exercise is usually defined by a scope and objective.
  • Point in time vs. continuing operation: Pentests are just point-in-time exercises, while red teams are continuous and run different exercises. Ok. Now I think we have something.

Of those three points, I think only the third, continuous operation, is a defining factor for a red team. The other two, IMO, can be seen as specific ways to run a pentest, or even just as a “high-quality pentest”.

A red team should be a continuous operation to keep the blue team on its toes. With continuous operations, the red team can pick the opportunities and scenarios that best fit the threat landscape of the organization at each moment, and also work together with the blue team to force it into a continuous improvement mode. This also answers a common question about when to implement a red team: continuous improvement is often a defining factor of the highest maturity level in any maturity scale. So, it makes sense to assemble a red team (a continuous one, not a single “red team exercise”, which is just another pentest) when you are already at a reasonably high maturity level and want to move into continuous improvement territory.

So, does anyone out there strongly disagree with this definition? If so, why?


From my Gartner Blog - SIEM Correlation is Overrated

During our research on UEBA tools, we noticed that these tools are gaining ground on SIEM solutions, with some organizations opting to focus their monitoring efforts on UEBA instead of SIEM. That raises the question: why?

The fact is, as much as we like to talk about it, event correlation on SIEM has always been overrated. SIEM correlation has always been weak, too simplistic. Most cases are basic boolean chaining of events: “if this AND this AND that OR that happens, alert”. There are not many cases where this type of correlation can be written in a way that it’s not checking for a very specific attack path, one of many thousands of possibilities. In other words, it is hard to generalize the use cases, so the organization needs to keep producing rules for specific cases, at the risk of drowning in false positives if it tries to make the rules more generic. In the end, SIEM use cases are mostly smarter filtering and aggregation.

Yes, there are more modern rule options available. You can build rules with dynamic thresholds that do some smarter anomaly detection, but it is still very simplistic compared to the generalized models of UEBA tools. Those have fewer use cases, but with broader coverage of threats. If properly implemented, they are more effective.

Another key difference between UEBA tools and SIEM is that SIEM correlation is usually built to generate alerts for each use case. Potential threats are still looked at in isolation. Some SIEMs will aggregate things based on IP and time (think of the “offenses” concept from QRadar, for example), but the goal is aggregation and alert reduction, not correlation. UEBAs, on the other hand, keep risk scores (I hate the term; there’s no “risk” there, but whatever) for entities such as endpoints and users, with the use cases adding to the scores of the involved entities. The nice thing about scores is that they provide the ability to correlate things that may initially look unrelated. Different use cases involving a certain entity will raise its score to a level that makes the entity interesting and subject to investigation, without the need for an analyst to envision the possibility of those events being part of a single occurrence and implement that as a correlation rule.
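A toy contrast makes the difference visible: the boolean rule below fires only when one pre-imagined chain of events occurs, while the score-based approach lets any mix of weaker, seemingly unrelated signals accumulate against the same entity. Event types, weights and the threshold are all invented for the example:

```python
events = [
    {"entity": "bob", "type": "odd_login_hour", "weight": 20},
    {"entity": "bob", "type": "new_country_ip", "weight": 30},
    {"entity": "bob", "type": "large_upload", "weight": 40},
    {"entity": "eve", "type": "odd_login_hour", "weight": 20},
]

# SIEM-style boolean correlation: one specific, pre-written chain of events
def boolean_rule(entity_events):
    types = {e["type"] for e in entity_events}
    return {"failed_logins", "priv_escalation", "large_upload"} <= types

# UEBA-style entity scoring: every use case adds to the entity's score
def entities_over_threshold(all_events, threshold=80):
    scores = {}
    for e in all_events:
        scores[e["entity"]] = scores.get(e["entity"], 0) + e["weight"]
    return {k: v for k, v in scores.items() if v >= threshold}

bob_events = [e for e in events if e["entity"] == "bob"]
print(boolean_rule(bob_events))         # False: nobody wrote this exact chain
print(entities_over_threshold(events))  # {'bob': 90}: the score surfaces him anyway
```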

SIEM correlation is still useful, but we need to recognize its limitations and embrace the new capabilities of tools such as UEBA to improve on it. As we’ve been saying, SIEM and UEBA are getting closer every day, so now it’s just a matter of time before SIEMs move (or offer the option) to tracking issues based on entity scores. But if you want that now, you should look at UEBA tools.

A good start would be our “A Comparison of UEBA Technologies and Solutions”, which has just been published. If you read it, please don’t forget to provide feedback about it!
