Monday, August 8, 2016

From my Gartner Blog - Arriving at a Modern SOC Model

While writing our new (and exciting) research on “how to build a SOC”, we came to the conclusion that a modern SOC has some interesting differences from the old vanilla SOC that most organizations have in place. In essence, the difference is the inclusion of Threat Intelligence and Hunting/Continuous IR activities. The way a traditional SOC operates is more or less like this:

[Figure soc_1: the traditional SOC model]

While the “newer” model is something like:

[Figure soc_2: the modern SOC model]

So far, this is not surprising or particularly exciting. That’s just plain evolution. Now, this becomes more interesting when you start to work on guidance for organizations that are planning to build their (new) SOC right now. Should they plan to build it as a modern SOC, or should they build it as a traditional SOC and then move to the modern model as it matures?

So far we haven’t seen substantial evidence to back either of those two options. I can see how “building it the right way” would make sense, as you don’t want to waste resources planning and writing processes twice, and there is no point in building a less effective model when you know there is a better way to do things. But the modern model also requires more resources (people and tools). Some of those newer processes are also frequently seen only in organizations with mature security operations. Can they be performed by those that are not as mature? Do those processes actually work in immature organizations? This is a “do it right the first time” versus a “walk, then run” discussion.

Do you happen to have experience with a mature modern SOC? If so, how did you arrive there? Was it built like that or did it evolve from the traditional model? It would be even more interesting to hear from people with FAIL stories from one of those two approaches. Don’t be shy, let us hear your stories :-)

The post Arriving at a Modern SOC Model appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2aGNcdf
via IFTTT

Friday, July 8, 2016

From my Gartner Blog - Are Security Monitoring Alerts Becoming Obsolete?

If I ask anyone working in a SOC for a high-level description of their monitoring process, the answer will most likely look like this:

“The SIEM generates an alert, the first level analyst validates it and sends it to the second level. Then…”

Most SOCs today work by having their first level analysts – the most junior analysts, usually assigned to be the 24×7 eyes on console – parse the alerts generated by their security monitoring infrastructure and decide whether each one needs action by the more experienced/skilled second level. There is usually some prioritization of the alerts through the assignment of severity levels, reminiscent of old syslog severity labels such as CRITICAL, WARNING, INFORMATIONAL, DEBUG.

Most SOCs will have far more alerts being generated than manpower to address all of them, so they usually put rules in place such as “address all HIGHs immediately, address as many of the MEDIUMs as we can, don’t touch the LOWs”. It is certainly prioritization 101, but what happens when there are too many critical/high alerts? Should they prioritize inside that group as well? Also, if many medium or low severity alerts are being generated about the same entity (an IP, or a user), isn’t that something that should be bumped up in the prioritization queue? Many teams and tools try to address those concerns in one way or another, but I have the impression that this entire model is showing signs of decline.
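To make that limitation concrete, here is a toy sketch (all alert data is hypothetical) of a purely severity-ordered queue: repeated low-severity alerts against the same entity never move up, no matter how many there are.

```python
# Severity-only triage: sort the queue by a fixed severity rank.
SEVERITY_RANK = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

alerts = [
    {"entity": "user-7", "severity": "LOW"},
    {"entity": "host-3", "severity": "HIGH"},
    {"entity": "user-7", "severity": "LOW"},
    {"entity": "user-7", "severity": "LOW"},
]

queue = sorted(alerts, key=lambda a: SEVERITY_RANK[a["severity"]])
# host-3 is first; the three LOWs on user-7 stay at the bottom of the queue
print(queue[0]["entity"])  # host-3
```

The accumulation of alerts against user-7 is invisible to this ordering, which is exactly the gap the score-based model discussed below this point tries to close.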

If we take a careful look at the interfaces of the newest generation of security tools, we will notice that alerts are no longer the entities listed on the primary screens; what most tools do now is consolidate the many different generated alerts into a numeric score for each entity, most commonly users and endpoints. Most tools call those scores “risk scores” (which is awful and confusing, as they usually have nothing to do with “risk”). The idea is to show on the main screen the entities with the top scores – those with the most signs of security issues linked to them – so the analyst can click on one of them and see all the reasons, or alerts, behind the high score. This automatically addresses both the issue of prioritizing among the most critical alerts and the concern about multiple alerts against a single entity.

For a SOC using a score-based view, the triage process could be adapted in two different ways. In the first, the highest scores are addressed directly by the second level, removing the first level pre-assessment and allowing a faster response to whatever is most likely to be a serious issue, while the first level works on a second tier of scores. In the second, the first level does the initial parsing as before, but keeps picking entities from the top of the list and works as far down it as it can, sending the cases that require further action to the second level (which can apply the same approach to the cases forwarded by the L1).

This may look like a simple change (or, for the cynics, no change at all), but using scores can really be a good way to improve the prioritization of SOC efforts. But scores are not only useful for that. They are also a mechanism to improve correlation of security events, usually coming from different security monitoring systems or even from SIEM correlation rules.

What we normally see as security event correlation is something like “if you see X and Y, alert”, or “if you see n times X then Y, alert”. Recently many correlation rules have been created trying to reflect the “attack chain”: “if you find a malware infection event, followed by a payload download and an established C&C, alert”. The issue with that is that you need very good normalization of the existing events to keep the number of rules at an acceptable level (you don’t want to write a rule for every combination involving an event related to C&C detection, for example). You could also miss attacks where the observed events do not follow the expected attack chain.
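Such a rule is essentially a fixed sequence match over normalized events. A minimal sketch (the event type names are hypothetical) shows why it is brittle: only the exact pre-defined order fires.

```python
# An attack-chain correlation rule as a subsequence match over an entity's
# normalized event stream. The chain and event names are illustrative.
CHAIN = ["malware_infection", "payload_download", "c2_established"]

def chain_matched(events, chain=CHAIN):
    """True if the events contain the chain steps in order (as a subsequence)."""
    it = iter(events)
    return all(any(e == step for e in it) for step in chain)

print(chain_matched(["malware_infection", "payload_download", "c2_established"]))  # True
print(chain_matched(["c2_established", "malware_infection"]))  # False: wrong order
```

An attacker whose events arrive in a different order, or through a channel the rule author did not anticipate, simply never matches.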

The improvement in prioritization comes from the fact that in this new model every event increments the scores of the associated entities by a certain discrete amount. Any time a new event or event type is defined within the system, the number of points to be added to the score is determined. Smarter systems could even define those points dynamically according to attributes of the event (more points for a data exfiltration event when a larger data transfer is detected). The beauty of the score model is that scores go up (and eventually hit a point where the entity becomes a target for further investigation) through any combination of events, with no need to envision the full attack chain in advance and describe it in a correlation rule. This is how most modern UEBA (User and Entity Behavior Analytics) tools work today: a set of interesting anomalies is defined within the system (either through pre-packaged content or by the users) and every time they are observed the scores of the affected entities are incremented.
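A minimal sketch of that scoring model, with illustrative point values and an illustrative investigation threshold:

```python
# Entity scoring: each event type carries a fixed number of points; an entity
# crossing the threshold becomes a target for investigation. Values are made up.
from collections import defaultdict

EVENT_POINTS = {"payload_download": 20, "c2_established": 40,
                "data_exfiltration": 50}
THRESHOLD = 80

scores = defaultdict(int)

def ingest(entity, event_type):
    """Add the event's points to the entity's score; True once over threshold."""
    scores[entity] += EVENT_POINTS.get(event_type, 0)
    return scores[entity] >= THRESHOLD

ingest("host-42", "payload_download")             # score: 20
ingest("host-42", "c2_established")               # score: 60
flagged = ingest("host-42", "data_exfiltration")  # score: 110 -> flagged
```

Note that no ordering is imposed: any combination of events that accumulates enough points surfaces the entity, without a rule describing the full chain.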

Here is a nice example of a UEBA tool interface using scores:

[Figure scores: a UEBA tool interface ranking entities by score]

Score-based monitoring systems can improve even further. Feedback from the analysts could be used to dynamically adapt the scores assigned to each event or event type, using something like Naive Bayes, for example. We’ve been doing that for spam filtering for ages.
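One way that feedback loop could look, sketched loosely after a Naive Bayes spam filter (the event type names and counts are hypothetical): confirmed incidents raise an event type’s weight, dismissed cases lower it.

```python
# Re-weighting event types from analyst verdicts, spam-filter style:
# weight = smoothed log-likelihood ratio of "seen in real incidents" vs
# "seen in dismissed cases". All data below is illustrative.
import math
from collections import Counter

true_pos = Counter()   # event type -> occurrences in confirmed incidents
false_pos = Counter()  # event type -> occurrences in dismissed cases

def record_feedback(event_types, confirmed):
    """Count a closed case's event types as true- or false-positive evidence."""
    (true_pos if confirmed else false_pos).update(event_types)

def weight(event_type):
    """Log-likelihood ratio with +1 smoothing; higher = more incident-like."""
    p_t = (true_pos[event_type] + 1) / (sum(true_pos.values()) + 2)
    p_f = (false_pos[event_type] + 1) / (sum(false_pos.values()) + 2)
    return math.log(p_t / p_f)

record_feedback(["c2_established", "payload_download"], confirmed=True)
record_feedback(["failed_login"], confirmed=False)
print(weight("c2_established") > weight("failed_login"))  # True
```

Over time, event types that analysts keep dismissing contribute less to entity scores, without anyone hand-tuning point values.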

The score trend is already clear on the technology side; SOC managers and analysts should review their processes and training to get the full benefits of that approach. How do you see your organization adopting it? Feasible? Just a distant dream? Maybe you think it doesn’t make sense?

Of course, if your SOC is already working with a score-based approach, I’d also love to hear about that experience!

The post Are Security Monitoring Alerts Becoming Obsolete? appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/29v1CPt
via IFTTT

Wednesday, July 6, 2016

From my Gartner Blog - What It’s Like to Use Non-MRTI Threat Intelligence

We often hear clients asking about threat intelligence related processes: how to collect, refine and utilize it (by the way, this document is being updated; let us know if you have feedback about it!). It’s very easy to explain and visualize when we are talking about machine readable TI (MRTI for short); your tools ingest the data feed and look for the IOCs in your environment. But what about the other type of threat intelligence, the “Non-MRTI” type?

Here’s a simple example. Take a look at this post from the McAfee Labs Blog. It is a nice explanation of a somewhat new exploitation technique used by malware they recently analyzed. This is a typical “TTP” (Tactics, Techniques and Procedures) piece of TI (and by the way… did you notice it’s FREELY AVAILABLE?). It describes threat behavior. Of course, it would be more valuable if there were more information linking it to threat actors, campaigns, etc., but it is valuable nevertheless. But coming back to the point of this post: why am I talking about it?

Because you can use it to check where you are in terms of processes for leveraging this kind of TI. Try to answer, for example, some of these questions:

  • Do I have people looking for and reading this type of information?
  • Do I have a process that takes this type of information and turns it into actionable advice for my security operations?

With that you can see if the basic processes are in place; you can further extend this small self-assessment with more detailed questions such as:

  • Would this technique work in my environment?
  • Am I currently prepared (in terms of tools and monitoring use cases) to detect this?
  • If not, what changes do I need to do on my environment and tools to detect it?

Some people expect some ethereal process or method when we talk about consuming TI; there’s nothing special about it. If you can answer “yes” to all, or even some, of the questions above, you’re already doing it. Of course, there are different maturity levels, types of TI and sources of information, but all of that can evolve over time. So, if you are thinking about your capabilities to consume TI, take a look at the example above. It might give you some interesting insights.
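If it helps, the self-assessment above can even be captured as a simple record per TTP report consumed; the fields and example values below are a purely illustrative sketch, not a prescribed format.

```python
# A per-report record of the non-MRTI TI self-assessment questions.
# Field names and example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TTPAssessment:
    source: str
    technique: str
    applies_to_environment: bool  # Would this technique work here?
    can_detect_today: bool        # Do current tools/use cases cover it?
    actions: list = field(default_factory=list)  # changes needed if not

report = TTPAssessment(
    source="vendor research blog post",
    technique="new exploitation technique",
    applies_to_environment=True,
    can_detect_today=False,
    actions=["add a monitoring use case", "tune endpoint telemetry"],
)
print(report.can_detect_today)  # False
```

Even a lightweight record like this turns “we read the report” into actionable advice for security operations, which is the process gap the questions probe.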

The post What It’s Like to Use Non-MRTI Threat Intelligence appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/29i8dbB
via IFTTT

From my Gartner Blog - Coming to Sao Paulo for the Gartner Security Summit

I’m very excited to come back to São Paulo for the Gartner Security and Risk Management Summit in August. On August 2nd and 3rd I’ll have a packed schedule there, including a shared keynote with analysts Claudio Neiva and Felix Gaehtgens. The other sessions I’ll be delivering during the event are (titles and descriptions translated from the original Portuguese):

Tue, 2 Aug 13:45 – 14:30 – The World Is Changing: How Does That Affect My Vulnerability Management Program?
This session covers the effects of the latest technology trends, such as the expansion of cloud and the use of mobility, on Vulnerability Management programs. The challenges these trends pose to VM (Vulnerability Management) processes are discussed, as well as strategies for dealing with their implications and adapting the program to keep risk under control. Key topics: How to perform vulnerability assessment in cloud, mobile and IoT environments? How to integrate vulnerability management with DevOps practices? How to handle the continuous growth in the amount of vulnerability data generated by vulnerability assessments?
Wed, 3 Aug 09:15 – 10:45 – Developing, Implementing and Optimizing Security Monitoring Use Cases
This workshop will focus, through peer collaboration, on the implementation and optimization of security monitoring use cases. Participants will be guided through the Gartner framework to identify and refine their requirements in order to produce their own security monitoring use cases based on their current challenges and priorities. Key questions: How to select security monitoring use cases? How to prioritize use cases for implementation? How to optimize security monitoring use cases?
Wed, 3 Aug 13:45 – 14:30 – Roundtable: Building and Maintaining an Effective Vulnerability Management Program
Participants should bring their experiences with vulnerability management and the challenges they have faced in making it an effective security measure. The roundtable will discuss potential solutions and how effective vulnerability management programs can address those challenges and remain relevant as part of the overall security control framework.
Wed, 3 Aug 15:45 – 16:30 – Developing Security Monitoring Use Cases: How to Execute Well
Security monitoring systems, such as SIEM, are only effective when the appropriate content is implemented and optimized to deliver results aligned with the organization’s main risks and needs. This session presents the Gartner framework for identifying, prioritizing, implementing and optimizing security monitoring use cases. Key questions: How to select security monitoring use cases? How to prioritize use cases for implementation? How to optimize security monitoring use cases?
If you are coming to any of these sessions, please come and say “Olá” :-)

The post Coming to Sao Paulo for the Gartner Security Summit appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/29OvCDm
via IFTTT

Tuesday, July 5, 2016

From my Gartner Blog - The EDR Comparison Paper is Out!

This is old news, but the paper was published right before the maelstrom of the Gartner Security Summit. The paper compares the EDR solutions of 10 vendors (those most visible to Gartner, based on the number of inquiry calls specifically about EDR):

  • Carbon Black Enterprise Response
  • Cisco Advanced Malware Protection for Endpoints
  • Confer
  • CounterTack
  • CrowdStrike Falcon
  • Cybereason
  • FireEye Endpoint Security (HX Series)
  • Guidance Software’s EnCase Endpoint Security
  • RSA, The Security Division of EMC, Enterprise Compromise Assessment Tool (ECAT)
  • Tanium

The paper includes two major comparisons: a view of EDR tool capabilities based on our previous paper on the subject, and another on how well each of those tools supports the 5 EDR use cases (also identified in the previous paper):

  • Incident data search and investigation
  • Suspicious activity detection
  • Alert triage or suspicious activity validation
  • Threat hunting or data exploration
  • Stopping malicious activity

The details of the criteria used for that comparison, as well as the results, can be found in the paper (Gartner GTP subscription required). However, I can highlight a few of the key findings from our research:

  • Endpoint detection and response (EDR) vendors are often competing for the same budget used for endpoint protection platforms (EPPs) and other endpoint security tools, as well as for advanced threat and IR budgets, if available.
  • EDR is not a replacement for other endpoint security tools; it is often a detection and visibility complement to other tools providing endpoint security capabilities.
  • On end-user devices, Mac OS support is becoming more common, but some EDR solutions still don’t support it. Support for mobile devices is even more complicated and almost nonexistent.

You can also see Anton’s posts about our recent EDR research.

The post The EDR Comparison Paper is Out! appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/29k6k1g
via IFTTT

From my Gartner Blog - Notes From My First Security Summit

I’ve finally found some time to collect my notes and impressions from my first Gartner Security and Risk Management Summit, back in June. I delivered one full session on Vulnerability Management and a short debate session with Anton Chuvakin about outsourcing security operations. We also hosted a roundtable on Vulnerability Management and a workshop on developing security monitoring use cases. On top of that, there were many one-on-one meetings with attendees and vendor meetings. Yes, it was a very busy week!

For those who went to the event but couldn’t catch the sessions, they are available on Gartner Events on Demand. If you find time to watch them, feel free to provide feedback in this space too, ok?

Some of my notes from the summit pointed to a couple of trends that I thought would be interesting to share:

  • Many midsize organizations are still in “we’re just starting now” mode: yes, it’s 2016, but there are still organizations out there taking their first steps on a security program. It’s interesting to see some common themes among them: challenges in dealing with MSSPs, how to measure the results of their programs, and finding the appropriate skills for the team.

  • Vulnerability scan results are still showing too many inconsistencies: yes, it’s 2016 (again) and we’re still seeing many organizations complaining that the results of their VA tools are not reliable and are often plagued with false positives. This is an interesting result of a “market for lemons” scenario: it’s too hard for organizations to compare the quality of the results from the scanners available on the market, so there’s no incentive for vendors to improve in that regard. If you are a VA tool vendor struggling to differentiate from the pack, pay attention to this: find a good way to prove your results are more reliable; there are organizations out there that could see it as a big enough reason to switch from their current solution.

The next event I’ll be presenting at is in early August, the security summit in São Paulo. It’ll be fun to meet some old friends there, and a chance to dust off my Portuguese. Hope to see some of you there.

The post Notes From My First Security Summit appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/29k2O78
via IFTTT

Thursday, May 19, 2016

From my Gartner Blog - Our first EDR paper is OUT!

It’s almost impossible to get ahead of Dr. Chuvakin in blog posts announcing new research, but I’m lucky enough that he is driving at this precise moment and not able to do it before me :-)

Our first of two Endpoint Detection and Response papers, “Endpoint Detection and Response Tool Architecture and Practices”, is out.

This document should be the “starting point” for anyone trying to understand what EDR tools are, what they should be used for and what to consider before implementing this technology. Key EDR use cases are incident-related search and investigation, suspicious activity detection, alert triage and validation, threat hunting, and stopping malicious activity.

Things you can find in this paper:

  • EDR Definition
  • EDR Key Capabilities
  • Why did EDR tools appear?
  • Building a Business Case for EDR

And much more. I hope you enjoy it. The next one is a comparison of the most visible EDR tools out there; it’ll be out in a few days.

The post Our first EDR paper is OUT! appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/1sD1R0J
via IFTTT