Wednesday, October 3, 2018

From my Gartner Blog - Endpoint Has Won, Why Bother With NTA?

One of my favorite blog posts from Anton is the one about the “SOC nuclear triad”. As he describes, SOCs should use logs, endpoint and network data in their threat detection and response efforts. But we also know that organizations don’t have infinite resources and will often have to decide which tool to deploy first (or ever). Leaving logs aside for a moment, as they usually have additional drivers (e.g. compliance), the decision eventually becomes: endpoint vs. network.

Considering a fair comparison, I believe endpoint wins. Some of the evidence we see out there apparently confirms that: just look at the number of EDR and NTA solutions available in the market. The number of calls I get about EDR, compared to NTA, is also higher. Not to mention that some surveys point to the same thing.

Endpoint also wins on technical aspects:

  • The network is not yours anymore: With cloud, including PaaS and SaaS, it becomes harder to find a network to plug your NTA technology into. The blind spots for network monitoring today are huge, and growing.
  • Encryption: To make things worse (or better, I should say), network traffic encryption is growing tremendously. Almost everything that used to be HTTP is now HTTPS. Visibility into the higher layers is very limited.
  • Endpoint has a better signal-to-noise ratio: This may be more contentious, but less deterministic detection seems to work better on the endpoint than on the network. What does that mean in practical terms? That detection approaches going beyond simple signature or indicator matching will generate better alerts, in terms of false positive rates, on the endpoint than on the network. Some people may disagree here, but that’s my impression from clients’ feedback about products on both sides.
  • You can see all the network stuff on the endpoint: If you really want to see network traffic, why not capture it on the endpoints? Some products have been doing that for years (a minimal sketch of the idea follows this list).
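To make that last point concrete, here is a minimal sketch of endpoint-side traffic capture in Python, assuming the scapy library and capture privileges on the host; it is only an illustration of the idea, not how any particular EDR product implements it.

```python
# Minimal sketch: capturing network metadata directly on the endpoint.
# Assumes scapy is installed (pip install scapy) and the script runs with
# enough privileges to sniff on the host's interfaces.
from scapy.all import sniff, IP, TCP

def summarize(pkt):
    # Print a one-line summary for each captured IP packet.
    if IP in pkt:
        proto = "TCP" if TCP in pkt else pkt[IP].proto
        print(f"{pkt[IP].src} -> {pkt[IP].dst} proto={proto} len={len(pkt)}")

# Grab a small sample of the traffic this endpoint sees.
sniff(prn=summarize, count=20, store=False)
```

An actual agent would of course enrich this with process context, which is exactly the advantage of capturing at the endpoint rather than on a network tap.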

I think these are some of the reasons why MDR service providers select EDR as the delivery mechanism for their services. Having an agent in place also gives them more fine-grained response capabilities, such as killing a process instead of blocking traffic to or from an IP address.
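As a rough illustration of that difference in response granularity, the sketch below uses the psutil library to terminate one specific offending process (the process name is a hypothetical placeholder), while a network-only control would have to block an IP address for the whole host.

```python
# Sketch of host-level response: kill a single offending process.
# Assumes psutil is installed; "malicious_tool.exe" is a made-up name.
import psutil

TARGET_NAME = "malicious_tool.exe"  # hypothetical process name

for proc in psutil.process_iter(["pid", "name"]):
    if proc.info["name"] == TARGET_NAME:
        print(f"Terminating PID {proc.info['pid']} ({proc.info['name']})")
        proc.kill()

# A network-level response would block traffic to/from an IP address,
# affecting every process on the host that talks to it.
```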

So, endpoint wins. Why would anyone still bother with NTA?

There are reasons that could reverse the preference from endpoint to network. You may prefer to rely on NTA when:

  • You need to protect IoT, OT/ICS, BYOD and mobile devices: Simply put, if you cannot install an agent on it, how would you do endpoint-based detection? Many of the technologies being connected to networks do not support agents or don’t have agents available. Sometimes they do, but you are not allowed to install the agent there.
  • Organizational challenges: Not all organizations are a perfectly friendly environment for endpoint monitoring. The “owners” of the endpoint technologies may simply reject the deployment of new agents. Your silo may not have enough power to force the deployment of agents but may have better access to network instrumentation. There are many situations beyond purely technical reasons that would force you to look for an alternative to endpoint technologies.
  • Price? I’m not sure here, but depending on the number of endpoints and the network architecture, it may be cheaper to monitor at the network level instead of on each endpoint. If you have a huge number of endpoints, but a network that is easy to instrument and monitor, the NTA bill could be friendlier than the EDR bill (a back-of-the-envelope comparison follows this list).
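To make the price point concrete, here is a back-of-the-envelope comparison; every number below is invented purely for illustration and is not real EDR or NTA pricing.

```python
# Hypothetical cost comparison -- all figures are made up for illustration.
endpoints = 50_000
edr_cost_per_endpoint = 30      # assumed yearly license per endpoint
nta_sensors = 10                # a network that is easy to instrument
nta_cost_per_sensor = 40_000    # assumed yearly cost per sensor

edr_total = endpoints * edr_cost_per_endpoint   # 1,500,000
nta_total = nta_sensors * nta_cost_per_sensor   # 400,000

print(f"EDR: ${edr_total:,} per year")
print(f"NTA: ${nta_total:,} per year")
```

Flip the assumptions (few endpoints, many hard-to-tap network segments) and the conclusion flips too.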

So, there are two reasons to still invest in NTA. First, PERFECT visibility REQUIRES both: if you are concerned about super advanced threats disabling agents or using BIOS/EFI rootkits, you need to compensate with non-endpoint visibility. Second, organizational or technology limitations may leave you with network as the only option.

 

Do you see any other reason why NTA would be the preferred option, instead of endpoint? Do you disagree that endpoint has won?

(this post, BTW, is the result of our initial discussions on upcoming NTA research…)

Friday, September 7, 2018

From my Gartner Blog - The “How To Build a SOC” Paper Update is OUT!

Anton and I have been probing social media for some time about trends related to SOCs and incident response teams. All that work finally made its way into our “How to Plan, Design, Operate and Evolve a SOC” paper. It is the same paper we published a couple of years ago, but updated to reflect some things we’ve seen evolving since the first version, such as:

  • SOAR tools evolution and growth in adoption
  • Further convergence between security monitoring and incident response
  • Higher adoption of services to supplement internal capabilities

We also updated the guidance figure to include more details for each phase.

Please provide feedback if you read it via https://surveys.gartner.com/s/gtppaperfeedback


Tuesday, July 31, 2018

From my Gartner Blog - Gartner Security and Risk Management Summit Brazil – 2018

The Gartner Security Summit Brazil is fast approaching and I’m happy to be part of it again. This time it’s even more special, for many reasons.

This is my first year as the chairman of the conference. It’s very rewarding to work on the content that will be delivered, selecting analysts and external speakers. I’m happy to have Anton coming this year. He has quite a fan base there; I’m sure they will all be excited to attend his sessions!

I was also able to bring two very interesting external speakers:

  • Dr. Deltan Dallagnol – One of the prosecutors working on the famous “Carwash Operation task force”
  • Dr. Jessica Barker – The human aspects of information security have always fascinated me. Dr. Barker is one of the specialists in this field, and she’s bringing her perspective on why things are not as simple as “users are dumb”.

We’ll also have a stellar team of Gartner analysts there. You can check who’s coming here.

Of course, I have my own share of sessions too:

TUESDAY, 14 AUGUST, 2018 / 09:15 AM – 10:15 AM – Scaling Trust and Resilience — Cut the Noise and Enable Action (The opening Keynote)

TUESDAY, 14 AUGUST, 2018 / 01:45 PM – 02:30 PM – Roundtable: How Did You Start Your Organization’s Detection and Response Capabilities?

TUESDAY, 14 AUGUST, 2018 / 03:45 PM – 04:30 PM – An Overview of the Threat Landscape in Brazil

WEDNESDAY, 15 AUGUST, 2018 / 09:15 AM – 10:00 AM – CARTA Inspired Cases in Brazil

WEDNESDAY, 15 AUGUST, 2018 / 12:15 PM – 01:15 PM – CISO Circle Lunch: Lessons Learned in the Equifax Breach and Other Incidents

WEDNESDAY, 15 AUGUST, 2018 / 01:45 PM – 02:30 PM – Roundtable: Lessons From Using Managed Security Services

 

If you’re planning to attend, please come and say hi :-)


Tuesday, April 17, 2018

From my Gartner Blog - Threat Simulation Open Source Projects

It’s crazy how many (free!) OSS projects are popping up for threat and attack simulation! We are working on research about Breach and Attack Simulation (BAS) tools, and we’ll certainly mention these projects, but I thought it would be valuable to provide a list of links on the blog as well. Here are all the projects that I’ve managed to track in the past few weeks.

So what? There’s no excuse not to run some of these and see how your environment and your detection and response practices react. Go ahead and give them a try :-)

  • Invoke-Adversary – Simulating Adversary Operations – Windows Security
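If you want a feel for what this kind of testing looks like before deploying one of the projects, the toy sketch below (my own illustration, not taken from any of them) runs a benign discovery-style command and reminds you to check whether your monitoring recorded it.

```python
# Toy simulation sketch: run a harmless discovery-style command and then
# verify it shows up in your endpoint telemetry / SIEM. Not taken from any
# of the projects listed above; test in a lab first anyway.
import subprocess

# "whoami" is a benign command present on Windows, Linux and macOS.
result = subprocess.run(["whoami"], capture_output=True, text=True)

print(f"Executed: whoami -> {result.stdout.strip()}")
print("Now check: did your EDR/SIEM record this process execution?")
```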


Wednesday, April 11, 2018

From my Gartner Blog - Big data And AI Craziness Is Ruining Security Innovation

I don’t care if you use Hadoop or grep+Perl scripts. If you can demonstrate enough performance to do what you claim you can do, that’s what matters to me from a backend point of view. Now, can you show me that your tool does what it should do better than your competitors?

There is a trend in the messages I’ve been hearing during vendor briefings over the past few months. Vendors spend a lot of time talking about how great their architecture is, all those Hadoop stack components so beautifully integrated, showing how aligned they are with the latest data management, machine learning and analytics techniques. They are proud of the stuff under the hood. But, very often, there are no verifiable claims about their effectiveness.

This is getting close to the insanity level. “We have AI”. “We are hadoop based”. “We do ML and Deep Learning”. It’s like the technology and techniques being used are the only thing to look for, and not the results! This may work to lure the VCs, but I cannot see how anyone would buy something that uses all this cool technology for…what exactly?

You see advanced analytics that provide “confidence levels” that do not change based on user feedback. Crazy visualizations that don’t tell you anything and could be easily replaced by a simple table view. “Deep Learning” for matching TI indicators to firewall logs. The list is endless.
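For context on that last example: matching threat intelligence indicators to firewall logs is a plain lookup problem, as the sketch below shows (the file names and log format are assumptions for illustration); no deep learning required.

```python
# Indicator matching is a set lookup, not a deep learning problem.
# File names and the "src,dst,port" log format are hypothetical.
with open("ti_indicators.txt") as f:            # one bad IP per line
    bad_ips = {line.strip() for line in f if line.strip()}

with open("firewall.log") as f:
    for line in f:
        src, dst, port = line.strip().split(",")
        if dst in bad_ips:
            print(f"Hit: {src} talked to known-bad {dst} on port {port}")
```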

My concern with this craziness is that vendors are mixing priorities here; they want to show they are using the latest techniques, but are not worried about showing how effective they are. There are so many attempts to be the next “next-gen”, but not enough attempts to help organizations solve their problems. This is killing innovation in security. I want to see how your tool makes threat detection 10x better, not that you can process 10x more data than your competitor.

There are cases where performance and capacity bottlenecks are the main pain point of an industry. Think SIEM before they started moving away from RDBMS, for example. But this is not always true. Now we see vendors happy to claim their products are based on Big Data technologies, but the use cases don’t require more than a few hundred megabytes of data stored. Stop that nonsense.

If you’re getting into this industry now, do so with a product that works better than what organizations already have in place: finding more threats, faster, and using fewer resources during detection and response. If your next-gen technology is not able to do so, it’s just a toy. And the message I hear from our clients is clear: we don’t want another toy, we want something that makes our lives easier.


Wednesday, March 7, 2018

From my Gartner Blog - The Virtual Patch Analyst

Is there a need, or a place, for a “virtual patch analyst”?

If you look at our guidance on vulnerability management, you’ll see that one of the key components we suggest clients consider is preparing mitigation actions for when immediate vulnerability remediation is not possible. We often see organizations scrambling to do it because they haven’t spent time in advance building the process, and they don’t have a menu of prepared mitigations to use. Those could include NIPS, WAFs, etc., but how many would be comfortable rushing the implementation of a “block” signature on those?

[Figure: mitigation analyst]

Normally this wouldn’t require an FTE, but big organizations could in fact have enough work to justify one. Keep in mind there are many mitigation options, including NIPS, WAF, HIPS, vendor workarounds, application control, additional monitoring, etc. So one of the challenges for such a role is the broad skillset required. Someone capable of understanding the implications of tweaking SMB protocol configuration on Windows and, at the same time, able to write a WAF signature? Hard, but not impossible.

Even if the complete skillset to create the mitigation actions is hard to find in a single professional, there’s still a lot of work around coordination and process management. The virtual patch analyst may not need all those skills, just a basic understanding of what is being done in each case. The bulk of the work is maintaining the menu of options, getting the right people engaged to develop them and coordinating the process when one needs to be implemented. Having such a role as part of a vulnerability management team is something a big enterprise could do to ensure unacceptable risks are mitigated while a definitive solution is not available.
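As a sketch of what that menu could look like in practice, something as simple as a structured catalog per vulnerability class is enough to start; the classes, options and owners below are invented for illustration, not Gartner guidance.

```python
# Hypothetical "menu" of prepared mitigations maintained by the virtual
# patch analyst. Classes, options and owners are invented examples.
MITIGATION_MENU = {
    "internet_facing_web_app": [
        {"control": "WAF virtual patch signature", "owner": "network team"},
        {"control": "additional monitoring / alerting", "owner": "SOC"},
    ],
    "windows_smb_exposure": [
        {"control": "vendor workaround (disable SMBv1)", "owner": "Windows team"},
        {"control": "NIPS block signature", "owner": "network team"},
    ],
    "unpatchable_legacy_app": [
        {"control": "application control / whitelisting", "owner": "endpoint team"},
        {"control": "network segmentation", "owner": "network team"},
    ],
}

def mitigations_for(vuln_class):
    # Return the prepared options to coordinate for a vulnerability class.
    return MITIGATION_MENU.get(vuln_class, [])

for option in mitigations_for("windows_smb_exposure"):
    print(f"- {option['control']} (engage: {option['owner']})")
```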

Is there anyone out there working in such a role? I would love to hear more about it!


Monday, February 26, 2018

From my Gartner Blog - It’s Not (Only) That The Basics Are Hard…

While working on our research on testing security practices, and also on BAS tools, I’ve noticed that a common question about adding more testing is “why not put some real effort into doing the basics instead of yet another security test?”. After all, there is no point in looking for holes when you don’t even have a functional vulnerability management program, right?

But the problem is not about not doing the basics. It is about making sure the basics are in place! Doing the basics is ok, but making sure your basics are working is not trivial.

Think about the top 5 of the famous “20 Critical Security Controls”:

  • Inventory of Authorized and Unauthorized Devices
  • Inventory of Authorized and Unauthorized Software
  • Secure Configurations for Hardware and Software
  • Continuous Vulnerability Assessment and Remediation
  • Controlled Use of Administrative Privileges

How do you know your processes to maintain device and software inventories are working? What about the hardening, vulnerability management and privileged access management processes? How confident are you that they are working properly?
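One way to make that verification concrete is a simple reconciliation between what the inventory says and what is actually observed; the minimal sketch below assumes hypothetical export files with one hostname per line.

```python
# Minimal sketch of verifying the inventory control: compare the asset
# inventory against hosts actually observed (e.g., a discovery scan export).
# File names and the one-hostname-per-line format are assumptions.
def load_hosts(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

inventory = load_hosts("cmdb_export.txt")
observed = load_hosts("network_scan.txt")

unknown_on_network = observed - inventory   # devices the inventory missed
stale_in_inventory = inventory - observed   # records nobody can find anymore

print(f"{len(unknown_on_network)} observed hosts missing from the inventory")
print(f"{len(stale_in_inventory)} inventory entries not seen on the network")
```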

If you think about the volume and frequency of changes in the technology environment of a big organization, it’s easy to see how basic security controls can fail. Of course, good processes are built with verification and validation steps to catch exceptions and mistakes, but they still happen. This is a base rate problem: with the complexity and high number of changes in the environment, even the best process out there will leave a few things behind. And when it is about security… the “thing left behind” may be a badly maintained CMS exposed to the Internet, an unpatched CVSS 10 vulnerability, or a credential with excessive privileges and a weak (maybe even DEFAULT!) password.

I’ve seen many pentests where the full compromise was achieved by exploiting those small mistakes and misconfigurations. The security team gets a report with a list of things to address that were really exceptions from processes doing a good job (again, you may argue that they are not doing a good job, but this is the point where I say there’s no such thing as a perfect control). So they clean those things up, double check the controls and think “this will definitely never happen again!”, just to see the next test, one year later, also succeed by exploiting a similar but different combination of unnoticed issues.

And that’s one of the main value drivers for BAS. Choosing to deploy a tool like that is recognizing that even good controls and processes will eventually fail, and putting something in place that will continuously try to find the issues left behind. By doing that in an automated manner you can cover the entire* environment consistently and very frequently, reducing the time those issues are exposed to real attackers. Is it another layer of control? Yes, it is. But it’s an automated layer that keeps the overhead to a minimum. If your basics are indeed working well, the findings should not be overwhelming to the point of becoming a distraction.

 

* – You may catch the funny gap in this rationale… you may also end up failing because the BAS tool is not checking the entire environment, due to an issue with inventory management. Or the tests are not working as intended because they are being blocked by a firewall that should have an exception rule for the tool; yes, using BAS is also a control, so it may fail too!
