Tuesday, October 15, 2019

From my Gartner Blog - Our New Research on Incident Response Has Been Published

We finally managed to publish our great new (in fact, refreshed) document on preparing for incident response, “How to Implement a Computer Security Incident Response Program”.

This is the first document from my colleague Michael Clark, who did a terrific job of modernizing material that had not been refreshed in a long time.

Some interesting pieces from this guidance document:

 

Organizations that practice their incident response program find gaps and areas for improvement. Certain exercises also make the computer security incident response team (CSIRT) more comfortable and better equipped when an incident occurs.

Include in the plan all the locations and services where your assets and data reside. This includes SaaS and company-controlled cloud assets. Many high-profile breaches involve elements outside the organization’s perimeter.

Detections that must be addressed are inevitable. Organizations are often forced into a response mode by attackers and third-party breach notifications.

As usual, we are always looking for detailed feedback on our papers. Feel free to drop some comments here if you read the doc.

The post Our New Research on Incident Response Has Been Published appeared first on Augusto Barros.




Monday, June 17, 2019

From my Gartner Blog - Presenting at the Gartner Security and Risk Management Summit DC 2019

This is literally a last-minute blog post about my sessions at this year’s Gartner Security and Risk Management Summit. This time I have three sessions:

Tuesday 18, 2:30PM – Debate: Changing Societal Perception of Cybersecurity: This is a very fun debate with my colleague Paul Proctor, where we discuss the need to change society’s perception of security. Paul is trying his best, but I don’t think he can win this one 🙂

Wednesday 19, 5:15PM – Creating Security Monitoring Use Cases With the MITRE ATT&CK Framework: The MITRE ATT&CK framework has quickly become a popular tool for many security operations practices. This session illustrates how it can be used to address some of the most common challenges of security operations centers: How do we create security monitoring use cases? How do we know we are looking for the right things? What should the starting list of use cases for our SIEM deployment be?

Thursday 20, 10:45AM – Further Evolution of Modern SOC: Automation, Delegation, Analytics: This presentation provides a structured approach to plan, establish and efficiently operate a modern SOC. Gartner clients with successful SOCs put the premium on people rather than process and technology. People and process overshadow technology as predictors for SOC success or failure. Among other things, it will cover questions such as: Do I need a SOC and can I afford it? Where can I rely on automation and where do I need to outsource or delegate? Can SOAR tools really automate my SOC?

This is one of the most fun weeks of the year for us Gartner analysts. If you are attending the event and the sessions above, please let me know if you like them, what could be different and how we can improve.

The post Presenting at the Gartner Security and Risk Management Summit DC 2019 appeared first on Augusto Barros.




Thursday, May 2, 2019

From my Gartner Blog - Considering Remediation Approaches For Vulnerability Prioritization

As Anton said, we are starting our work on vulnerability management this year. One of the points I’ve started to look at more carefully is how much the different patching approaches can affect how we prioritize vulnerabilities for remediation.

Expanding the prioritization of vulnerabilities to go beyond CVSS and include threat context is something we are seeing quickly move into the mainstream. Now it’s not uncommon to see organizations that look not only at how bad a vulnerability could be, but also at how actively it is being exploited, and even how likely it is to be exploited in the future (there is great work on prioritization models by some vendors out there). This really helps reduce the noise and focus on what matters.

But this is only helpful when you look at vulnerabilities individually. When they move to the other side of the fence, however, the problem takes on different nuances. IT operations doesn’t see vulnerabilities; it sees patches. The relationship between patches and vulnerabilities is not always one-to-one, and not all patches are equal. There are the “applied-periodically-automatically-with-no-intervention” types of patches, and there are also the “almost-never-released-and-when-installed-breaks-everything” types. The IT ops team may not even bother looking at the priority of the former but may want a very thorough justification for why they need to apply the latter.

Many vulnerability management programs, because they are managed by the security team, do not consider the characteristics of the patching process when applying their prioritization criteria. But if they want to be taken seriously by IT Ops, they should. So, my questions here are:

– When you prioritize vulnerabilities, do you incorporate “cost to patch” in your criteria?

– If you do so, how? Does your tool set allow you to do it? Where is that information coming from?

– If you define patching times by categories, have you considered patching characteristics for categorization? For example, do you define categories as something like “non-critical workstations” or like “Windows workstations with auto-updates on”?

– Do you look at the vendors of software deployed in your environment as part of this exercise? Patching Microsoft vs. Oracle, for example? Do you take into consideration the quality of the patches or release schedule of the vendor to define the patching times?

We like to stay away from the patching problem as it seems more like an IT operations problem than a security problem. But I believe that proper prioritization (or at least one that will be useful for the goal of fixing vulnerabilities) should include something about the required patches too. If that’s correct, what are the tools available for that and how are organizations doing it?
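To make the question concrete, here is a deliberately naive sketch of what folding patch effort into a priority score could look like. The effort categories, weights and scale below are illustrative assumptions only, not a recommended model or anything taken from a specific vulnerability management tool.

    # Hypothetical sketch: combine severity, threat context and patch effort into one score.
    # The effort categories and weighting are assumptions made up for this example.
    from dataclasses import dataclass

    PATCH_EFFORT = {
        "auto_update": 1.0,         # applied periodically, automatically, with no intervention
        "scheduled_window": 2.0,    # needs a maintenance window but is routine
        "breaks_everything": 5.0,   # rarely released, requires extensive testing
    }

    @dataclass
    class Vulnerability:
        cvss: float             # 0-10 base severity
        exploited_in_wild: bool  # simple stand-in for threat context
        patch_type: str          # one of the PATCH_EFFORT keys

    def priority(v: Vulnerability) -> float:
        """Higher value = fix sooner. Threat context raises it, patch effort lowers it."""
        threat_factor = 2.0 if v.exploited_in_wild else 1.0
        return (v.cvss * threat_factor) / PATCH_EFFORT[v.patch_type]

    easy = Vulnerability(cvss=7.5, exploited_in_wild=True, patch_type="auto_update")
    hard = Vulnerability(cvss=7.5, exploited_in_wild=True, patch_type="breaks_everything")
    print(priority(easy), priority(hard))  # same CVSS, very different priorities

Real tools would obviously need richer inputs (asset criticality, exposure, the patch-to-vulnerability mapping itself), but even this crude division shows how two vulnerabilities with identical CVSS scores can land in very different places once patch characteristics are considered.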

Please jump in and leave your experiences in the comments section!

 

 

The post Considering Remediation Approaches For Vulnerability Prioritization appeared first on Augusto Barros.




Friday, February 22, 2019

From my Gartner Blog - The Deception Paper Update is Out!

The good thing about Anton being away is that I’m always able to jump in and announce our new research ahead of him 🙂

So, the update to our “Applying Deception Technologies and Techniques to Improve Threat Detection and Response” paper has finally been published. This is a minor update, but as with every updated paper, it has changed for the better. Some of the highlights:

  • New and more beautiful pictures (thanks to our co-author Anna Belak for making our papers look 100% better on the graphics side!)
  • Additional guidance on how to test deception tools (tip: put your Breach and Attack Simulation tool to use!)
  • A better understanding of how deception platforms are evolving and which “must have” features you’ll currently find there

We also tuned key findings and recommendations, including these:

  • Evaluate deception against alternatives like NTA, EDR, SIEM and UEBA to detect stolen-data staging, lateral movements, internal reconnaissance and other attack actions within your environment.
  • Deploy deception-based detection approaches for environments that cannot use other security controls due to technical or economic reasons. Examples include IoT, SCADA, medical environments and highly distributed networks.

We are also working on a solution comparison in this area. There is a lot of exciting stuff in that one, so stay tuned. Meanwhile, please check out the new paper and don’t forget to provide feedback!

 

The post The Deception Paper Update is Out! appeared first on Augusto Barros.




Friday, January 4, 2019

From my Gartner Blog - More on “AI for cybersecurity”

There is a very important point to understand about the vendors using ML for threat detection.

Usually ML is used to identify known behavior, but with variable parameters. What does that mean? It means that many times we know what bad looks like, but not exactly what it looks like.

For example, we know that data exfiltration attempts will usually exploit certain protocols, such as DNS. But data exfiltration via DNS can be done in multiple ways. So, what we do to detect it is use ML to learn the normal behavior according to certain parameters: things like the amount of data in each query, the frequency of queries, etc. Anomalies in these parameters may point to exfiltration attempts.
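As a minimal sketch of what that kind of unsupervised approach can look like (the features, simulated baseline and threshold below are made-up assumptions, not taken from any specific product), an anomaly detector can be trained on each host’s normal DNS behavior and then asked to score new observation windows:

    # Illustrative sketch: unsupervised anomaly detection over per-host DNS features.
    # The feature choices, simulated baseline and contamination rate are assumptions
    # for this example only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Each row summarizes one host's DNS behavior in a time window:
    # [queries_per_minute, avg_query_name_length, unique_subdomains, avg_txt_record_bytes]
    baseline = np.column_stack([
        rng.normal(12, 3, 500),   # modest query rates
        rng.normal(20, 4, 500),   # short query names
        rng.normal(50, 10, 500),  # a limited set of subdomains
        rng.normal(2, 1, 500),    # almost no TXT payload
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    # A host suddenly sending many long queries with large TXT payloads stands out
    suspect = np.array([[300, 61, 900, 180]])
    print(model.predict(suspect))  # [-1] flags the window as anomalous, worth analyst review

Note that a human still chose which parameters to learn; the model only tells you when they deviate from what it has seen.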

In that case, ML helps us find something we already know about, but the definition is fuzzy enough to prevent us from using simple rules to detect it. This is an example of unsupervised ML used to detect anomalies relevant for threat detection. There are also many examples of using supervised ML to learn the fuzzy characteristics of bad behavior. But as you can see, a human had to understand the threat and how it operates, and then define the ML models that can detect the activity.

If you are about to scream “DEEP LEARNING!”, stop. You still need to know what data to look at with deep learning, and if you are using it to learn what bad looks like, you still need to tell it what is bad. We end up in the same place.

Although ML-based detection is a different detection method, the process is still very similar to how signatures are developed.

What hasn’t been done yet is AI that can find threats not defined by a human. Most vendors use misleading language to lead people to think they can do it, but that capability doesn’t exist. Considering this reality, my favorite question for these vendors is usually “what do you do to ensure new threats are properly identified and new models are developed to identify them?”. Isn’t it interesting that people buy “AI” but keep relying on the vendor’s human skills to keep it useful?

If you are a user of these technologies, you’ll usually need to know what the vendor does to keep what the tool looks for aligned with new threats. For mature shops, you also need to know whether the tool allows you to do that yourself, if you want or need to.

That’s a good way to start the conversation with a “Cybersecurity AI” vendor; see how fast they fall into the trap of “we can find unknown unknowns”.

The post More on “AI for cybersecurity” appeared first on Augusto Barros.




Tuesday, November 13, 2018

From my Gartner Blog - The new (old) SIEM papers are out!

As Anton already mentioned here and here, our update of the big SIEM paper was turned into two new papers:

How to Architect and Deploy a SIEM Solution
SIEM is expected to remain a mainstay of security monitoring, but many organizations are challenged with deploying the technology. This guidance framework provides a structured approach for technical professionals working to architect and deploy a SIEM solution.
Published: 16 Oct 2018
Anton Chuvakin | Anna Belak | Augusto Barros

How to Operate and Evolve a SIEM Solution
Managing and using a SIEM is difficult, and many projects are stuck in compliance or minimal value deployments. Most SIEM challenges come from the operations side, not broken tools. This guidance supports technical professionals focused on security working to operate, tune and utilize SIEM tools.
Published: 05 Nov 2018
Augusto Barros | Anton Chuvakin | Anna Belak

 

 

We decided to split the document so we could expand on those two main activities, deploying and operating a SIEM, without the worry of building a document so big it would scare away the readers. A great secondary outcome of that is we were able to put together separate guidance frameworks for each one of those activities. Some of my favorite pieces of each doc:

Deploy

“User and entity behavior analytics (UEBA)-SIEM convergence allows organizations to also include UEBA-centric use cases and machine learning (ML) capabilities in their deployment projects.” (A hype-less way to talk about “OMG AI AI!”)

“Staff shortages and threat landscape drive many organizations to SaaS SIEM, co-managed SIEM and service-heavy models for their SIEM deployments and operation.” (Because, in case you haven’t noticed, SIEM NEEDS PEOPLE TO WORK)

“Adopt the “output-driven SIEM” model, where nothing comes into a SIEM tool unless there is a clear knowledge of how it would be used.” (I know it’s old, but hey, this is our key advice for those deploying SIEM! So, still a favorite; see the small sketch after these “Deploy” highlights)

“Deploy use cases requiring constant baselining and anomaly detection, such as user account compromise detection, using ML/advanced analytics functions previously associated with UEBA” (because it’s not all marketing garbage; these use cases are the perfect fit for UEBA capabilities)
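Since the output-driven idea keeps coming up, here is a tiny, purely hypothetical sketch of what it means in practice. The use case names and log sources below are made up; the point is only that a source gets onboarded when a defined use case actually consumes it.

    # Hypothetical illustration of "output-driven" log onboarding: collect a source
    # only if at least one monitoring use case declares that it needs it.
    USE_CASES = {
        "brute_force_detection": ["windows_security_events", "vpn_logs"],
        "dns_exfiltration": ["dns_query_logs"],
        "privileged_account_misuse": ["windows_security_events", "iam_audit_logs"],
    }

    def should_onboard(log_source: str) -> bool:
        """Return True only if some defined use case actually consumes this source."""
        return any(log_source in sources for sources in USE_CASES.values())

    print(should_onboard("dns_query_logs"))  # True: consumed by dns_exfiltration
    print(should_onboard("netflow"))         # False: no defined use for it yet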

Operate

“Creating and refining security monitoring use cases is critical to an effective SIEM. User-created and customized detection logic delivers the most value.” (because ongoing SIEM value REQUIRES use case management)

“Develop the key operational processes for SIEM: run, watch and adapt. When necessary, fill the gaps with services such as MSS and co-managed SIEM” (we promoted “tune” to “adapt”)

“Prepare and keep enough resources to manage and troubleshoot log collection issues. New sources will be added; software upgrades change log collection methods and formats; environment changes often cause collection disruption.” (ML capabilities, big data tech, all that is cool, but a big chunk of SIEM work is still being able to get the data in)

 

The post The new (old) SIEM papers are out! appeared first on Augusto Barros.




Wednesday, October 3, 2018

From my Gartner Blog - Endpoint Has Won, Why Bother With NTA?

One of my favorite blog posts from Anton is the one about the “SOC nuclear triad”. As he describes, SOCs should use logs, endpoint and network data in their threat detection and response efforts. But we also know that organizations don’t have infinite resources and will often have to decide which tool to deploy first (or at all). Leaving logs aside for a moment, as they usually have additional drivers (i.e., compliance), the decision eventually becomes: endpoint vs. network.

Considering a fair comparison, I believe endpoint wins. Some of the evidence we see out there apparently confirms that. Just look at the number of EDR and NTA solutions available in the market. The number of calls I get about EDR, compared to NTA, is also higher. Not to mention that some surveys are also pointing to the same thing.

Endpoint also wins on technical aspects:

  • The network is not yours anymore: With cloud, including PaaS and SaaS, it becomes harder to find a network to plug your NTA technology into. The number of blind spots for network monitoring today is huge, and growing.
  • Encryption: To make things worse (or better, I should say), network traffic encryption is growing tremendously. Almost everything that used to be HTTP is now HTTPS. Visibility into the higher layers is very limited.
  • Endpoint has a better signal-to-noise ratio: This may be more contentious, but less deterministic detection seems to work better on the endpoint than on the network. What does that mean, in practical terms? That detection approaches that go beyond simple signature or indicator matching will generate better alerts, in terms of false positive rates, on the endpoint than on the network. Some people may disagree here, but that’s my impression from clients’ feedback about products on both sides.
  • You can see all network stuff on the endpoint: If you really want to see network traffic, why not capture it on the endpoints? Some products have been doing that for years.

I think these are some of the reasons why MDR service providers select EDR as the delivery mechanism for their services. Having an agent in place also gives them more fine-grained response capabilities, such as killing a process instead of blocking traffic to or from an IP address.

So, endpoint wins. Why would anyone still bother with NTA?

There are reasons that could reverse the preference from endpoint to network. You may prefer to rely on NTA when:

  • You need to protect IoT, OT/ICS, BYOD and mobile devices: Simply put, if you cannot install an agent on a device, how would you do endpoint-based detection? Many technologies being connected to networks do not support agents or don’t have agents available. Sometimes they do, but you are not allowed to install the agent there.
  • Organizational challenges: Not all organizations are a perfectly friendly environment for endpoint monitoring. The “owners” of the endpoint technologies may simply reject the deployment of new agents. Your silo may not have enough power to force the deployment of agents but may have better access to network instrumentation. There are many situations beyond simple technical reasons that would force you to look for an alternative to endpoint technologies.
  • Price? Not sure here, but depending on the number of endpoints and the network architecture, it may be cheaper to do monitoring at the network level instead of on each endpoint. If you have a huge number of endpoints, but a network that is easy to instrument and monitor, the bill for NTA could be friendlier than the EDR bill.

So, there are two reasons to still invest in NTA. First, PERFECT visibility REQUIRES both. If you are concerned about super advanced threats disabling agents, using BIOS/EFI rootkits, you need to compensate with non-endpoint visibility too. Second, organizational or technology limitations may leave you with network as the only option.

 

Do you see any other reason why NTA would be the preferred option, instead of endpoint? Do you disagree that endpoint has won?

(This post, BTW, is the result of our initial discussions on upcoming NTA research…)

The post Endpoint Has Won, Why Bother With NTA? appeared first on Augusto Barros.


