Thursday, May 2, 2019

From my Gartner Blog - Considering Remediation Approaches For Vulnerability Prioritization

As Anton said, we are starting our work on vulnerability management this year. One of the points I’ve started to look at more carefully is how much the different patching approaches can affect how we prioritize vulnerabilities for remediation.

Expanding the prioritization of vulnerabilities to go beyond CVSS and include threat context is something we see quickly moving into the mainstream. It's no longer uncommon to see organizations that look not only at how bad a vulnerability could be, but at how much it is being exploited, and even how likely it is to be exploited (there is great work on prioritization models by some vendors out there). This really helps reduce the noise and focus on what matters.

But this is helpful only when you look at vulnerabilities individually. When they cross to the other side of the fence, however, the problem takes on different nuances. IT operations don't see vulnerabilities; they see patches. The relationship between patches and vulnerabilities is not always one-to-one, and not all patches are equal. There are the "applied-periodically-automatically-with-no-intervention" types of patches, and there are the "almost-never-released-and-when-installed-breaks-everything" types. The IT Ops team may not even bother looking at the priority of the former, but may want a very thorough justification for why they need to apply the latter.
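To make that pivot concrete, here is a minimal Python sketch (all CVE and patch identifiers below are made up for illustration) that regroups a scanner-style vulnerability list into the patch-centric view IT Ops actually works with:

```python
from collections import defaultdict

# Hypothetical records: (CVE id, risk score, patch that remediates it).
# In practice this mapping comes from your scanner or vendor advisories.
vulns = [
    ("CVE-2019-1111", 9.1, "KB000001"),
    ("CVE-2019-2222", 7.4, "KB000001"),   # one cumulative patch, several CVEs
    ("CVE-2019-3333", 8.8, "VENDOR-Q2-CPU"),
]

# Pivot from the security view (vulnerabilities) to the IT Ops view (patches),
# carrying the highest risk score among the vulnerabilities each patch fixes.
by_patch = defaultdict(list)
for cve, score, patch in vulns:
    by_patch[patch].append((cve, score))

for patch, fixed in by_patch.items():
    top = max(score for _, score in fixed)
    print(f"{patch}: fixes {len(fixed)} CVEs, highest risk {top}")
```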

Many vulnerability management programs, because they are managed by the security team, do not consider the characteristics of the patching process when applying their prioritization criteria. But if they want to be taken seriously by IT Ops, they should. So, my questions here are:

– When you prioritize vulnerabilities, do you incorporate “cost to patch” in your criteria?

– If you do so, how? Does your tool set allow you to do it? Where is that information coming from?

– If you define patching times by categories, have you considered patching characteristics for categorization? For example, do you define categories as something like “non-critical workstations” or like “windows workstations with auto-updates on”?

– Do you look at the vendors of software deployed in your environment as part of this exercise? Patching Microsoft vs. Oracle, for example? Do you take into consideration the quality of the patches or release schedule of the vendor to define the patching times?

We like to stay away from the patching problem, as it seems more like an IT operations problem than a security problem. But I believe that proper prioritization (or at least one that is useful for the goal of actually getting vulnerabilities fixed) should say something about the required patches too. If that's correct, what tools are available for it, and how are organizations doing it?
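As one illustration of what folding "cost to patch" into the score could look like, here is a small sketch. The patch classes and weights are entirely hypothetical, not a recommended scheme:

```python
# Hypothetical friction factors for the patch types described above.
PATCH_FRICTION = {
    "auto_update": 1.0,        # applied periodically with no intervention
    "standard_window": 2.0,    # routine change-window deployment
    "breaks_everything": 5.0,  # rarely released, high regression risk
}

def remediation_priority(cvss: float, threat_multiplier: float, patch_class: str) -> float:
    """Higher score = patch sooner. threat_multiplier > 1 when the
    vulnerability is known to be exploited in the wild."""
    return (cvss * threat_multiplier) / PATCH_FRICTION[patch_class]

# The same CVSS 9.0 vulnerability ranks very differently by patch type:
print(remediation_priority(9.0, 1.5, "auto_update"))        # 13.5
print(remediation_priority(9.0, 1.5, "breaks_everything"))  # 2.7
```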

Please jump in and leave your experiences in the comments section!

Friday, February 22, 2019

From my Gartner Blog - The Deception Paper Update is Out!

The good thing about when Anton is away is that I can jump in and announce our new research ahead of him 🙂

So, the update to our "Applying Deception Technologies and Techniques to Improve Threat Detection and Response" paper has finally been published. This is a minor update, but as with every updated paper, it has changed for the better. Some of the highlights:

  • New and more beautiful pictures (thanks to our co-author Anna Belak for making our papers look 100% better on the graphics side!)
  • Additional guidance on how to test deception tools (tip: put your Breach and Attack Simulation tool to use!)
  • A better understanding of how deception platforms are evolving and which "must have" features you'll currently find in them

We also tuned key findings and recommendations, including these:

  • Evaluate deception against alternatives like NTA, EDR, SIEM and UEBA to detect stolen-data staging, lateral movements, internal reconnaissance and other attack actions within your environment.
  • Deploy deception-based detection approaches for environments that cannot use other security controls due to technical or economic reasons. Examples include IoT, SCADA, medical environments and highly distributed networks.

We are also working on a solutions comparison in this area. There's a lot of exciting stuff in that one, so stay tuned. Meanwhile, please check out the new paper and don't forget to provide feedback!


Friday, January 4, 2019

From my Gartner Blog - More on “AI for cybersecurity”

There is a very important point to understand about the vendors using ML for threat detection.

Usually, ML is used to identify known behavior, but with variable parameters. What does that mean? It means that we often know what bad looks like, but not exactly what it looks like.

For example, we know that data exfiltration attempts will usually exploit certain protocols, such as DNS. But data exfiltration via DNS can be done in multiple ways. So, what we do to detect it is use ML to learn the normal behavior according to certain parameters: things like the amount of data in each query, the frequency of queries, etc. Anomalies in these parameters may point to exfiltration attempts.
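As a minimal sketch of that idea (not any specific vendor's implementation), the snippet below trains scikit-learn's IsolationForest, an unsupervised anomaly detector, on synthetic "normal" values for two of those parameters and then scores a suspicious client:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Learned "normal" behavior: short query names at a modest rate.
normal = np.column_stack([
    rng.normal(25, 5, 1000),    # DNS query name length (characters)
    rng.normal(10, 3, 1000),    # queries per minute from one client
])

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# A client suddenly sending long, frequent queries: the DNS-tunneling pattern.
suspect = np.array([[180, 300]])
print(model.predict(suspect))   # -1 means anomaly
```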

In that case, ML helps us find something we already know about, but the definition is fuzzy enough to prevent us from using simple rules to detect it. This is an example of unsupervised ML used to detect anomalies that are relevant for threat detection. There are also many examples of using supervised ML to learn the fuzzy characteristics of bad behavior (see the sketch below). But as you can see, in both cases a human had to understand the threat and how it operates, and then define the ML models that can detect the activity.
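For contrast, here is a supervised version of the same toy problem: a classifier trained on the same features, but with human-assigned labels. The data is synthetic; the point is that both the feature choice and the labels encode a human's knowledge of the threat:

```python
from sklearn.ensemble import RandomForestClassifier

# Features: (query name length, queries per minute), labeled by an analyst.
X = [
    [24, 9], [30, 12], [22, 8],   # benign: short names, low rate
    [170, 250], [200, 310],       # labeled exfiltration attempts
]
y = [0, 0, 0, 1, 1]               # 0 = benign, 1 = exfiltration

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[190, 280]]))  # [1] -> flagged as exfiltration
```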

If you are about to scream "DEEP LEARNING!", stop. You still need to know what data to look at with deep learning, and if you are using it to learn what bad looks like, you still need to tell it what bad is. We end up in the same place.

Although ML-based detection is a different detection method, the process is still very similar to how signatures are developed.

What hasn't been done yet is AI that can find threats that were not defined by a human. Most vendors use misleading language to make people think they can do it, but that capability doesn't exist. Considering this reality, my favorite question for these vendors is usually "what do you do to ensure new threats are properly identified and new models are developed to identify them?". Isn't it interesting that people buy "AI" but keep relying on the vendor's human skills to keep it useful?

If you are a user of these technologies, you'll usually need to know what the vendor does to keep what the tool looks for aligned with new threats. If you're a mature shop, you'll also want to know whether the tool allows you to do that yourself, if you want or need to.

That’s a good way to start the conversation with a “Cybersecurity AI” vendor; see how fast they fall into the trap of “we can find unknown unknowns”.
