Friday, October 25, 2019

From my Gartner Blog - The New Vulnerability Management Guidance Framework

After a huge delay I can finally announce that the new version of our Vulnerability Management Guidance Framework is out! Although it is a refresh of a document that has gone through many updates (even before my Gartner time), this one has some very nice new stuff to mention. First, we refreshed our VM cycle, and it is now closer to the reality of most organizations.

This version includes a revamped prioritization section, as well as additional content on vulnerability assessment (VA) options. In the past we left most of the VA content to another document, but now it's back in the VM guidance.

Some interesting pieces of this version:

  • One of the most common ways to fail at VM is by simply sending a report with thousands of vulnerabilities to the operations team to fix. Successful VM programs leverage advanced prioritization techniques and automated workflow tools to streamline the handover to the team responsible for remediation.
  • Organizations adopting DevOps practices need an approach that integrates with continuous integration/continuous delivery (CI/CD) cycles and addresses issues at preproduction stages.
  • Include the identification of underlying issues as one of the main objectives of the VM process. Although it is still important to find and address individual vulnerabilities, VM should also provide insight into areas that need to be improved in the organization’s security posture.
  • [On VA scanning frequency] The ultimate frequency goal should reflect the value of providing refreshed vulnerability data to consumer processes, such as patching and security monitoring. If those processes will not benefit from more frequent scans, there is really no point in trying to achieve a higher frequency.
  • Mitigation can often be the first line of defense, especially if it can be implemented quickly. However, mitigated vulnerabilities are not gone. They still need to be fixed eventually.
  • All exceptions must have an expiration date. Do not allow indefinite exceptions.
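The prioritization point above can be made concrete with a minimal sketch: instead of handing operations a raw report sorted by CVSS, combine base severity with threat context and asset value before the handover. All field names, weights and CVE labels here are illustrative assumptions, not from the Gartner document.

```python
# Minimal sketch of risk-based vulnerability prioritization.
# Field names and weights are hypothetical, for illustration only.

def risk_score(vuln):
    """Combine base severity with threat context and asset value."""
    exploit_factor = 1.5 if vuln["exploited_in_wild"] else 1.0
    return vuln["cvss"] * exploit_factor * vuln["asset_criticality"]

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploited_in_wild": False, "asset_criticality": 0.2},
    {"id": "CVE-B", "cvss": 7.5, "exploited_in_wild": True,  "asset_criticality": 1.0},
]

# CVE-B outranks CVE-A despite its lower CVSS score, because it is
# actively exploited and sits on a critical asset.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v["id"], round(risk_score(v), 2))
```

The point of a sketch like this is the ordering, not the exact numbers: the remediation team receives a short, ranked list instead of thousands of raw findings.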

In general, it's a far clearer and easier-to-read document now. Thanks Anna Belak for your magical wordsmithing powers!

We are always looking for detailed feedback on our papers. Feel free to drop some comments here if you read the doc.

The post The New Vulnerability Management Guidance Framework appeared first on Augusto Barros.



from Augusto Barros https://ift.tt/2JlGOKL
via IFTTT

Tuesday, October 15, 2019

From my Gartner Blog - Our New Research on Incident Response Has Been Published

We finally managed to publish our great new (in fact, refreshed) document on preparing for incident response, “How to Implement a Computer Security Incident Response Program”.

This is the first document of my colleague Michael Clark, who did a terrific job of modernizing some stuff from a long time ago.

Some interesting pieces from this guidance document:


  • Organizations that practice their incident response program find gaps and areas for improvement. Certain exercises also make the computer security incident response team (CSIRT) more comfortable and better equipped when an incident occurs.
  • Include all the locations and services where your assets and data reside in the plan. This includes SaaS and company-controlled cloud assets. Many high-profile breaches involve elements outside the organization's perimeter.
  • Detections that must be addressed are inevitable. Organizations are often forced into response mode by attackers and third-party breach notifications.

As usual, we are always looking for detailed feedback on our papers. Feel free to drop some comments here if you read the doc.

The post Our New Research on Incident Response Has Been Published appeared first on Augusto Barros.



from Augusto Barros https://ift.tt/2IRhDza
via IFTTT

Monday, June 17, 2019

From my Gartner Blog - Presenting at the Gartner Security and Risk Management Summit DC 2019

This is literally a last minute blog post about my sessions at this year’s Gartner Security and Risk Management Summit. This time I have three sessions:

Tuesday 18, 2:30PM – Debate: Changing Societal Perception of Cybersecurity: This is a very fun debate with my colleague Paul Proctor, where we discuss the need to change society’s perception of security. Paul is trying his best, but I don’t think he can win this one 🙂

Wednesday 19, 5:15PM – Creating Security Monitoring Use Cases With the MITRE ATT&CK Framework: The MITRE ATT&CK framework has quickly become a popular tool for many security operations practices. This session illustrates how it can be used to address some of the most common challenges of security operations centers: How do we create security monitoring use cases? How do we know if we are looking for the right things? What should be the starting list of use cases for our SIEM deployment?
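One way to make the use-case questions above concrete is to track coverage as data. A minimal sketch follows: the use-case names and the "required" set are made up for illustration, while the technique IDs are real ATT&CK identifiers.

```python
# Sketch: tracking SIEM use-case coverage against MITRE ATT&CK techniques.
# Use-case names and the required set are hypothetical examples.

use_cases = {
    "Suspicious PowerShell execution": ["T1059"],  # Command and Scripting Interpreter
    "Brute-force login detection":     ["T1110"],  # Brute Force
}

# Techniques the organization has decided it must be able to detect.
required = {"T1059", "T1110", "T1071"}  # T1071: Application Layer Protocol

covered = {t for techniques in use_cases.values() for t in techniques}
gaps = required - covered
print("coverage gaps:", sorted(gaps))  # → ['T1071']
```

Even a mapping this simple turns "what should our starting list of use cases be?" into a gap analysis that can be reviewed and prioritized.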

Thursday 20, 10:45AM – Further Evolution of Modern SOC: Automation, Delegation, Analytics: This presentation provides a structured approach to plan, establish and efficiently operate a modern SOC. Gartner clients with successful SOCs put a premium on people rather than on process and technology; people and process overshadow technology as predictors of SOC success or failure. Among other things, it will cover questions such as: Do I need a SOC, and can I afford it? Where can I rely on automation, and where do I need to outsource or delegate? Can SOAR tools really automate my SOC?

This is one of the most fun weeks of the year for us Gartner analysts. If you are attending the event and the sessions above, please let me know whether you liked them, what could be different and how we can improve.

The post Presenting at the Gartner Security and Risk Management Summit DC 2019 appeared first on Augusto Barros.



from Augusto Barros https://gtnr.it/2Im4DSs
via IFTTT

Thursday, May 2, 2019

From my Gartner Blog - Considering Remediation Approaches For Vulnerability Prioritization

As Anton said, we are starting our work on vulnerability management this year. One of the points I’ve started to look at more carefully is how much the different patching approaches can affect how we prioritize vulnerabilities for remediation.

Expanding vulnerability prioritization beyond CVSS to include threat context is something we see quickly moving to the mainstream. It's no longer uncommon to see organizations that look not only at how bad a vulnerability could be, but at how much it is being exploited, and even how much it will be (there is great work on prioritization models by some vendors out there). This really helps reduce the noise and focus on what matters.

But this is only helpful when you look at vulnerabilities individually. When they move to the other side of the fence, however, the problem takes on different nuances. IT operations doesn't see vulnerabilities; it sees patches. The relationship between patches and vulnerabilities is not always one-to-one, and not all patches are equal. There are the "applied-periodically-automatically-with-no-intervention" types of patches, and there are also the "almost-never-released-and-when-installed-breaks-everything" types. The IT Ops team may not even bother looking at the priority of the former, but may want a very thorough justification for applying the latter.

Many vulnerability management programs, because they are managed by the security team, do not consider the characteristics of the patching process when applying their prioritization criteria. But if they want to be taken seriously by IT Ops, they should. So, my questions here are:

– When you prioritize vulnerabilities, do you incorporate “cost to patch” in your criteria?

– If you do so, how? Does your tool set allow you to do it? Where is that information coming from?

– If you define patching times by categories, have you considered patching characteristics for categorization? For example, do you define categories as something like “non-critical workstations” or like “windows workstations with auto-updates on”?

– Do you look at the vendors of software deployed in your environment as part of this exercise? Patching Microsoft vs. Oracle, for example? Do you take into consideration the quality of the patches or release schedule of the vendor to define the patching times?

We like to stay away from the patching problem as it seems more like an IT operations problem than a security problem. But I believe that proper prioritization (or at least one that will be useful for the goal of fixing vulnerabilities) should include something about the required patches too. If that’s correct, what are the tools available for that and how are organizations doing it?
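One hypothetical answer to the "cost to patch" question above is to rank vulnerabilities by risk reduced per unit of patching effort. The sketch below is entirely illustrative: the effort scale, category names and numbers are assumptions, not a recommended model.

```python
# Sketch: folding "cost to patch" into vulnerability prioritization.
# The effort scale and all values are hypothetical, for illustration only.

PATCH_EFFORT = {
    "auto_update": 1,   # applied periodically, automatically, no intervention
    "standard": 3,      # routine maintenance window
    "disruptive": 10,   # rarely released, breaks things, needs heavy testing
}

def remediation_priority(vuln):
    """Higher = fix sooner: risk addressed per unit of patching effort."""
    return vuln["risk"] / PATCH_EFFORT[vuln["patch_type"]]

vulns = [
    {"id": "CVE-X", "risk": 9.0, "patch_type": "disruptive"},
    {"id": "CVE-Y", "risk": 6.0, "patch_type": "auto_update"},
]

# CVE-Y comes first: lower risk, but nearly free to remediate.
ranked = sorted(vulns, key=remediation_priority, reverse=True)
print([v["id"] for v in ranked])  # → ['CVE-Y', 'CVE-X']
```

A ratio like this is crude, but it at least makes the trade-off visible to IT Ops instead of burying it inside a severity-only ranking.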

Please jump in and leave your experiences in the comments section!


The post Considering Remediation Approaches For Vulnerability Prioritization appeared first on Augusto Barros.



from Augusto Barros https://gtnr.it/2VcMTAS
via IFTTT

Friday, February 22, 2019

From my Gartner Blog - The Deception Paper Update is Out!

The good thing about Anton being away is that I can jump in and announce our new research ahead of him 🙂

So, the update to our "Applying Deception Technologies and Techniques to Improve Threat Detection and Response" paper has finally been published. This is a minor update, but as with every updated paper, it has changed for the better. Some of the highlights:

  • New and more beautiful pictures (thanks to our co-author Anna Belak for making our papers look 100% better on the graphics side!)
  • Additional guidance on how to test deception tools (tip: put your Breach and Attack Simulation tool to use!)
  • A better understanding of how deception platforms are evolving and what "must have" features you'll currently find in them

We also tuned key findings and recommendations, including these:

  • Evaluate deception against alternatives like NTA, EDR, SIEM and UEBA to detect stolen-data staging, lateral movements, internal reconnaissance and other attack actions within your environment.
  • Deploy deception-based detection approaches for environments that cannot use other security controls due to technical or economic reasons. Examples include IoT, SCADA, medical environments and highly distributed networks.

We are also working on a solutions comparison in this area. There's a lot of exciting stuff in that one, so stay tuned. Meanwhile, please check out the new paper and don't forget to provide feedback!


The post The Deception Paper Update is Out! appeared first on Augusto Barros.



from Augusto Barros https://ift.tt/2Xjdgm8
via IFTTT

Friday, January 4, 2019

From my Gartner Blog - More on “AI for cybersecurity”

There is a very important point to understand about the vendors using ML for threat detection.

Usually ML is used to identify known behavior, but with variable parameters. What does that mean? It means that many times we know what bad looks like, but not exactly what it looks like.

For example, we know that data exfiltration attempts will usually exploit certain protocols, such as DNS. But data exfiltration via DNS can be done in multiple ways. So, what we do to detect it is use ML to learn the normal behavior according to certain parameters: things like the amount of data in each query, the frequency of queries, and so on. Anomalies in these parameters may point to exfiltration attempts.
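The idea can be sketched in a few lines: learn a baseline for each parameter from normal traffic, then flag samples that deviate too far from it. The baseline numbers, features and threshold below are illustrative assumptions, not real tuning guidance.

```python
# Sketch: unsupervised anomaly detection on DNS query features.
# Baseline values, features and the threshold are hypothetical.
from statistics import mean, stdev

# "Normal" behavior learned from historical traffic:
# (query_length_bytes, queries_per_minute) observations.
baseline = [(32, 4), (28, 6), (35, 5), (30, 4), (33, 7), (29, 5)]

def is_anomalous(sample, history, threshold=3.0):
    """Flag a sample whose z-score on any feature exceeds the threshold."""
    for i, value in enumerate(sample):
        feature = [h[i] for h in history]
        mu, sigma = mean(feature), stdev(feature)
        if sigma and abs(value - mu) / sigma > threshold:
            return True
    return False

# Very long queries at high frequency: the shape of DNS tunneling.
print(is_anomalous((220, 300), baseline))  # → True
print(is_anomalous((31, 5), baseline))     # → False
```

Note what the sketch makes obvious: a human chose the protocol, the features and the threshold. The ML part only learns what "normal" looks like for those parameters.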

In that case, ML helps us find something we already know about, but the definition is fuzzy enough to prevent us from using simple rules to detect it. This is an example of unsupervised ML used to detect relevant anomalies for threat detection. There are also many examples of using supervised ML to learn the fuzzy characteristics of bad behavior. But as you can see, a human had to understand the threat and how it operates, and then define the ML models that can detect the activity.

If you are about to scream “DEEP LEARNING!”, stop. You still need to know what data to look at with deep learning, and if you are using it to learn what bad looks like, you still need to tell it what is bad. We ended up at the same place.

Although ML based detection is a different detection method, the process is still very similar to how signatures are developed.

What hasn't been done yet is AI that can find threats not defined by a human. Most vendors use misleading language to make people think they can do it, but that doesn't exist. Considering this reality, my favorite question for these vendors is usually "what do you do to ensure new threats are properly identified and new models developed to identify them?". Isn't it interesting that people buy "AI" but keep relying on the vendor's human skills to keep it useful?

If you are a user of these technologies, you'll usually need to know what the vendor does to keep what the tool looks for aligned with new threats. For mature shops, you also need to know whether the tool allows you to do that yourself, if you want or need to.

That’s a good way to start the conversation with a “Cybersecurity AI” vendor; see how fast they fall into the trap of “we can find unknown unknowns”.

The post More on “AI for cybersecurity” appeared first on Augusto Barros.



from Augusto Barros https://gtnr.it/2AwXE4H
via IFTTT