Tuesday, November 13, 2018

From my Gartner Blog - The new (old) SIEM papers are out!

As Anton already mentioned here and here, our update of the big SIEM paper was turned into two new papers:

How to Architect and Deploy a SIEM Solution
SIEM is expected to remain a mainstay of security monitoring, but many organizations are challenged with deploying the technology. This guidance framework provides a structured approach for technical professionals working to architect and deploy a SIEM solution.
Published: 16 Oct 2018
Anton Chuvakin | Anna Belak | Augusto Barros

How to Operate and Evolve a SIEM Solution
Managing and using a SIEM is difficult, and many projects are stuck in compliance or minimal value deployments. Most SIEM challenges come from the operations side, not broken tools. This guidance supports technical professionals focused on security working to operate, tune and utilize SIEM tools.
Published: 05 Nov 2018
Augusto Barros | Anton Chuvakin | Anna Belak

 

 

We decided to split the document so we could expand on those two main activities, deploying and operating a SIEM, without the worry of building a document so big it would scare readers away. A great secondary outcome is that we were able to put together separate guidance frameworks for each of those activities. Some of my favorite pieces of each doc:

Deploy

“User and entity behavior analytics (UEBA)-SIEM convergence allows organizations to also include UEBA-centric use cases and machine learning (ML) capabilities in their deployment projects.” (A hype-less way to talk about “OMG AI AI!”)

“Staff shortages and threat landscape drive many organizations to SaaS SIEM, co-managed SIEM and service-heavy models for their SIEM deployments and operation.” (Because, in case you haven’t noticed, SIEM NEEDS PEOPLE TO WORK)

“Adopt the “output-driven SIEM” model, where nothing comes into a SIEM tool unless there is a clear knowledge of how it would be used.” (I know it’s old, but hey, this is our key advice for those deploying SIEM! So, still a favorite)
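
As a toy illustration of what “output-driven” can look like in practice (this sketch is ours, not from the paper, and all source and use case names are made up), an onboarding gate can simply refuse any log source that no documented use case consumes:

```python
# Hypothetical illustration of an "output-driven" onboarding gate: a log source
# is only admitted into the SIEM if it maps to at least one documented use case.

USE_CASE_MAP = {
    # log source -> use cases that consume it (names are invented)
    "windows_security_events": ["account_compromise", "privilege_escalation"],
    "vpn_gateway_logs": ["impossible_travel", "account_compromise"],
    "dns_query_logs": ["c2_beaconing"],
    "printer_syslog": [],  # nobody has defined how this would be used
}

def approve_onboarding(source: str) -> bool:
    """Return True only if the source feeds at least one documented use case."""
    use_cases = USE_CASE_MAP.get(source, [])
    if not use_cases:
        print(f"REJECT  {source}: no use case documented, do not onboard")
        return False
    print(f"ONBOARD {source}: feeds {', '.join(use_cases)}")
    return True

for src in USE_CASE_MAP:
    approve_onboarding(src)
```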

“Deploy use cases requiring constant baselining and anomaly detection, such as user account compromise detection, using ML/advanced analytics functions previously associated with UEBA” (because it’s not all marketing garbage; these use cases are the perfect fit for UEBA capabilities)
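
To make “constant baselining and anomaly detection” a bit more concrete, here is a deliberately oversimplified sketch, not how any particular UEBA product works: build a per-user baseline of daily login counts and flag large deviations. The data and threshold are invented for illustration.

```python
from statistics import mean, stdev

# Invented history of daily login counts per user; a real baseline would use many
# more features (source hosts, times of day, resources touched, peer groups...).
history = {
    "alice": [12, 10, 11, 13, 12, 11, 12],
    "bob": [3, 4, 2, 3, 3, 4, 3],
}
today = {"alice": 12, "bob": 41}  # bob's account suddenly logs in 41 times

def is_anomalous(user: str, count: int, threshold: float = 3.0) -> bool:
    """Flag counts sitting more than `threshold` standard deviations from the baseline."""
    baseline = history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

for user, count in today.items():
    if is_anomalous(user, count):
        print(f"ALERT: {user} login volume {count} deviates strongly from baseline")
```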

Operate

“Creating and refining security monitoring use cases is critical to an effective SIEM. User-created and customized detection logic delivers the most value.” (because ongoing SIEM value REQUIRES use case management)

“Develop the key operational processes for SIEM: run, watch and adapt. When necessary, fill the gaps with services such as MSS and co-managed SIEM” (we promoted “tune” to “adapt”)

“Prepare and keep enough resources to manage and troubleshoot log collection issues. New sources will be added; software upgrades change log collection methods and formats; environment changes often cause collection disruption.” (ML capabilities, big data tech, all that is cool, but a big chunk of SIEM work is still being able to get the data in)
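
A simple way to stay ahead of collection breakage is a “silent source” check: compare when each source last delivered events against the gap you are willing to tolerate. The sketch below assumes a hypothetical inventory with last-seen timestamps; in practice you would pull this from your SIEM's own source-health reports or APIs.

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical collection inventory: when each source was last seen and the
# maximum silence tolerated before we call it a collection issue.
sources = {
    "firewall-syslog": {"last_seen": now - timedelta(minutes=5), "max_gap": timedelta(minutes=15)},
    "ad-security-log": {"last_seen": now - timedelta(hours=7), "max_gap": timedelta(hours=1)},
    "proxy-access-log": {"last_seen": now - timedelta(minutes=30), "max_gap": timedelta(hours=2)},
}

def silent_sources(inventory):
    """Return the sources whose silence exceeds the tolerated gap."""
    return [name for name, src in inventory.items() if now - src["last_seen"] > src["max_gap"]]

for name in silent_sources(sources):
    print(f"Collection issue: {name} has gone silent - check agents, parsers and recent upgrades")
```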

 


Wednesday, October 3, 2018

From my Gartner Blog - Endpoint Has Won, Why Bother With NTA?

One of my favorite blog posts from Anton is the one about the “SOC nuclear triad”. As he describes, SOCs should use logs, endpoint and network data in their threat detection and response efforts. But we also know that organizations don’t have infinite resources and will often have to decide which tool to deploy first (or ever). Leaving logs aside for a moment, as they usually have additional drivers (e.g., compliance), the decision eventually becomes: endpoint vs. network.

In a fair comparison, I believe endpoint wins. Some of the evidence we see out there apparently confirms that: just look at the number of EDR and NTA solutions available in the market. I also get far more calls about EDR than about NTA. Not to mention that some surveys point to the same thing.

Endpoint also wins on technical aspects:

  • The network is not yours anymore: With cloud, including PaaS and SaaS, it becomes harder to find a network to plug your NTA technology into. The number of blind spots for network monitoring today is huge, and growing.
  • Encryption: To make things worse (or better, I should say), network traffic encryption is growing tremendously. Almost everything that used to be HTTP is now HTTPS. Visibility into the higher layers is very limited.
  • Endpoint has better signal to noise: This may be more contentious, but less deterministic detection seems to work better on the endpoint than on the network. What does that mean, in practical terms? That detection approaches that go beyond simple signature or indicator matching will generate better alerts, in terms of false positive rates, on the endpoint rather than on the network. Some people may disagree here, but that’s my impression from clients’ feedback about products from both sides.
  • You can see all network stuff on the endpoint: If you really want to see network traffic, why not capture it on the endpoints? Some products have been doing that for years.
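
As a small illustration of that last point, the sketch below uses the psutil library to list established TCP connections together with the owning process, an attribution a pure network sensor has to work much harder for. It usually needs elevated privileges for full visibility, and it is a toy, not an EDR.

```python
import psutil  # pip install psutil; full visibility typically requires elevated privileges

def established_connections():
    """Yield (process name, local address, remote address) for established TCP connections."""
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            name = "unavailable"
        yield name, f"{conn.laddr.ip}:{conn.laddr.port}", f"{conn.raddr.ip}:{conn.raddr.port}"

for proc, local, remote in established_connections():
    print(f"{proc:25} {local:22} -> {remote}")
```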

I think these are some of the reasons why MDR service providers select EDR as the delivery mechanism for their services. Having an agent in place also gives them more fine-grained response capabilities, such as killing a process instead of blocking traffic to or from an IP address.

So, endpoint wins. Why would anyone still bother with NTA?

There are reasons that could reverse the preference from endpoint to network. You may prefer to rely on NTA when:

  • You need to protect IoT, OT/ICS, BYOD and mobile devices: Simply put, if you cannot install an agent on it, how would you do endpoint-based detection? Many technologies being connected to networks do not support agents or have no agent available. Sometimes they do, but you are not allowed to install the agent there.
  • Organizational challenges: Not all organizations are a perfectly friendly environment for endpoint monitoring. The “owners” of the endpoint technologies may simply reject the deployment of new agents. Your silo may not have enough power to force the deployment of agents but may have better access to network instrumentation. There are many situations beyond simple technical reasons that could force you to look for an alternative to endpoint technologies.
  • Price? Not sure here, but depending on the number of endpoints and the network architecture, it may be cheaper to do monitoring on the network level instead of on each endpoint. If you have a huge number of endpoints, but a network that is easy to instrument and monitor, the bill for NTA could be friendlier than the EDR bill.

So, there are two reasons to still invest in NTA. First, PERFECT visibility REQUIRES both. If you are concerned about super advanced threats disabling agents, using BIOS/EFI rootkits, you need to compensate with non-endpoint visibility too. Second, organizational or technology limitations may leave you with network as the only option.

 

Do you see any other reason why NTA would be the preferred option, instead of endpoint? Do you disagree that endpoint has won?

(this post, BTW is the result of our initial discussions on upcoming NTA research…)

 

 

 

 

 


Friday, September 7, 2018

From my Gartner Blog - The “How To Build a SOC” Paper Update is OUT!

Anton and I have been probing social media for some time about the trends related to SOC and incident response teams. All that work finally made its way into our “How to Plan, Design, Operate and Evolve a SOC” paper. It is the same paper we published a couple of years ago, but updated to reflect some things we’ve seen evolving since the first version, such as:

  • SOAR tools evolution and growth in adoption
  • Further convergence between security monitoring and incident response
  • Higher adoption of services to supplement internal capabilities

We also updated the guidance figure to include more details for each phase.

If you read it, please provide feedback via https://surveys.gartner.com/s/gtppaperfeedback


Tuesday, July 31, 2018

From my Gartner Blog - Gartner Security and Risk Management Summit Brazil – 2018

The Gartner Security Summit Brazil is fast approaching and I’m happy to be part of it again. This time it’s even more special, for many reasons.

This is my first year as the chairman of the conference. It’s very rewarding to work on the content that will be delivered and to select analysts and external speakers. I’m happy to have Anton coming this year. He has quite a fan base there; I’m sure they will all be excited to attend his sessions!

I was also able to bring two very interesting external speakers:

  • Dr. Deltan Dallagnol – One of the prosecutors working on the famous “Carwash Operation task force”
  • Dr. Jessica Barker – The human aspects of information security have always fascinated me. Dr. Barker is one of the specialists in this field, and she’s bringing her perspective on why things are not as simple as “users are dumb”.

We’ll also have a stellar team of Gartner analysts there. You can check who’s coming here.

Of course, I have my own share of sessions too:

TUESDAY, 14 AUGUST, 2018 / 09:15 AM – 10:15 AM – Scaling Trust and Resilience — Cut the Noise and Enable Action (The opening Keynote)

TUESDAY, 14 AUGUST, 2018 / 01:45 PM – 02:30 PM – Roundtable: How Did You Start Your Organization’s Detection and Response Capabilities?

TUESDAY, 14 AUGUST, 2018 / 03:45 PM – 04:30 PM – An Overview of the Threat Landscape in Brazil

WEDNESDAY, 15 AUGUST, 2018 / 09:15 AM – 10:00 AM – CARTA Inspired Cases in Brazil

WEDNESDAY, 15 AUGUST, 2018 / 12:15 PM – 01:15 PM – CISO Circle Lunch: Lessons Learned in the Equifax Breach and Other Incidents

WEDNESDAY, 15 AUGUST, 2018 / 01:45 PM – 02:30 PM – Roundtable: Lessons From Using Managed Security Services

 

If you’re planning to attend, please come and say hi :-)


Tuesday, April 17, 2018

From my Gartner Blog - Threat Simulation Open Source Projects

It’s crazy how many (free!) OSS projects are popping up for threat and attack simulation! We are working on research about Breach and Attack Simulation (BAS) tools, and we’ll certainly mention these projects, but I thought it would be valuable to provide a list of links on the blog as well. Here are all the projects that I’ve managed to track in the past few weeks.

  • Invoke-Adversary – Simulating Adversary Operations – Windows Security

So what? There’s no excuse not to run some of these and see how your environment and your detection and response practices react. Go ahead and try some of them :-)


Wednesday, April 11, 2018

From my Gartner Blog - Big data And AI Craziness Is Ruining Security Innovation

I don’t care if you use Hadoop or grep+Perl scripts. If you can demonstrate enough performance to do what you claim you can do, that’s what matters to me from a backend point of view. Now, can you show me that your tool does what it should do better than your competitors?

There is a trend in the messages I’ve been hearing during vendor briefings over the past few months. Vendors spend a lot of time talking about how great their architecture is, all those Hadoop stack components so beautifully integrated, showing how aligned they are with the latest data management, machine learning and analytics practices. They are proud of the stuff under the hood. But, very often, there are no verifiable claims about effectiveness.

This is getting close to the insanity level. “We have AI”. “We are hadoop based”. “We do ML and Deep Learning”. It’s like the technology and techniques being used are the only thing to look for, and not the results! This may work to lure the VCs, but I cannot see how anyone would buy something that uses all this cool technology for…what exactly?

You see advanced analytics that provide “confidence levels” that do not change based on user feedback. Crazy visualizations that don’t tell you anything and could be easily replaced by a simple table view. “Deep Learning” for matching TI indicators to firewall logs. The list is endless.

My concern with this craziness is that vendors are mixing up priorities: they want to show they are using the latest techniques, but they are not worried about showing how effective those techniques are. There are so many attempts to be the next “next-gen”, but not enough attempts to help organizations solve their problems. This is killing innovation in security. I want to see how your tool makes threat detection 10x better, not that you can process 10x more data than your competitor.

There are cases where performance and capacity bottlenecks are the main pain point of an industry. Think SIEM before they started moving away from RDBMS, for example. But this is not always true. Now we see vendors happy to claim their products are based on Big Data technologies, but the use cases don’t require more than a few hundred megabytes of data stored. Stop that nonsense.

If you’re getting into this industry now, do so with a product that will work better than what organizations already have in place: finding more threats, faster, and using fewer resources during detection and response. If your next-gen technology is not able to do so, it’s just a toy. And the message I hear from our clients is clear: we don’t want another toy, we want something that makes our lives easier.

 


Wednesday, March 7, 2018

From my Gartner Blog - The Virtual Patch Analyst

Is there a need, or place for a “virtual patch analyst”?

If you look at our guidance on vulnerability management, you’ll see that one of the key components we suggest our clients consider is preparing mitigation actions for when immediate vulnerability remediation is not possible. We often see organizations scrambling to do it because they haven’t spent time in advance building the process, and they don’t have a menu of prepared mitigations to use. Those could include NIPS, WAFs, etc., but how many would be comfortable rushing the implementation of a “block” signature on those?

[Figure: the mitigation analyst]

Normally this wouldn’t require an FTE, but big organizations could in fact have enough work on this to justify one. Keep in mind there are many mitigation options, including NIPS, WAF, HIPS, vendor workarounds, application control, additional monitoring, etc. So, one of the challenges for such a role to exist is the broad skill set required. Someone capable of understanding the implications of tweaking SMB protocol configuration on Windows and, at the same time, able to write a WAF signature? Hard, but not impossible.

Even if the complete skill set to create the mitigation actions is hard to find in a single professional, there’s still a lot of work around coordination and process management. The virtual patch analyst may not need all those skills, just a basic understanding of what is being done in each case. The bulk of the work is maintaining the menu of options, getting the right people engaged to develop them and coordinating the process when one needs to be implemented. Having such a role as part of a vulnerability management team is something a big enterprise could do to ensure unacceptable risks are mitigated while a definitive solution is not available.
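
As a purely hypothetical illustration, the “menu of options” could be as simple as a structure mapping exposure classes to pre-approved mitigations, the control that implements each one and the team to engage; the entries below are invented examples.

```python
# Illustrative "mitigation menu": pre-approved compensating controls the virtual
# patch analyst can reach for when immediate patching is not an option.
MITIGATION_MENU = {
    "exposed_web_app_vuln": [
        {"control": "WAF", "action": "virtual patch rule for the vulnerable URI", "owner": "network team"},
        {"control": "NIPS", "action": "enable vendor signature in block mode", "owner": "SOC"},
    ],
    "os_service_vuln": [
        {"control": "host config", "action": "apply vendor workaround (e.g., disable the service)", "owner": "server team"},
        {"control": "HIPS", "action": "block exploit behavior on affected hosts", "owner": "endpoint team"},
    ],
    "unpatchable_legacy_system": [
        {"control": "segmentation", "action": "restrict access to a management VLAN", "owner": "network team"},
        {"control": "monitoring", "action": "add a targeted detection use case", "owner": "SOC"},
    ],
}

def options_for(exposure_class: str):
    """Return the pre-approved mitigations for an exposure class, if any were prepared."""
    return MITIGATION_MENU.get(exposure_class, [])

for option in options_for("exposed_web_app_vuln"):
    print(f"{option['control']}: {option['action']} (engage {option['owner']})")
```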

Is anyone out there working in such a role? I would love to hear more about it!

 


Monday, February 26, 2018

From my Gartner Blog - It’s Not (Only) That The Basics Are Hard…

While working on our research on testing security practices, and also on BAS tools, I’ve noticed that a common question about adding more testing is “why not put some real effort into doing the basics instead of yet another security test?”. After all, there is no point in looking for holes when you don’t even have a functional vulnerability management program, right?

But the problem is not about not doing the basics. It is about making sure the basics are in place! Doing the basics is ok, but making sure your basics are working is not trivial.

Think about the top 5 of the famous “20 Critical Security Controls”:

  • Inventory of Authorized and Unauthorized Devices
  • Inventory of Authorized and Unauthorized Software
  • Secure Configurations for Hardware and Software
  • Continuous Vulnerability Assessment and Remediation
  • Controlled Use of Administrative Privileges

How do you know your processes to maintain devices and software inventories are working? What about the hardening, vulnerability management and privileged access management processes? How confident are you that they are working properly?
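
One concrete way to answer those questions is to test the control’s output instead of trusting the process: for example, diff what a discovery scan actually sees against the authorized inventory. A minimal sketch with made-up data:

```python
# Made-up data: what the CMDB says exists vs. what a discovery scan actually found.
authorized_inventory = {"10.0.1.10", "10.0.1.11", "10.0.1.20"}
discovered_by_scan = {"10.0.1.10", "10.0.1.11", "10.0.1.20", "10.0.1.99"}

unknown_devices = discovered_by_scan - authorized_inventory  # inventory control missed these
missing_devices = authorized_inventory - discovered_by_scan  # stale inventory entries

for ip in sorted(unknown_devices):
    print(f"Unknown device on the network, inventory control failed: {ip}")
for ip in sorted(missing_devices):
    print(f"In inventory but not seen by the scan, possibly stale: {ip}")
```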

If you think about the volume and frequency of changes in the technology environment of a big organization, it’s easy to see how the basic security controls can fail. Of course, good processes are built with verification and validation steps to catch exceptions and mistakes, but they still happen. This is a base rate problem: with the complexity and high number of changes in the environment, even the best process out there will leave a few things behind. And when it is about security, the “thing left behind” may be a badly maintained CMS exposed to the internet, an unpatched CVSS 10 vulnerability, or a credential with excessive privileges and a weak (maybe even DEFAULT!) password.

I’ve seen many pentests where the full compromise was achieved by exploiting those small mistakes and misconfigurations. The security team gets a report with a list of things to address that were really exceptions to processes that are doing a good job (again, you may argue that they are not doing a good job, but this is where I fall back on the point that there’s no such thing as a perfect control). So they clean those things up, double-check the controls and think “this will definitely never happen again!”, only to see the next test, one year later, also succeed by exploiting a similar, but different, combination of unnoticed issues.

And that’s one of the main value drivers for BAS. Choosing to deploy a tool like that is recognizing that even good controls and processes will eventually fail, and putting something in place that will continuously try to find the issues left behind. By doing that in an automated manner you can cover the entire* environment consistently and very frequently, reducing the time those issues are exposed to real attackers. Is it another layer of control? Yes, it is. But it is an automated layer that keeps the overhead to a minimum. If your basics are indeed working well, the findings should not be overwhelming to the point of becoming a distraction.

 

* – You may catch the funny gap in this rationale… you may also end up failing because the BAS tool is not checking the entire environment, due to an issue with inventory management. Or the tests may not work as intended because they are being blocked by a firewall that should have an exception rule for the tool. Yes, using BAS is also a control, so it may fail too!

 


Thursday, February 22, 2018

From my Gartner Blog - SOAR paper is out!

Anton beat me this time on blogging about our new research, but I’ll do it anyway :-)

Our document about Security Orchestration, Automation and Response (SOAR) tools includes some interesting findings. Anton provided some quotes on his post, but I’ll mention some of my favorites too:

  • SIEM tools are often used to aggregate multiple sources of information, but are limited in their ability to query additional data sources and verification services after an initial set of conditions are met. The usual approach is to do as much as possible with that set of conditions and then provide the alert to an analyst for triage, where those additional queries take place.
    However, when the initial conditions set (whether via rules or algorithms, such as machine learning) generate too many alerts, the use case can be infeasible due to the high cost of the manual steps analysts require for triage. The ability to automate postalert queries, such as submitting indicators of compromise (IOCs) to TI services or even artifacts to external sandboxes, allows organizations to implement more threat detection use cases with a high number of initial alerts. (Some of the noisy detection use cases actually deliver valuable insights for as long as they can be quickly triaged.) The automated triage by SOAR effectively acts as the remaining stages of the multistage detection process.

 

  • Security alert triage, investigation and response are often performed in multistep processes, with new information and evidence being gathered or generated continuously. Organizations also need to record the actions taken for each alert or incident, for reasons varying from simple operations management or knowledge management all the way to auditor requests and compliance requirements. Some small SOCs would usually try to store all that data into simple repositories as file shares or spreadsheets. However, most of them will quickly realize that a system capable of recording the data in a structured format, usually while controlling the process workflow, is required to handle the increasing volume and complexity.

 

  • Alert triage and incident response are practices that rely on multiple deployed security tools (most often SIEM and EDR tools), including external services such as sandboxes and TI service portals. Without integration between those tools, the analyst would usually resort to inefficient copy and paste from one user interface to the other, which can introduce its own kind of configuration errors. Also, when operating in an incident, analysts are pushed for time and under a lot of pressure, which also can lead to mistakes.
    Notably, such inefficiencies don’t just reduce productivity, but also increase staff burnout and make staff retention harder. SIRP tools provided guidance to the analyst about which steps to take and a centralized location to record the data. However, the tools were still essentially manual.
    With the addition of orchestration and automation to SIRP, these tools moved from records and documentation management to a more central role in security operations. The process workflow documented in the tool is no longer used only as guidance to the analysts. O&A moves these tools to an active role in performing tasks of those processes, and occasionally the entire end-to-end process. Based on Gartner for Technical Professionals inquiry data, the most visible tools covering both SIRP and O&A spaces today are Phantom Cyber, Demisto, IBM Resilient, ServiceNow SecOps and Swimlane.
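
As a toy illustration of the “automated postalert queries” idea from the first excerpt above, the sketch below enriches an alert’s indicators against a threat intelligence service and decides whether to escalate. The TI endpoint, API key, response format and threshold are entirely hypothetical; a real SOAR playbook would use the vendor’s own connectors.

```python
import requests  # the TI service below is a placeholder, not a real endpoint

TI_URL = "https://ti.example.com/api/v1/indicator"  # hypothetical
API_KEY = "REPLACE_ME"

def enrich_indicator(indicator: str) -> int:
    """Return a 0-100 risk score, assuming a hypothetical JSON {'score': N} response."""
    resp = requests.get(TI_URL, params={"q": indicator},
                        headers={"Authorization": f"Bearer {API_KEY}"}, timeout=10)
    resp.raise_for_status()
    return int(resp.json().get("score", 0))

def triage(alert: dict, escalate_at: int = 70) -> str:
    """Automated post-alert triage: escalate if any attached IOC scores above the threshold."""
    scores = {ioc: enrich_indicator(ioc) for ioc in alert.get("iocs", [])}
    verdict = "escalate" if any(s >= escalate_at for s in scores.values()) else "auto-close"
    print(f"Alert {alert['id']}: {scores} -> {verdict}")
    return verdict

# triage({"id": "A-1042", "iocs": ["203.0.113.7", "bad-domain.example"]})  # needs a real TI backend
```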

 

And don’t forget to PROVIDE YOUR FEEDBACK to the paper via http://surveys.gartner.com/s/gtppaperfeedback


Wednesday, February 14, 2018

From my Gartner Blog - BAS and Red Teams Will Kill The Pentest

With our research on security testing methods and Breach and Attack Simulation (BAS) tools, we ended up with an interesting discussion about the role of the pentest. I think we can risk saying that pentesting, as it is today, will cease to exist (I’ll avoid the trap of saying “pentesting is dead”, ok? :-)).

Let me clarify things here before everyone starts to scream! Simple pentesting, for pure vulnerability finding goals and with no intent to replicate threat behavior, will vanish. This is different from the pentest that many people will prefer to call “red team exercises”, those very high quality exercises where you really try to replicate the approach and methods of real threats. That approach is in fact growing, and that growth is one of the factors that will kill the vanilla pentest.

But to kill the pentest we need pressure from two sides. The red team is replacing the pentest on the high-maturity side, but what about the low-maturity side? Well, that’s where vulnerability assessments and BAS come into play.

If you look at how pentests are performed today, discounting the red team style of exercises, you’ll see that it’s not very different from a good vulnerability assessment. But still, it’s different, because it involves exploiting vulnerabilities, and that exploitation can move the assessor to another point in the network that can be used for another round of scanning and exploitation. And that’s where BAS tools come into play.

BAS automates the simple pentest, performing the basic cycle of scan/exploit/repeat-until-everything-is-owned. If you can do that with a simple click of a button, why would you use a human to do it? The tool can ensure consistency, provide better reporting and do it faster. Not to mention requiring fewer skills (you don’t even need to know how to use Metasploit!). So, with BAS, you either go for human tests because you want a red team, or you use the tool for the simple style of testing.
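
Purely for illustration, that cycle looks something like the loop below. Everything here is a stub operating on a made-up network model; nothing exploits anything real, it just shows the scan/exploit/pivot/repeat shape that BAS tools automate.

```python
# Purely illustrative model of the scan/exploit/pivot cycle a BAS tool automates.
# All functions are stubs over invented data; nothing here exploits anything real.
NETWORK = {
    "dmz-web": {"weakness": "unpatched CMS", "reachable_from": ["internet"]},
    "app-srv": {"weakness": None, "reachable_from": ["dmz-web"]},
    "db-srv": {"weakness": "weak credentials", "reachable_from": ["app-srv", "dmz-web"]},
}

def scan(foothold):
    """Return hosts reachable from the current foothold (stub)."""
    return [h for h, meta in NETWORK.items() if foothold in meta["reachable_from"]]

def exploit(host):
    """Pretend to exploit: succeeds only if the model says a weakness exists (stub)."""
    return NETWORK[host]["weakness"] is not None

owned, frontier = set(), ["internet"]
while frontier:  # repeat until no new footholds are gained
    foothold = frontier.pop()
    for host in scan(foothold):
        if host not in owned and exploit(host):
            print(f"compromised {host} via {NETWORK[host]['weakness']} (from {foothold})")
            owned.add(host)
            frontier.append(host)  # pivot: the new foothold seeds the next round

print(f"simulation finished, footholds gained: {sorted(owned)}")
```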

But, you may argue, not everyone will buy and deploy those tools, so there’s still room for the service providers selling basic pentesting. Well… no! BAS will not be offered only as something you buy and deploy in your environment. It will also, like all other security tools, be offered as SaaS. With that, you don’t need to buy and deploy it anymore; you can “rent it” for a single exercise. This is simpler than hiring pentesters, and provides better results (again, I’m starting to sound repetitive, but excluding the really great pentests…). So, why would you hire people to do it?


 

In the future, your options for testing your security will be vulnerability scanning, BAS or red teaming. Each one with specific objectives, advantages and disadvantages, but there’s no need for people running basic pentests anymore.

If you currently use those simple pentests, do you see your organization eventually moving to this new scenario? If not, I’d love to know why!

 


Tuesday, January 30, 2018

From my Gartner Blog - The “working with an MSSP” Tome Is Here

As Anton just posted, the new version of the famous “How to Work With an MSSP to Improve Security” has just been published. I’m very happy to become a co-author (together with Anton and Mike Wonham) of this document, as it is one of the documents I most frequently refer clients to during inquiry calls. After all, it’s very common to start a call about SIEM, UEBA, NTA or EDR and end it talking about MSS, after the client realizes that using those tools requires people – a lot of people – on their side.

Among lots of exciting new content (this is indeed a looooong document :-)), there is a new guidance framework for those looking for (and eventually hiring) an MSSP:

[Figure: guidance framework for working with an MSSP]

You’ll notice that we added “joint content development” as part of the Operating phase. This is something we also added to the recently updated Use Cases document. After all, there’s no reason to believe the MSSP knows everything you want them to detect for you; so, how do you tell them that? If you hired an MSSP, do you know if you still have people on your side capable of working with them to develop content?

There is also an important reminder for organizations expecting to have the entire security monitoring process managed by the service provider:

“When customers perform triage, they will often find cases of false positives. Many organizations don’t report these back to the MSSP, only to complain later that they keep receiving the same false-positive alerts repeatedly! Although the MSSP is responsible for tuning the detection systems it manages, such tuning typically requires feedback from the customer. This feedback goes beyond a simple statement like, “This alert is a false positive.” Adding context about why the alert is a false positive will allow the MSSP to perform the appropriate tuning. It will also avoid cases where entire classes of alerts are disabled due to uncertainty around what type of activity is causing them”

I’ve had countless conversations with organizations complaining about the false positives sent by the MSSP. But it’s impressive how many of them are not prepared to report those events back to the provider in a way that would allow the MSSP to tune its systems and avoid a similar occurrence in the future. This is a recurrent theme in the document: you MUST WORK WITH THE MSSP, not expect them to figure everything out alone.
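
What “feedback with context” can look like in practice, as a hypothetical example rather than any MSSP’s required format: a structured record instead of a one-line complaint.

```python
import json

# Hypothetical structured false-positive feedback a customer could send back to
# the MSSP; the fields are illustrative, not a vendor-defined schema.
fp_feedback = {
    "alert_id": "MSSP-2018-014532",
    "rule": "Multiple failed logins followed by success",
    "verdict": "false_positive",
    "why": "Service account app-batch01 retries with an old password after each deployment",
    "context": {
        "asset": "app-batch01",
        "asset_role": "batch job runner, no interactive logins expected",
        "expected_behavior": "bursts of failures right after the weekly deployment window",
    },
    "suggested_tuning": "Suppress this rule for app-batch01 during the deployment window "
                        "rather than disabling it for all service accounts",
}

print(json.dumps(fp_feedback, indent=2))
```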

We have this and far more in the doc; please read it and don’t forget to provide feedback: http://surveys.gartner.com/s/gtppaperfeedback

 

 


Wednesday, January 17, 2018

From my Gartner Blog - Security Monitoring Use Cases, the UPDATE!

Posting about updated documents is often boring, but this time I’m talking about my favorite Gartner document, as usual co-authored with Anton: “How to Develop and Maintain Security Monitoring Use Cases”!

This document describes an approach to identify, prioritize, implement and manage security monitoring use cases. Of course, it has a lot on SIEM, as that’s usually the tool chosen to implement those use cases, but we revised it to ensure we are also covering technologies such as UEBA, EDR and even SOAR. If we consider that detection can often be implemented as a multistage process, that’s a natural evolution!

The major changes are:

  • Revamping the main graphic of the document to better illustrate how the process works (below)
  • Putting more emphasis on some of the artifacts generated by the process, such as use case lists (a hypothetical example of one such entry follows this list)
  • Evolving the language about doing use case development as software development to “doing it as AGILE software development”
  • Reinforcing the types of use cases that are usually managed by this process: threat, controls and asset oriented
  • Including tips for use case management when working with an MSSP (we are writing more about this in our upcoming MSSP doc, BTW)
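
As a hypothetical example of one entry in such a use case list (field names are ours, not prescribed by the document), something like the following captures the type, the data it needs, how it is implemented and who owns it:

```python
# Hypothetical entry in a security monitoring use case list; field names are
# illustrative, not prescribed by the research note.
use_case = {
    "name": "Brute force against VPN accounts",
    "type": "threat",  # threat, control or asset oriented
    "priority": "high",
    "data_sources": ["vpn_gateway_logs", "windows_security_events"],
    "detection": "rule: >20 failed logins per account in 10 minutes, then a success",
    "implemented_in": "SIEM",  # could also be UEBA, EDR or a SOAR playbook stage
    "response": "playbook: verify with the user, reset credentials, review source IPs",
    "owner": "detection engineering",
    "status": "in production",
    "last_tested": "2018-01-10",
}

missing = [k for k in ("data_sources", "detection", "response") if not use_case.get(k)]
print("ready for review" if not missing else f"incomplete, missing: {missing}")
```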

The summary diagram for the framework can be seen below:

Again, we are always looking for feedback on our research. If you have anything to say about this document, please use this page to do it.


Friday, January 12, 2018

From my Gartner Blog - Automation – Why Only Now?

As we ramp up our research on SOAR and start looking at some interesting tools for automated security testing, something crossed my mind: Why are we only seeing security operations automation and security testing automation technologies come to market now? I mean, automating workflows is not new technology, so why are these specific workflows only being automated now?

I believe the answer includes multiple reasons, but I see two as key:

The need: Of course it would have been great to automate SOC tasks back in 2005, but at that time environments were more stable and the volume of threat activity was lower. Because virtualization was still not everywhere, the number of systems running was also smaller. The slower pace of change, the smaller technology environments and a less aggressive threat landscape were still compatible with mostly manual security operations. With cloud, DevOps, crazy state-sponsored threats and very short breach-to-impact scenarios like ransomware, it is imperative for organizations to be able to adapt and react faster. At the required scale, that’s only possible with more automation.

The tools: Yes, the ability to write an automated workflow was already there, but integration was still painful! There were only some APIs available from the different security (or even general IT) tools, and most of the time they were not standardized and not as simple as the abundant REST APIs we see today. In the past, if you wanted to fully automate a SOC playbook you would probably need to include all required capabilities in a single tool, without the option to orchestrate the work of multiple independent solutions. So, it is not that automation tools were not available; the broad menu of tools that could be centrally orchestrated didn’t exist.
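
A small sketch of why this matters: with REST APIs everywhere, chaining two tools into a simple playbook becomes a handful of HTTP calls. The endpoints, parameters and payloads below are hypothetical placeholders, not any vendor’s actual API.

```python
import requests  # endpoints and payloads below are hypothetical, not real vendor APIs

SIEM_API = "https://siem.example.com/api/alerts"          # placeholder
EDR_API = "https://edr.example.com/api/hosts/{}/isolate"  # placeholder

def fetch_new_alerts():
    """Pull unhandled high-severity alerts from the (hypothetical) SIEM."""
    resp = requests.get(SIEM_API, params={"status": "new", "severity": "high"}, timeout=10)
    resp.raise_for_status()
    return resp.json()

def isolate_host(hostname: str):
    """Ask the (hypothetical) EDR to network-isolate a host."""
    resp = requests.post(EDR_API.format(hostname), json={"reason": "ransomware playbook"}, timeout=10)
    resp.raise_for_status()

def ransomware_playbook():
    for alert in fetch_new_alerts():
        if alert.get("category") == "ransomware":
            isolate_host(alert["host"])
            print(f"isolated {alert['host']} for alert {alert['id']}")

# ransomware_playbook()  # would only run against real (or mocked) endpoints
```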

 

The increase in need is independent of how the security industry evolves, but I see the second reason in a very positive way. We are constantly bashing the vendor community for releasing new tools based on exaggerated marketing claims, but we should also acknowledge this movement of making tools friendlier to integration as a positive evolution of the industry. There have been many standards and attempts to create common languages and protocols to integrate tools, but apparently opening them up for integration via REST APIs has provided far more benefits than initiatives like IF-MAP, DXL, CIF, IODEF and IDMEF.

What else do you think is driving this automation trend in security?

 


Tuesday, January 9, 2018

From my Gartner Blog - Threat Simulation – How real does it have to be?

We are starting our research on “Testing Security”. So far we’ve been working with a fairly broad scope, as Anton’s post on the topic explained. One of the things we are looking at is the group of tools that has been called “breach and attack simulation tools”.

Tools that automate exploitation have been around for years; we can mention things like Metasploit, Core Impact, CANVAS and others. Those are tools used by pentesters so they don’t need to rewrite their exploits for each specific condition they find during their test. So what’s different in the new wave of tools?

The idea of using these tools is to have a consistent way to continuously test your controls, from prevention to detection (and even response). They are not focused on making exploitation easier, but to run an entire intrusion scenario, end to end, to check how the controls in the attacked environment react to each step. They go beyond exploitation and include automation of the many steps in the attack chain, including command and control, lateral movement, resource access and exfiltration. They also add a layer of reporting and visualization that allows the users to see how each attack is performed and what the tool was (or was not) able to accomplish.

We are just starting to talk to some of the vendors in this space, but I noticed there’s one point they seem to argue about: how real should these tests be? Some of the vendors in this space noticed there is strong resistance from many organizations to running automated exploitation tools in their production environments, so they built their tools to only simulate the more aggressive steps of the attacks. Some of these tools even take the approach of “assuming compromise”, bypassing the exploitation phase and focusing on the later stages of the attack chain.

It is an interesting strategy to provide something acceptable to the more conservative organizations, but there are some limitations that come with that approach. In fact, I see two:

First, many prevention and detection technologies are focused on the exploitation actions. If there are no exploitation actions, they just won’t be tested. So, if the tool replicates a “lateral movement scenario” on an endpoint using an arbitrary fake credential that mimics the outcome of successfully running mimikatz, no tool or approach that looks for signs of (or prevents) that attack technique being used (like this) will be tested. If the organization uses deception breadcrumbs, for example, they wouldn’t be touched, so there’s no way to check whether they would actually be extracted and used during an attack. The same goes for monitoring signs of the exploit or even preventing it from working using exploit mitigation technologies. So, the testing scenarios would be, in a certain way, incomplete.

Second, exploitation is not necessarily something that happens only at the beginning of the attack chain. It is often used as one of the first steps to get code running in the target environment, but many exploitation actions can come later in the attack chain, for privilege elevation, lateral movement and resource access. So, assuming that exploitation plays only a small role at the beginning of the attack chain is a very risky approach when you are deciding what needs to be tested in the entire control set.

Looking at these two points in isolation suggests that breach and attack simulation tools should perform real exploitation to properly test the existing controls. But apart from the concerns about disrupting production systems, there are other challenges with incorporating exploits in automated tests. The vendor or the organization using the system now needs the ability to incorporate new exploits as new vulnerabilities come up, and to check whether each one of them is safe or could damage the ability of the real systems to protect themselves after the test is completed (some exploits disable security controls permanently, so using them during a test could actually reduce the security of the environment). The approach of avoiding exploitation eliminates those concerns.

If both approaches are valid, it is important for the organization to understand the limitations of the tests and what still needs to be tested manually or through alternative means (such as the good old checklist?). This also brings up another question we should look at during this research: how do we integrate the findings of all these testing approaches to provide a unified view of the state of the security controls? That’s something for another post.
