Tuesday, July 31, 2018

From my Gartner Blog - Gartner Security and Risk Management Summit Brazil – 2018

The Gartner Security Summit Brazil is fast approaching and I’m happy to be part of it again. This time it’s even more special, for many reasons.

This is my first year as the chairman of the conference. It’s very rewarding to work on the content that will be delivered, selecting analysts and external speakers. I’m happy to have Anton coming this year. He has quite a fan base there; I’m sure they will all be excited to attend his sessions!

I was also able to bring two very interesting external speakers:

  • Dr. Deltan Dallagnol – One of the prosecutors working on the famous “Carwash Operation task force”
  • Dr. Jessica Barker – The human aspects of information security have always fascinated me. Dr. Barker is one of the specialists in this field, and she’s bringing her perspective on why things are not as simple as “users are dumb”.

We’ll also have a stellar team of Gartner analysts there. You can check who’s coming here.

Of course, I have my own share of sessions too:

TUESDAY, 14 AUGUST, 2018 / 09:15 AM – 10:15 AM – Scaling Trust and Resilience — Cut the Noise and Enable Action (The opening Keynote)

TUESDAY, 14 AUGUST, 2018 / 01:45 PM – 02:30 PM – Roundtable: How Did You Start Your Organization’s Detection and Response Capabilities?

TUESDAY, 14 AUGUST, 2018 / 03:45 PM – 04:30 PM – An Overview of the Threat Landscape in Brazil

WEDNESDAY, 15 AUGUST, 2018 / 09:15 AM – 10:00 AM – CARTA Inspired Cases in Brazil

WEDNESDAY, 15 AUGUST, 2018 / 12:15 PM – 01:15 PM – CISO Circle Lunch: Lessons Learned in the Equifax Breach and Other Incidents

WEDNESDAY, 15 AUGUST, 2018 / 01:45 PM – 02:30 PM – Roundtable: Lessons From Using Managed Security Services

 

If you’re planning to attend, please come and say hi :-)

The post Gartner Security and Risk Management Summit Brazil – 2018 appeared first on Augusto Barros.



from Augusto Barros https://ift.tt/2OwjgDR
via IFTTT

Tuesday, April 17, 2018

From my Gartner Blog - Threat Simulation Open Source Projects

It’s crazy how many (free!) OSS projects are popping up for threat and attack simulation! We are working on research about Breach and Attack Simulation (BAS) tools, and we’ll certainly mention these projects, but I thought it would be valuable to provide a list of links on the blog as well. Here are all the projects that I’ve managed to track in the past few weeks.

So what? There’s no excuse not to run some of these and see how your environment and your detection and response practices react. Go ahead and try some of them :-)

  • Invoke-Adversary – Simulating Adversary Operations – Windows Security
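Under the hood, these projects all automate the same basic loop: execute a benign action that mimics an adversary technique, record what happened, and then check whether your detection stack noticed. A minimal sketch of that idea in Python (the technique names and commands here are illustrative stand-ins, not taken from any of the projects above):

```python
import subprocess
import sys
from datetime import datetime, timezone

# Illustrative "simulations": harmless commands standing in for adversary
# techniques. Real projects ship curated libraries of technique tests.
SIMULATIONS = {
    "discovery/system-info": [sys.executable, "-c",
                              "import platform; print(platform.uname())"],
    "discovery/process-info": [sys.executable, "-c",
                               "import os; print(os.getpid())"],
}

def run_simulations(simulations):
    """Execute each benign action and record the result, so it can later
    be compared against what the SIEM/EDR actually detected."""
    results = {}
    for name, cmd in simulations.items():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = {
            "ran_at": datetime.now(timezone.utc).isoformat(),
            "returncode": proc.returncode,
            "output": proc.stdout.strip(),
        }
    return results

if __name__ == "__main__":
    for name, result in run_simulations(SIMULATIONS).items():
        print(f"{name}: rc={result['returncode']}")
```

The interesting part is not the execution, of course, but the comparison afterwards: which of the recorded actions produced an alert, and which went unnoticed.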

The post Threat Simulation Open Source Projects appeared first on Augusto Barros.



from Augusto Barros https://ift.tt/2vqjKWA
via IFTTT

Wednesday, April 11, 2018

From my Gartner Blog - Big data And AI Craziness Is Ruining Security Innovation

I don’t care if you use Hadoop or grep+Perl scripts. If you can demonstrate enough performance to do what you claim you can do, that’s what matters to me from a backend point of view. Now, can you show me that your tool does what it should do better than your competitors?

There is a trend in the messages I’ve been hearing during vendor briefings over the past few months. Vendors spend a lot of time talking about how great their architecture is, all those Hadoop stack components so beautifully integrated, showing how aligned they are with the latest in data management, machine learning and analytics. They are proud of the stuff under the hood. But, very often, there are no verifiable claims about effectiveness.

This is getting close to the insanity level. “We have AI.” “We are Hadoop-based.” “We do ML and deep learning.” It’s as if the technology and techniques being used are the only things to look for, not the results! This may work to lure the VCs, but I cannot see how anyone would buy something that uses all this cool technology for…what, exactly?

You see advanced analytics that provide “confidence levels” that never change based on user feedback. Crazy visualizations that don’t tell you anything and could easily be replaced by a simple table view. “Deep learning” for matching TI indicators to firewall logs. The list is endless.
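A confidence level that never moves with analyst feedback isn’t analytics; it’s a constant. And updating it is not exotic: a Beta-distribution update over true/false-positive verdicts is a textbook way to do it. A sketch (my own illustration, not any vendor’s actual method):

```python
class AlertConfidence:
    """Track confidence that a detection rule's alerts are true positives,
    updated from analyst verdicts via a Beta(alpha, beta) posterior."""

    def __init__(self, prior_true=1, prior_false=1):
        self.alpha = prior_true   # pseudo-count of true positives
        self.beta = prior_false   # pseudo-count of false positives

    def record_verdict(self, is_true_positive):
        """Fold one analyst triage verdict into the posterior."""
        if is_true_positive:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def confidence(self):
        # Posterior mean: expected probability the next alert is real.
        return self.alpha / (self.alpha + self.beta)

rule = AlertConfidence()
for verdictt in [True, False, False, False]:  # analyst triage outcomes
    rule.record_verdict(verdictt)
print(round(rule.confidence, 2))  # → 0.33; drops as false positives pile up
```

Ten lines of code, and the number on the screen now actually responds to what the analysts tell the tool.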

My concern with this craziness is that vendors are mixing priorities here; they want to show they are using the latest techniques, but they are not worried about showing how effective those techniques are. There are so many attempts to be the next “next-gen,” but not enough attempts to help organizations solve their problems. This is killing innovation in security. I want to see how your tool makes threat detection 10x better, not that you can process 10x more data than your competitor.

There are cases where performance and capacity bottlenecks are the main pain point of an industry. Think SIEM before they started moving away from RDBMS, for example. But this is not always true. Now we see vendors happy to claim their products are based on Big Data technologies, but the use cases don’t require more than a few hundred megabytes of data stored. Stop that nonsense.

If you’re getting into this industry now, do so with a product that works better than what organizations already have in place: finding more threats, faster, and using fewer resources during detection and response. If your next-gen technology can’t do that, it’s just a toy. And the message I hear from our clients is clear: we don’t want another toy, we want something that makes our lives easier.

 

The post Big data And AI Craziness Is Ruining Security Innovation appeared first on Augusto Barros.



from Augusto Barros https://ift.tt/2qn5KIm
via IFTTT

Wednesday, March 7, 2018

From my Gartner Blog - The Virtual Patch Analyst

Is there a need, or place for a “virtual patch analyst”?

If you look at our guidance on vulnerability management, you’ll see that one of the key components we suggest our clients consider is preparing mitigation actions for when immediate vulnerability remediation is not possible. We often see organizations scrambling to do this because they haven’t spent time in advance building the process, and they don’t have a menu of prepared mitigations to use. Those could include NIPS, WAFs, etc., but how many would be comfortable rushing the implementation of a “block” signature on those?


Normally this wouldn’t require an FTE, but big organizations could in fact have enough work here to justify one. Keep in mind there are many mitigation options: NIPS, WAF, HIPS, vendor workarounds, application control, additional monitoring, etc. So, one of the challenges for such a role to exist is the broad skill set required. Someone capable of understanding the implications of tweaking SMB protocol configuration on Windows and, at the same time, able to write a WAF signature? Hard to find, but not impossible.

Even if the complete skill set to create the mitigation actions is hard to find in a single professional, there’s still a lot of work around coordination and process management. The virtual patch analyst may not need all those skills, just a basic understanding of what is being done in each case. The bulk of the work is maintaining the menu of options, getting the right people engaged to develop them and coordinating the process when one needs to be implemented. Having such a role as part of a vulnerability management team is something a big enterprise could do to ensure unacceptable risks are mitigated while a definitive solution is not available.
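The “menu of prepared mitigations” doesn’t need to be fancy; even a simple structured catalog, mapping vulnerability classes to pre-approved options and their owners, captures most of the coordination value. A hypothetical sketch (the classes, controls and team names are all illustrative):

```python
# Hypothetical mitigation catalog: vulnerability class -> prepared options.
# Each option names the control, the team that owns it, and whether it has
# been pre-approved for emergency deployment.
MITIGATION_MENU = {
    "smb-remote-exec": [
        {"control": "HIPS signature", "owner": "endpoint-team",
         "pre_approved": True},
        {"control": "Disable SMBv1 via GPO", "owner": "windows-team",
         "pre_approved": False},
    ],
    "web-sqli": [
        {"control": "WAF virtual patch", "owner": "appsec-team",
         "pre_approved": True},
        {"control": "Additional query logging", "owner": "soc",
         "pre_approved": True},
    ],
}

def emergency_options(vuln_class):
    """Return only the mitigations already cleared for immediate use."""
    return [m for m in MITIGATION_MENU.get(vuln_class, [])
            if m["pre_approved"]]

print([m["control"] for m in emergency_options("web-sqli")])
```

When the CVSS 10 drops on a Friday afternoon, the virtual patch analyst’s job is to pull from this menu and engage the listed owners, not to invent a mitigation from scratch.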

Is there anyone out there working in such a role? I would love to hear more about it!

 

The post The Virtual Patch Analyst appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2FnCdUl
via IFTTT

Monday, February 26, 2018

From my Gartner Blog - It’s Not (Only) That The Basics Are Hard…

While working on our research about testing security practices, and also about BAS tools, I’ve noticed that a common question about adding more testing is “why not put some real effort into doing the basics instead of yet another security test?” After all, there is no point in looking for holes when you don’t even have a functional vulnerability management program, right?

But the problem is not about not doing the basics; it is about making sure the basics are in place! Doing the basics is OK, but verifying that they are working is not trivial.

Think about the top 5 of the famous “20 Critical Security Controls”:

  • Inventory of Authorized and Unauthorized Devices
  • Inventory of Authorized and Unauthorized Software
  • Secure Configurations for Hardware and Software
  • Continuous Vulnerability Assessment and Remediation
  • Controlled Use of Administrative Privileges

How do you know your processes to maintain devices and software inventories are working? What about the hardening, vulnerability management and privileged access management processes? How confident are you that they are working properly?

If you think about the volume and frequency of changes in the technology environment of a big organization, it’s easy to see how the basic security controls can fail. Of course, good processes are built with verification and validation steps to catch exceptions and mistakes, but they still happen. This is a base rate problem: with the complexity and high number of changes in the environment, even the best process out there will leave a few things behind. And when it is about security…the “thing left behind” may be a badly maintained CMS exposed to the Internet, an unpatched CVSS 10 vulnerability, or a credential with excessive privileges and a weak (maybe even default!) password.
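The base rate problem is easy to quantify. Even a control process with a 99.9% success rate, applied to the change volume of a large environment, leaves behind a steady stream of exposures (the numbers below are illustrative):

```python
def expected_misses(changes_per_year, process_success_rate):
    """Expected number of changes the control process fails to catch."""
    return changes_per_year * (1 - process_success_rate)

# Illustrative: a large enterprise making 50,000 changes a year with a
# very good (99.9% effective) verification process still leaves roughly
# 50 unverified changes exposed, every single year.
print(round(expected_misses(50_000, 0.999)))
```

Any one of those ~50 leftovers can be the exposed CMS or the default password that an attacker finds first.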

I’ve seen many pentests where the full compromise was performed by exploiting those small mistakes and misconfigurations. The security team gets a report with a list of things to address that were really exceptions to processes that are doing a good job (again, you may argue that they are not doing a good job, but this is the point where I just say there’s no such thing as a perfect control). So they clean those things up, double-check the controls and think “this will definitely never happen again!”, only to see the next test, a year later, also succeed by exploiting a similar but different combination of unnoticed issues.

And that’s one of the main value drivers for BAS. Choosing to deploy a tool like that is to recognize that even good controls and processes will eventually fail, and to put in place something that will continuously try to find the issues left behind. By doing that in an automated manner you can cover the entire* environment consistently and very frequently, reducing the time those issues are exposed to real attackers. Is it another layer of control? Yes, it is. But it’s an automated layer that keeps the overhead to a minimum. If your basics are indeed working well, the findings should not be overwhelming to the point of becoming a distraction.

 

* – You may catch the funny gap in this rationale…you may also end up failing because the BAS tool is not checking the entire environment, due to an issue with inventory management. Or the tests are not working as intended because they are being blocked by a firewall that should have an exception rule for the tool. Yes, using BAS is also a control, so it may fail too!

 

The post It’s Not (Only) That The Basics Are Hard… appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2F82kSk
via IFTTT

Thursday, February 22, 2018

From my Gartner Blog - SOAR paper is out!

Anton beat me this time on blogging about our new research, but I’ll do it anyway :-)

Our document about Security Orchestration, Automation and Response (SOAR) tools includes some interesting findings. Anton provided some quotes on his post, but I’ll mention some of my favorites too:

  • SIEM tools are often used to aggregate multiple sources of information, but are limited in their ability to query additional data sources and verification services after an initial set of conditions are met. The usual approach is to do as much as possible with that set of conditions and then provide the alert to an analyst for triage, where those additional queries take place.
    However, when the initial conditions set (whether via rules or algorithms, such as machine learning) generate too many alerts, the use case can be infeasible due to the high cost of the manual steps analysts require for triage. The ability to automate postalert queries, such as submitting indicators of compromise (IOCs) to TI services or even artifacts to external sandboxes, allows organizations to implement more threat detection use cases with a high number of initial alerts. (Some of the noisy detection use cases actually deliver valuable insights for as long as they can be quickly triaged.) The automated triage by SOAR effectively acts as the remaining stages of the multistage detection process.

 

  • Security alert triage, investigation and response are often performed in multistep processes, with new information and evidence being gathered or generated continuously. Organizations also need to record the actions taken for each alert or incident, for reasons varying from simple operations management or knowledge management all the way to auditor requests and compliance requirements. Some small SOCs would usually try to store all that data in simple repositories such as file shares or spreadsheets. However, most of them will quickly realize that a system capable of recording the data in a structured format, usually while controlling the process workflow, is required to handle the increasing volume and complexity.

 

  • Alert triage and incident response are practices that rely on multiple deployed security tools (most often SIEM and EDR tools), including external services such as sandboxes and TI service portals. Without integration between those tools, the analyst would usually resort to inefficient copy and paste from one user interface to the other, which can introduce its own kind of configuration errors. Also, when operating in an incident, analysts are pushed for time and under a lot of pressure, which also can lead to mistakes.
    Notably, such inefficiencies don’t just reduce productivity, but also increase staff burnout and make staff retention harder. SIRP tools provided guidance to the analyst about which steps to take and a centralized location to record the data. However, the tools were still essentially manual.
    With the addition of orchestration and automation to SIRP, these tools moved from records and documentation management to a more central role in security operations. The process workflow documented in the tool is no longer used only as guidance to the analysts. O&A moves these tools to an active role in performing tasks of those processes, and occasionally the entire end-to-end process. Based on Gartner for Technical Professionals inquiry data, the most visible tools covering both SIRP and O&A spaces today are Phantom Cyber, Demisto, IBM Resilient, ServiceNow SecOps and Swimlane.
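The post-alert automation described in the first excerpt reduces to a simple pattern: for each alert, run the enrichment queries automatically and only hand the alert to a human when the enrichment crosses a threshold. A sketch with a stubbed TI lookup (the scores, threshold and lookup table are illustrative, not any product’s API):

```python
# Stubbed threat-intelligence lookup; a real SOAR playbook would call a
# TI service or sandbox API here. IPs and scores are illustrative.
TI_SCORES = {"203.0.113.7": 0.9, "198.51.100.5": 0.1}

def enrich(alert):
    """Automated post-alert query: attach the worst TI score among the
    alert's indicators of compromise."""
    alert["ti_score"] = max(
        (TI_SCORES.get(ioc, 0.0) for ioc in alert["iocs"]), default=0.0
    )
    return alert

def triage(alerts, escalate_threshold=0.5):
    """Auto-handle low-score alerts; escalate the rest to an analyst."""
    return [a for a in map(enrich, alerts)
            if a["ti_score"] >= escalate_threshold]

alerts = [
    {"id": 1, "iocs": ["203.0.113.7"]},
    {"id": 2, "iocs": ["198.51.100.5"]},
    {"id": 3, "iocs": []},
]
print([a["id"] for a in triage(alerts)])  # only the high-score alert survives
```

This is exactly why noisy detection use cases become feasible: the expensive manual step now runs only on the small fraction of alerts that survive the automated stage.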

 

And don’t forget to PROVIDE YOUR FEEDBACK to the paper via http://surveys.gartner.com/s/gtppaperfeedback

The post SOAR paper is out! appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2BJpmwA
via IFTTT

Wednesday, February 14, 2018

From my Gartner Blog - BAS and Red Teams Will Kill The Pentest

With our research on testing security methods and Breach and Attack Simulation (BAS) tools, we ended up with an interesting discussion about the role of the pentest. I think we can risk saying that pentesting, as it is today, will cease to exist (I’ll avoid the trap of saying “pentesting is dead”, ok? :-)).

Let me clarify things here before everyone starts to scream! Simple pentesting, for pure vulnerability finding goals and with no intent to replicate threat behavior, will vanish. This is different from the pentest that many people will prefer to call “red team exercises”, those very high quality exercises where you really try to replicate the approach and methods of real threats. That approach is in fact growing, and that growth is one of the factors that will kill the vanilla pentest.

But to kill the pentest we need pressure from two sides. The red team is replacing the pentest on the high maturity side, but what about the low maturity side? Well, that’s where vulnerability assessments and BAS come into play.

If you look at how pentests are performed today, discounting the red team style of exercises, you’ll see that it’s not very different from a good vulnerability assessment. But it is still different, because it involves exploiting vulnerabilities, and that exploitation can move the assessor to another point in the network, which can then be used for another round of scanning and exploitation. And that’s where BAS tools come into play.

BAS automates the simple pentest, performing the basic cycle of scan/exploit/repeat-until-everything-is-owned. If you can do that with a simple click of a button, why would you use a human to do it? The tool can ensure consistency, provide better reporting and do it faster. Not to mention requiring fewer skills (you don’t even need to know how to use Metasploit!). So, with BAS, you either go for human tests because you want a red team, or you use the tool for the simple style of testing.
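That scan/exploit/repeat cycle is mechanical enough to express in a few lines, which is exactly why it automates so well. A toy model of the loop over an abstract network graph (entirely illustrative; hosts and "exploits" are just labels):

```python
def bas_cycle(network, exploitable, start):
    """Toy BAS loop: from each owned host, 'scan' for reachable neighbors
    and 'exploit' the vulnerable ones, pivoting and repeating until no
    new host can be owned.

    network: host -> set of hosts reachable from it
    exploitable: set of hosts with an exploitable weakness
    """
    owned = {start}
    frontier = [start]
    while frontier:
        host = frontier.pop()
        for neighbor in network.get(host, set()):       # scan step
            if neighbor in exploitable and neighbor not in owned:
                owned.add(neighbor)                     # exploit step
                frontier.append(neighbor)               # pivot, repeat
    return owned

network = {
    "attacker": {"web01"},
    "web01": {"app01", "db01"},
    "app01": {"db01"},
}
exploitable = {"web01", "db01"}
print(sorted(bas_cycle(network, exploitable, "attacker")))
# → ['attacker', 'db01', 'web01']
```

A human pentester adds creativity on top of this loop; for the loop itself, the tool is simply faster and more consistent.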

But, you may argue, not everyone will buy and deploy those tools, so there’s still room for service providers selling basic pentesting. Well…no! BAS will not be offered only as something you buy and deploy in your environment. It will also, like all the other security tools, be offered as SaaS. With that, you don’t need to buy and deploy it anymore; you can “rent it” for a single exercise. This is simpler than hiring pentesters, and provides better results (again, I’m starting to sound repetitive, but excluding the really great pentests…). So, why would you hire people to do it?


 

In the future, your options for testing your security will be vulnerability scanning, BAS or red teaming. Each one with specific objectives, advantages and disadvantages, but there’s no need for people running basic pentests anymore.

If you currently use those simple pentests, do you see your organization eventually moving to this new scenario? If not, I’d love to know why!

 

The post BAS and Red Teams Will Kill The Pentest appeared first on Augusto Barros.



from Augusto Barros http://ift.tt/2EtszTs
via IFTTT