Friday, October 9, 2020

Monitoring and Vulnerability Management

(Cross-posted from the Securonix Blog)

Vulnerability management (VM) is one of the most basic security hygiene practices organizations must have in place to avoid being hacked. However, being a primary security control doesn't make it simple to implement successfully. I used to cover VM in my Gartner days, and it was sad to see how many organizations were not doing it properly.

Many security professionals see VM as a boring topic, often reducing it to a "scan and patch" cycle. Although the bulk of a typical VM program may indeed be built around scanning for vulnerabilities and applying patches, many other things need to be done for the program to deliver the expected results.

One of the most important pieces is the prioritization of findings. It is clear to most organizations that patching every open vulnerability is simply not feasible. If you can't patch everything, what should you patch first? There have been many interesting advancements in this area. What used to be based only on the severity of the vulnerability (the old CVSS score) is now a more sophisticated process that leverages multiple data points, including threat intelligence. The EPSS research by Kenna Security is a great example of how much the practice of prioritizing vulnerabilities has evolved compared to the old CVSS-only days.
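
To make this concrete, here is a minimal sketch of what risk-based prioritization could look like, written in Python. The weights, field names and sample findings are illustrative assumptions, not EPSS itself or any product's scoring model; a real program would pull severity, exploit-likelihood and asset data from its scanners and CMDB.

# Hypothetical example: rank open findings by blending severity,
# exploit likelihood and asset criticality instead of using CVSS alone.

FINDINGS = [  # illustrative data, not real scan output
    {"cve": "CVE-A", "cvss": 9.8, "exploit_probability": 0.02, "asset_criticality": 2},
    {"cve": "CVE-B", "cvss": 7.5, "exploit_probability": 0.65, "asset_criticality": 5},
    {"cve": "CVE-C", "cvss": 5.3, "exploit_probability": 0.90, "asset_criticality": 4},
]

def risk_score(finding: dict) -> float:
    """Blend normalized CVSS, exploit probability and asset criticality.
    The weights are arbitrary assumptions; tune or replace the formula
    to match your own risk model."""
    severity = finding["cvss"] / 10.0            # normalize to 0-1
    likelihood = finding["exploit_probability"]  # EPSS-style probability, 0-1
    impact = finding["asset_criticality"] / 5.0  # 1-5 scale from a CMDB
    return 0.3 * severity + 0.5 * likelihood + 0.2 * impact

for f in sorted(FINDINGS, key=risk_score, reverse=True):
    print(f["cve"], round(risk_score(f), 2))

Note how the two findings with lower CVSS scores but much higher exploit likelihood jump ahead of the "critical" 9.8; that is exactly the shift from severity-only to risk-based prioritization.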

But even when you are able to decide what to patch first, there are cases where remediation is not simply applying a patch. Some vulnerabilities involve not only a bug but also other issues, such as legacy software and protocols still present in the environment. These situations usually require a more complex approach, and that's where an additional component of the VM process, compensating controls, becomes important.

Compensating controls are used to address the risk of a vulnerability while the full remediation cannot be applied. Using an IPS, for example, is a typical compensating control. You can use them when you cannot apply the remediation, such as when a patch is not available, or to mitigate the risk until you are comfortable enough to apply it (usually after testing is done, during a maintenance window). Security controls that can prevent or reduce the impact of vulnerability exploitation are usually seen as the ideal candidates, but there is something I always like to bring up during this discussion: Monitoring.

Think about it for a second. You have an open vulnerability that you cannot patch yet. The exploit is available, as well as a lot of information about how it is used. Even if you cannot prevent the exploitation, you can use all this information to build a security monitoring use case focused on the exploitation of this specific vulnerability. You know it is there, and you know there is a chance of it being exploited, so why not put something in place to look for that exploitation? You can also prioritize the alerts generated by this use case, as you know you are currently vulnerable to that type of attack.

A great example of using security monitoring as part of the VM process is what is happening with the new Windows Zerologon EP (ZEP) vulnerability (CVE-2020-1472). The issue is complex and requires more than just applying a patch. Our VP of Threat Research, Oleg Kolesnikov, produced a great write-up covering the details, the exploitation variants and detection. In summary, Microsoft has provided a patch for the immediate problem, but some third-party systems may still use an older, vulnerable form of Netlogon secure channel connections. To avoid breaking existing systems, Microsoft has introduced new events in its logs to identify the use of these older connections, and signaled that it will move to an enforcement mode that no longer accepts them after February 2021.

This is where aligning monitoring with the remediation process becomes so important. The new events added by Microsoft can help identify attack attempts and track other vulnerable systems on the network. A pre-established process to coordinate the use of monitoring tools and infrastructure as an additional compensating control for VM can help in situations like this, where the plan to handle a vulnerability also requires monitoring activities.
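
To make this concrete, here is a minimal sketch of such a vulnerability-specific monitoring use case, built around the Netlogon events Microsoft documented for CVE-2020-1472 (commonly listed as event IDs 5827 through 5831; verify them against Microsoft's own guidance). The JSON-lines export format and the field names are assumptions made for the example; in practice this logic would live in your SIEM as a rule or watchlist rather than a standalone script.

import json

# Event IDs Microsoft documented for vulnerable or denied Netlogon secure
# channel connections (check the CVE-2020-1472 guidance before relying on them).
NETLOGON_EVENT_IDS = {5827, 5828, 5829, 5830, 5831}

def find_netlogon_events(path):
    """Return records from a JSON-lines export of the Windows System log that
    match the Netlogon monitoring events. The 'EventID' and 'Computer' field
    names are assumptions about the export format, not a standard."""
    matches = []
    with open(path) as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("EventID") in NETLOGON_EVENT_IDS:
                matches.append(record)
    return matches

if __name__ == "__main__":
    for event in find_netlogon_events("system_log_export.jsonl"):
        # In practice these would feed a SIEM use case or a watchlist of
        # systems still using the older secure channel connections.
        print(event.get("Computer"), event.get("EventID"))

The point is less the script itself and more the pattern: the remediation plan names the telemetry to watch, so the monitoring team can turn it into prioritized detection content on day one.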

Monday, September 21, 2020

DDLC - Detection Development Life Cycle

Dr. Chuvakin has recently delivered another great blog post about "detection as code". I was glad to read it because it was the typical discussion we used to have in our brainstorming conversations at Gartner. It had a nice nostalgic feeling :-). But it also reminded me of my favorite paper from those times, "How To Develop and Maintain Security Monitoring Use Cases".

That paper describes a process framework organizations can use to identify and develop use cases for security monitoring. It was intentionally designed to be tool-neutral, so it could be used to develop SIEM rules, IDS signatures or any other type of content used by security monitoring tools. It was also built to mimic Agile development processes, to avoid the capital mistake of killing, with too much process, the agility required to adapt to threats. I had fun discussions with great minds like Alex Sieira and Alex Teixeira (what is it with Alexes and security?) while developing some of the ideas for that paper.

Reading the philosophical musings from Anton on "detection as code" (DaaC?), I realized that most of threat detection is already code. All the "content" covered by our process framework is developed and maintained as code, so I believe we are quite close, from a technology perspective, to DaaC. What I think we really need is a DDLC - Detection Development Life Cycle. In retrospect, I believe our paper would have been more popular if we had used that as a catchier title. Here's a free tip for the great analysts responsible for future updates ;-)

Anyway, I believe a few things are still missing before we get to real DaaC and a DDLC. Among them:
  • Testing and QA. We suck at effectively testing detection content. Most detection tools have no capabilities to help with it. Meanwhile, the software development world has robust processes and tools to test what is developed. There are, however, some interesting steps in that direction for detection content. BAS tools are becoming more popular and more integrated with detection tools, so the development of new content can be connected to testing scenarios performed by those tools. Just like automated test cases for applications, but for detection content (there's a small sketch of the idea after this list). Proper staging of content from development to production must also be possible. Full UAT or QA environments are not very useful for threat detection, as it's very hard and expensive to replicate the telemetry flowing through production systems just for testing. But the production tools can have embedded testing environments for content. The Securonix platform, for example, has introduced the Analytics Sandbox, a great way to test content without messing with existing production alerts and queues.
  • Effective requirements gathering processes. Software development is plagued by developers envisioning capabilities and driving the addition of new features on their own. It's a well-known problem in that realm, and that world has developed roles and practices to move requirements gathering to the real users of the software. Does it work for detection content? I'm not sure. We see "SIEM specialists" writing rules, but are they writing rules that generate the alerts the SOC analysts are looking for? Or that look for the activities the red team performed in its exercises? Security operations groups still operate with loosely defined roles, and in many organizations the content developers are the same people looking at the alerts, so the problem may not be evident to everyone. But as teams grow and roles become more distributed, it will become a big deal. This is also important when so much content is provided by tool vendors or even dedicated content vendors. Some content does not need direct input from each individual organization; we do not have many opportunities to provide our requirements to OS developers, for example, but OS users' requirements are generic enough for that model to work. Detection content for commodity threats is similar. But when dealing with threats more specific to the business, the right people to provide the requirements must be identified and connected to the process. Doing this continuously and efficiently is challenging, and very few organizations have consistent practices for it.
  • Finally, embedding the toolset and infrastructure into the DDLC to make it really DaaC. Here's where my post aligns closely with what Anton initially raised. Content for each tool is already code, but the setup and placement of the tools themselves is not. There's still a substantial amount of manual work to define and deploy log collection, network probes and endpoint agents. And that setup is usually brittle, static and detached from content development. Imagine you need to deploy some network-based detection content and find out there's no traffic capture set up for that network; someone will have to go there and add a tap, or configure something to start capturing the data you need for your content to work. In more traditional IT environments the challenge is still considerable, but as we move to cloud and DevOps-managed environments, these prerequisites can also be incorporated as code into the DDLC.
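
As mentioned in the first item above, here is a minimal sketch of what automated test cases for detection content could look like, in Python. The rule, the sample events and the field names are all hypothetical; the point is only that detection logic can be exercised by unit tests in a CI pipeline just like application code.

import unittest

def excessive_failed_logons(event):
    """Hypothetical detection 'content': alert when a single source racks up
    many failed logons in a ten-minute window. Real platforms use their own
    rule formats; this only illustrates the testing pattern."""
    return (
        event.get("action") == "logon_failure"
        and event.get("failed_count_10m", 0) >= 20
    )

class TestExcessiveFailedLogons(unittest.TestCase):
    def test_fires_on_brute_force_like_activity(self):
        simulated_attack = {"action": "logon_failure", "failed_count_10m": 35}
        self.assertTrue(excessive_failed_logons(simulated_attack))

    def test_stays_quiet_on_normal_noise(self):
        benign_event = {"action": "logon_failure", "failed_count_10m": 3}
        self.assertFalse(excessive_failed_logons(benign_event))

if __name__ == "__main__":
    unittest.main()

The simulated events could just as well come from a BAS tool run, closing the loop between content development and testing described in the first item.
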
There's still a lot to do before full DaaC and a comprehensive DDLC become a reality. But there is a lot of interesting work going on in this direction, pushed by the need for security operations to align with the DevOps environments they have to monitor and protect. Check the Analytics Sandbox as a good example. We'll certainly see more like this as we move closer to the vision of threat detection becoming more like software development.

Friday, September 11, 2020

NG SIEM?

An interesting result from changing jobs is seeing how people interpret your decision and how they view the company you’re moving to. I was happy to hear good feedback from many people regarding Securonix, reinforcing my pick for the winning car in the SIEM race.

But there was a question that popped up a few times that indicates an interesting trend in the market: “A SIEM? Isn’t it old technology?”. No, it is not. It may be an old concept, but definitely not “old technology”.

Look at the two pictures below. What do they show?

[Images: an old car and a Tesla]

Both show cars. But can we say the Tesla is “old technology”? Notice that the basic idea behind both is essentially the same: transportation. But that, and the fact they both have four wheels, is probably the only thing they have in common. The same is true for the many SIEMs we’ve seen in the market over the past twenty or so years.

Here is the barebones concept of a SIEM:

[Diagram: the barebones concept of a SIEM]

How this is accomplished, as well as the scale of things, has changed dramatically since the days of ArcSight, Intellitactics and netForensics. Some of the main changes:
  • Architecture. Old SIEMs were traditional software stacks running on relational databases, with big, complex fat clients for the UI. Compare that with the modern, big-data-powered SaaS systems with sleek web interfaces. Wow!
  • Use cases. What were we doing with SIEMs in the past? Some reports, such as “top 10 failed connection attempts” or some other compliance-driven report. Many SIEMs were deployed as an answer to SOX, HIPAA and PCI DSS requirements. Now, most SIEMs are used for threat detection. Reporting, although still a thing, is far less important than the ability to find the needle in the haystack and provide an alert about it.
  • Volume. SIEM sizing used to be an exercise measured in a handful of EPS and a few gigabytes. With the need to monitor chatty sources such as EDR, NDR and cloud applications, the numbers are now orders of magnitude higher (there’s a quick back-of-the-envelope calculation after this list). This changes the game in terms of architecture (cloud is the new normal) and also drives the need for better analytics; we can’t handle the old false positive rates with the current base rates of events.
  • Threats. It was so easy to detect threats in the past. It was common to find single events that could be used to detect malicious actions. But attacks have evolved to a point where multiple events must be assessed, both in isolation and together as a pattern, to determine the existence of malicious intent.
  • Analytics. Driven by the changes in threats, volume and use cases, the analytics capabilities of SIEMs have also changed dramatically. While old SIEMs would give us some regex capabilities and simple AND/OR correlation, modern solutions do that and far, far more. Enriched data is analyzed with modern statistics and ML algorithms, providing a way to identify even the stealthiest threat actions.
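
To illustrate the volume point above, here is a quick back-of-the-envelope calculation. The event rates and the average event size are made-up, illustrative numbers, not sizing guidance for any product.

# Rough ingest estimate: events per second x average event size x seconds per day.
# All numbers are illustrative assumptions, not sizing guidance.

AVG_EVENT_BYTES = 500  # assumed average size of a normalized event

def daily_ingest_gb(eps):
    """Approximate daily log volume in gigabytes for a given EPS rate."""
    return eps * AVG_EVENT_BYTES * 86_400 / 1_000_000_000

print(f"Legacy estate at 500 EPS: ~{daily_ingest_gb(500):.0f} GB/day")
print(f"With EDR, NDR and cloud at 50,000 EPS: ~{daily_ingest_gb(50_000):,.0f} GB/day")

Two orders of magnitude more data is what pushes the architecture to the cloud and makes the old false positive rates impossible to live with.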

With all that in mind, does it still make sense to call these new Teslas of threat detection a “SIEM”? Well, if we still call a Tesla a car, why not keep the SIEM name?

However, it is also important to differentiate between the old, rusty SQL-based tools and the modern advanced-analytics SaaS tools. In my previous life as an analyst I would frequently laugh at the “Next Gen” fads created by vendors trying to differentiate. But I also have to say it was useful to provide a distinction between the old firewall and what we now call the NGFW. People know the implied difference in capabilities when we say NGFW. With that in mind, I believe saying NG-SIEM is not really a bad thing, if you consider all the differences I mentioned above. Sorry Gartner, I did it! :-)

So, is the old SIEM dead, long live the NG-SIEM? No, I don’t think we need to go that far. But in conversations where you need to highlight the newer capabilities and more modern architecture, it’s certainly worth throwing the NG in there.

Tesla owners can’t stop talking about how exciting their cars are. For us cybersecurity nerds, deploying and using a next-gen SIEM gives a similar thrill.

Monday, August 31, 2020

I'm Joining Securonix

I’m very happy to announce that today I’m starting my journey with Securonix!


I’ve spent the last five years working as an industry analyst, talking to thousands of clients and vendors about their challenges and solutions in security operations. During this time I was able to identify many of the common pain points and what vendors have been doing to address them. Some with success, some not so much.


Helping clients as an analyst is a great job. It gives you tremendous visibility into their challenges. But it is also somewhat limited in how much you can actually help them. So I ended up with many ideas and things I’d like to do, but without the right channel to deliver them.


That’s why I chose to join Securonix. Securonix has a great platform to deliver many of the capabilities organizations need to tackle their threat detection and response problems. I first came into contact with Securonix before my Gartner life, and I have been watching it grow and evolve since then. When we produced a UEBA solutions comparison back in 2016, it was the best of the batch. But it didn’t stop there.


A few years ago Gartner said SIEM and UEBA would eventually converge. Securonix didn’t miss the trend; it was actually one of its main drivers. UEBA vendors first appeared in the SIEM Magic Quadrant back in 2017, and Securonix was already there as a Visionary, the vendor with the most complete vision at that time. Since then it has managed to improve its ability to execute, becoming one of the Leaders in the space. It hasn’t missed the major trends either, adding important capabilities and quickly adapting to offer a great cloud SIEM solution.


Good tools are extremely important to anyone who wants to make a dent in the incredible threat detection and response challenges we face. I’m excited to help with the evolution of the best security operations and analytics platform available today. You can follow this great journey here, on LinkedIn and on Twitter (@apbarros).





Friday, August 28, 2020

From my Gartner Blog - Goodbye!

I’m sadly writing this as my last Gartner blog post! I’m moving to a new challenge. After years as an analyst, I decided it was time to get closer to delivering the initiatives that have been the focus of my research.

I’m immensely grateful for my time with Gartner. It has been a great experience and I had the opportunity to work with many bright people. I leave a special thank you to my mentor and main co-author, my great manager (thanks boss!) and my KIL (“Key Initiative Leader”, internal Gartner lingo).

Working as a Gartner analyst gives you the opportunity to go through incredible experiences. During the past five years, I was able to:

  • Write groundbreaking research on my favorite topics in cybersecurity. It was very rewarding to find people out there building their strategic plans using some of my own words and adding the figures I drew to their slides.
  • Deliver presentations to packed rooms in many different places around the world.
  • Provide advice to some of the major vendors in this industry, having very interesting conversations with their main executives.
  • Discuss challenges and solutions with clients from all over the world and from many different industries. You just can’t imagine the crazy types of challenges they are facing out there! From exotic legal requirements to some very particular business characteristics, I have had many memorable calls during these years.
  • Collaborate with very smart colleagues and have exciting (and, how can I say? “Lively”, maybe…) discussions about the future of cybersecurity.
  • Chair the Security Summit in Brazil for two years, working with amazing people and putting together unforgettable events. I will definitely miss the experience of preparing and delivering the opening keynote there!

What I will miss most is experiencing those moments when you hear a client saying things like “that was the best advice I’ve ever heard”. Those are the moments that give an analyst a clear sense of purpose. I’m really grateful for being able to go through that as a Gartner analyst.

Thank you Gartner. Thank you my reader. And I hope you follow me back to my personal blog. I’ll still be there.

  


Friday, April 17, 2020

From my Gartner Blog - New Research: Open Source Tools!

After finishing the wave of research that covered pentesting, monitoring use cases, SOAR and TI, I’m excited to start research for a net new document on an exciting topic rarely addressed in Gartner research: open source tools! The intent is to look at the most popular open source tools used by security operations teams out there, such as the ELK stack, Osquery, MISP and Zeek. What I’d like to cover in this new paper:

  • Why is the tool being used? Why not a commercial alternative?
  • How is it being used? What is the role of the tool in the overall security operations toolset, and what integrations are in place?
  • How much effort was put into implementing the tool? What about maintaining it?
  • Is it just about using it, or is there some active participation in the development of the tool as well?
  • What are the requirements to get value from the tool? Skills? Anything specific in terms of infrastructure or processes?

It is a fascinating topic, which brings a high risk of scope creep, so the lists of questions to answer and tools to cover are still quite fluid.

In the meantime, it would be nice to hear stories from the trenches: what are you using out there? Why? Was it picked just because it was free (I know, TCO, etc., but the software IS free…)? Or is it a cultural aspect of your organization? Do you believe it is actually better than the commercial alternatives? Why?

Lots of questions indeed. Please help me provide some answers 🙂


Thursday, April 9, 2020

From my Gartner Blog - Developing and Maintaining Security Monitoring Use Cases

My favorite Gartner paper has just been updated to its 3rd version! “How to Develop and Maintain Security Monitoring Use Cases” was originally published in 2016 as a guidance framework for organizations trying to identify what their security tools should be looking for, and how to turn those ideas into signatures, rules and other content. This update brings even more ATT&CK references and a new batch of eye-candy graphics! So different from the original Visio-built graphics!

This is the anchor diagram from the doc, summarizing our framework:

[Diagram: the security monitoring use case development framework]

Some nice quotes from the doc:

“Some organizations create too much process overhead around use cases — agility and predictability are required. Processes must not be too complex because security monitoring requires fast and constant changes to align with evolving threats.”

“The efficiency and effectiveness of security monitoring are directly related to the appropriate implementation and optimization of the right use cases on the right security monitoring tools.”

“Do not simply enable everything that comes with the tools. A considerable part of that content may not be aligned with the organization’s priorities, or may not be applicable to its environment.”

“Make use case development similar to agile software development by being able to quickly implement or modify a use case to adapt to changing threat and business conditions.”

I hope you enjoy it, and let me know if you have the framework implemented in your organization. Please don’t forget to provide feedback about the paper here.

The next wave of research is about open source tools for threat detection and response, in parallel with interesting work on Breach and Attack Simulation.


Tuesday, March 31, 2020

From my Gartner Blog - New Research on Threat Intelligence and SOAR

Since my blogging whip has been gone, I haven’t been posting as frequently as I’d like, but I realized we had recently published new versions of some of our coolest research and I completely missed announcing them here! So let me talk a bit about them:

The first one is a big update to our Threat Intelligence research, conducted by Michael Clark. The paper is now called “How to Use Threat Intelligence for Security Monitoring and Incident Response”. It has a more specific scope and is more prescriptive in its guidance, providing a nice framework for those planning to start using TI in their detection and response processes.

The other one is a refresh of our paper about SOAR (Security Orchestration, Automation and Response), conducted by Eric Ahlm. It provides an overview of SOAR and how to assess your readiness for this technology according to your use cases.

I hope you enjoy the new papers. I’m also working on an update to my security monitoring use cases paper; it will hit the streets soon. Meanwhile, feel free to provide feedback about the papers above here.


Wednesday, January 29, 2020

From my Gartner Blog - Updated Paper on Penetration Testing and Red Teams

I finally managed to publish the update to my paper on pentesting, “Using Penetration Testing and Red Teams to Assess and Improve Security”. It has some small tweaks from the previous version, including some additional guidance around the role of Breach and Attack Simulation tools.

Questions about how to define the scope of penetration tests are very common in my conversations with clients. I always tell them it should be driven primarily by their objective for running the test. Surprisingly, many have problems articulating why they are doing it.

The discussion about comparing pentests with other forms of assessments is there too, although we also published a paper focused on the multiple test methods some time ago.

A few good pieces from the document:

“Research the characteristics and applicability of penetration tests and other types of security assessments before selecting the most appropriate one for the organization. Select a vulnerability assessment if the goal is to find easily identifiable vulnerabilities.”

“Definitions for security assessments vary according to the source, with a big influence from marketing strategies and the buzzword of the day. Some vendors will define their red team service in a way that may be identified as a pentest in this research, while vulnerability assessment providers will often advertise their services as a penetration test. Due to the lack of consensus, organizations hiring a service provider to perform one of the tests described below should ensure their definition matches the one used by the vendor”

“Pentests are often requested by organizations to identify all vulnerabilities affecting a certain environment, with the intent to produce a list of “problems to be fixed.” This is a dangerous mistake because pentesters aren’t searching for a complete list of visible vulnerabilities.”

Next in the queue is the monitoring use cases paper. That’s my favorite paper, and I’m excited to refresh it again. You’ll see it here soon!
