As I was looking for an old email in my archives, I stumbled on discussions about a security incident that happened almost 13 years ago. That was the time when, well, there's no other way of saying it... I was hacked.
Thursday, March 4, 2021
Friday, October 9, 2020
(Cross posted from the Securonix Blog) Vulnerability management is one of the most basic security hygiene practices organizations must have in place to avoid being hacked. However, being a primary security control doesn't make it simple to implement successfully. I used to cover VM in my Gartner days, and it was sad to see how many organizations were not doing it properly.
Many security professionals see VM as a boring topic, usually seeing it simply as a "scan and patch" cycle. Although the bulk of a typical VM program may indeed be based on the processes of scanning for vulnerabilities and applying patches, there are many other things that need to be done so it can deliver the expected results.
One of the most important pieces of it is the prioritization of findings. It is clear to most organizations that patching every open vulnerability is just not feasible. If you can't patch everything, what should you patch first? There are many interesting advancements in this area. What used to be based only on the severity of the vulnerability (the old CVSS score) is now a more sophisticated process that leverages multiple data points, including threat intelligence. The EPSS research by Kenna Security is a great example of how far the practice of prioritizing vulnerabilities has evolved when compared to the old CVSS-only times.
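As a toy illustration of this multi-factor prioritization (this is not the actual EPSS model; the weighting, field names and sample scores below are made-up assumptions), combining severity with exploitation likelihood and asset value can reorder the patch queue dramatically:

```python
# Hypothetical prioritization sketch: rank open vulnerabilities by
# blending CVSS severity with an EPSS-style exploitation probability
# and the criticality of the affected asset. All numbers are invented.

def priority_score(cvss: float, epss: float, asset_criticality: float) -> float:
    """Blend severity (0-10), exploit probability (0-1) and asset value (0-1)."""
    return (cvss / 10.0) * epss * asset_criticality

vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.02, "asset": 0.5},  # severe, rarely exploited
    {"cve": "CVE-B", "cvss": 7.5, "epss": 0.90, "asset": 1.0},  # lower CVSS, active exploitation
]

ranked = sorted(
    vulns,
    key=lambda v: priority_score(v["cvss"], v["epss"], v["asset"]),
    reverse=True,
)
```

Note how the lower-severity vulnerability with a high exploitation probability on a critical asset jumps ahead of the 9.8 CVSS finding, which is exactly the shift away from severity-only ranking.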
But even when you are able to decide what to patch first, there are also cases where the remediation is not simply applying a patch. Some vulnerabilities involve not only a bug, but also other issues such as the existence of legacy software and protocols in the environment. These situations usually require a more complex approach, and that's where an additional component of the VM process, compensating controls, becomes important.
Compensating controls are used to address the risk of a vulnerability while the full remediation cannot be applied. Using an IPS, for example, is a typical compensating control. You can use them when you cannot apply the remediation, such as when a patch is not available, or to mitigate the risk until you are comfortable enough (usually after testing is done, during a maintenance window) to apply it. We usually see security controls that can avoid or reduce the impact of vulnerability exploitation as the ideal candidates for compensating controls, but there is something I always like to bring up during this discussion: Monitoring.
Think about it for a second. You have an open vulnerability that you still cannot patch. The exploit is available, as well as a lot of information about how it is used. Even if you cannot avoid exploitation, you can use all this information to build a security monitoring use case focused on the exploitation of this specific vulnerability. You know it is there, and that there is a chance of it being exploited, so why not put something together to look for that exploitation? You can prioritize the alerts generated by this use case, as you know you are currently vulnerable to that type of attack.
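A minimal sketch of such a use case, assuming a hypothetical web vulnerability (the indicator regex, log format and priority labels are illustrative, not a real signature):

```python
# Sketch of a monitoring use case targeting one known open vulnerability:
# flag log lines matching an exploitation indicator and escalate them,
# because we know we are currently exposed to this attack.
# The indicator pattern below is made up for illustration.

import re

EXPLOIT_INDICATOR = re.compile(r"/cgi-bin/vuln\.sh\?cmd=", re.IGNORECASE)

def triage(log_lines):
    """Return escalated alerts for lines matching the exploitation indicator."""
    alerts = []
    for line in log_lines:
        if EXPLOIT_INDICATOR.search(line):
            # Known-vulnerable system: escalate instead of queueing as low priority
            alerts.append({"line": line, "priority": "critical"})
    return alerts
```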
A great example of using security monitoring as part of the VM process is what is happening with the Windows Zerologon vulnerability (CVE-2020-1472). The issue is complex and requires more than just applying a patch. Our VP of Threat Research, Oleg Kolesnikov, produced a great write-up about the details as well as exploitation and detection variants. In summary, Microsoft has provided a patch for the immediate problem, but some third-party systems may still use an older, vulnerable version of Netlogon secure channel connections. To avoid breaking functionality of existing systems, Microsoft has introduced new events in their logs to identify the use of these older versions, and signaled they will move to an enforcement mode that will no longer accept them after February 2021.
This is where aligning monitoring with the remediation process becomes so important. The new events added by Microsoft can help identify attack attempts and track other vulnerable systems on the network. A pre-established process to coordinate the use of monitoring tools and infrastructure as an additional compensating control for VM can help in situations like this, where the plan to handle a vulnerability also requires monitoring activities.
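As a sketch of what that coordination could look like in code: after the August 2020 patch, domain controllers emit Netlogon-related events (IDs 5827 through 5831 in the System log) for denied or allowed vulnerable secure channel connections, with 5829 flagging connections that are still allowed while vulnerable. The parsed-event structure below is an assumption for illustration, not a real Windows API:

```python
# Sketch: collect machine accounts appearing in the Netlogon events
# Microsoft added for CVE-2020-1472 (event IDs 5827-5831 cover denied or
# allowed vulnerable Netlogon secure channel connections). The dict-based
# event format is an assumed, simplified parse of the Windows System log.

VULNERABLE_NETLOGON_EVENT_IDS = {5827, 5828, 5829, 5830, 5831}

def find_vulnerable_hosts(events):
    """Return machine accounts seen in vulnerable Netlogon connection events."""
    return sorted({e["machine"] for e in events
                   if e["event_id"] in VULNERABLE_NETLOGON_EVENT_IDS})

sample = [
    {"event_id": 5829, "machine": "LEGACY01$"},  # vulnerable connection allowed
    {"event_id": 4624, "machine": "WS02$"},      # unrelated logon event
]
```

A list like this feeds both the monitoring side (alert on attempts) and the remediation side (track which third-party systems still need attention before enforcement mode).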
Monday, September 21, 2020
- Testing and QA. We suck at effectively testing detection content. Most detection tools have no capabilities to help with it. Meanwhile, the software development world has robust processes and tools to test what is developed. There are, however, some interesting steps in that direction for detection content. BAS tools are becoming more popular and integrated with detection tools, so the development of new content can be connected to testing scenarios performed by those tools. Just like automated test cases for apps, but for detection content. Proper staging of content from development to production must also be possible. Full UAT or QA environments are not very useful for threat detection, as it's very hard and expensive to replicate the telemetry flowing through production systems just for testing. But the production tools can have embedded testing environments for content. The Securonix platform, for example, has introduced the Analytics Sandbox, a great way to test content without messing with existing production alerts and queues.
- Effective requirements gathering processes. Software development is plagued by developers envisioning capabilities and driving the addition of new features. It's a well-known problem in that realm, and roles and practices have been developed to properly move the gathering of requirements to the real users of the software. Does it work for detection content? I'm not sure. We see "SIEM specialists" writing rules, but are they writing rules that generate the alerts the SOC analysts are looking for? Or looking for the activities the red team has performed in their exercises? Security operations groups still operate with loosely defined roles, and for many organizations the content developers are the same people looking at the alerts, so the problem may not be that evident for everyone. But as teams grow and roles become more distributed, it will become a big deal. This is also important when so much content is provided by tool vendors or even content vendors. Some content does not need direct input from each individual organization; we do not have many opportunities to provide our requirements to OS developers, for example, but OS users' requirements are generic enough to work that way. Detection content for commodity threats is similar. But when dealing with threats more specific to the business, the right people to provide the requirements must be identified and connected to the process. Doing this continuously and efficiently is challenging, and very few organizations have consistent practices to do it.
- Finally, embedding the toolset and infrastructure into the DDLC to make it really DaaC. Here's where my post is very aligned to what Anton initially raised. Content for each tool is already code, but the setup and placement of the tools themselves is not. There's still a substantial amount of manual work to define and deploy log collection, network probes and endpoint agents. And that setup is usually brittle, static and detached from content development. Imagine you need to deploy some network-based detection content and find out there's no traffic capture setup for that network; someone will have to go there and add a tap, or configure something to start capturing the data you need for your content to work. With more traditional IT environments the challenge is still considerable, but as we move to cloud, devops-managed environments, these prerequisite settings can also be incorporated as code in the DDLC.
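The testing gap described in the list above can be pictured with a toy "detection rule as code" example: a rule function plus assertion-style tests, mirroring automated test cases (or BAS-driven scenarios) for application code. The rule logic and threshold are deliberately simplistic assumptions:

```python
# Toy detection rule treated as code under test. The threshold and logic
# are illustrative only; real content would be validated against replayed
# telemetry or an embedded testing environment such as an analytics sandbox.

def detects_brute_force(failed_logins_per_minute: int, threshold: int = 20) -> bool:
    """Alert when failed logins in a one-minute window reach the threshold."""
    return failed_logins_per_minute >= threshold

def test_detects_brute_force():
    # A known-bad scenario (e.g. BAS-simulated attack) must alert...
    assert detects_brute_force(50) is True
    # ...while normal background noise must stay quiet.
    assert detects_brute_force(3) is False
```

Content that ships with tests like these can be promoted from development to production with some confidence, instead of being dropped straight into the live alert queue.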
Friday, September 11, 2020
But there was a question that popped up a few times that indicates an interesting trend in the market: “A SIEM? Isn’t it old technology?”. No, it is not. It may be an old concept, but definitely not “old technology”.
Look at the two pictures below. What do they show?
Both show cars. But can we say the Tesla is “old technology”? Notice that the basic idea behind both is essentially the same: transportation. But this, and the fact that they both have four wheels, is probably all they have in common. The same is true for the many SIEMs we’ve seen in the market over the past twenty or so years.
Here is the barebones concept of a SIEM:
How this is accomplished, as well as the scale of things, have changed dramatically since ArcSight, Intellitactics and netforensics days. Some of the main changes:
- Architecture. Old SIEMs were traditional software stacks running on relational databases and with big and complex fat clients for UI. Compare this with the modern, big data powered SaaS systems with sleek web interfaces. Wow!
- Use cases. What were we doing with the SIEMs in the past? Some reports, such as “top 10 failed connection attempts” or some other compliance driven report. Many SIEMs had been deployed as an answer to SOX, HIPAA and PCI DSS requirements. Now, most SIEMs are used for threat detection. Reporting, although still a thing, is far less important than the ability to find the needle in the haystack and provide an alert about it.
- Volume. SIEM sizing used to be an exercise in a few events per second and a few gigabytes. With the need to monitor chatty sources such as EDR, NDR and cloud applications, the measures are orders of magnitude higher. This changes the game in terms of architecture (cloud is the new normal) and also drives the need for better analytics; we can’t handle the old false positive rates with the current base rates of events.
- Threats. It was so easy to detect threats in the past. It was common to find single events that could be used to detect malicious actions. But attacks have evolved to a point where multiple events must be assessed, both in isolation and together as a pattern, to determine the existence of malicious intent.
- Analytics. Driven by the changes to threats, volume and use cases, the analytics capabilities of SIEM have also changed in a huge manner. While old SIEMs would give us some regex capabilities and simple AND/OR correlation, modern solutions will do that and far, far more. Enriched data is analyzed with modern statistics and ML algorithms, providing a way to identify the stealthiest threat actions.
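As a toy contrast with old-style AND/OR correlation, the list above can be illustrated with a simple per-entity baseline check: a z-score standing in for the much richer statistics and ML in modern platforms (the numbers and threshold are made up):

```python
# Toy illustration of the analytics shift: instead of a fixed rule, flag
# an entity whose activity today deviates strongly from its own history.
# A z-score is a crude stand-in for the enriched statistical and ML
# analysis performed by modern platforms.

import statistics

def is_anomalous(history, today, z_threshold=3.0):
    """Return True if today's count is far above the entity's baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return (today - mean) / stdev > z_threshold
```

With a baseline of roughly 10 daily events, a jump to 60 stands out while 12 does not, something a static "more than N events" rule tuned for one entity would miss for another.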
With all that in mind, does it still make sense to call these new Teslas of threat detection a “SIEM”? Well, if we still call a Tesla a car, why not keep the SIEM name?
However, differentiating between the old rusty SQL-based tool and the advanced analytics SaaS tools of modern days is also important. In my previous life as an analyst I would frequently laugh at the “Next Gen” fads created by vendors trying to differentiate. But I also have to say it was useful to provide a distinction between the old Firewall and what we now call NGFW. People know the implied difference in capabilities when we say NGFW. With that in mind, I believe saying NG-SIEM is not really a bad thing, if you consider all those differences I mentioned before. Sorry Gartner, I did it! :-)
So, old SIEM dead, long live the NG-SIEM? No, I don’t think we need to do that. But in conversations where you need to highlight the newer capabilities and more modern architecture, it’s certainly worth throwing the NG there.
Tesla owners can’t stop talking about how exciting their cars are. For us cybersecurity nerds, deploying and using a next-gen SIEM gives a similar thrill.
Monday, August 31, 2020
I’m very happy to announce today I’m starting my journey with Securonix!
I’ve spent the last five years working as an industry analyst, talking to thousands of clients and vendors about their challenges and solutions in security operations. During this time I was able to identify many common pain points and what vendors have been doing to address them. Some with success, some not so much.
Helping clients as an analyst is a great job. It gives you tremendous visibility into their challenges. But it is also somewhat limited in how much you can help them. So I ended up with many ideas and things I’d like to do, but with no right channel to deliver them.
That’s why I chose to join Securonix. Securonix has a great platform to deliver many capabilities that organizations need to tackle their threat detection and response problems. I first came into contact with Securonix before my Gartner life, and have been watching it grow and evolve since then. When we produced a UEBA solutions comparison back in 2016, it was the best of the batch. But it didn’t stop there.
A few years ago Gartner said SIEM and UEBA would eventually converge. Securonix didn’t miss the trend. Actually, it was one of the main drivers. UEBA vendors first appeared in the SIEM Magic Quadrant back in 2017. Securonix was already there as a Visionary. Actually it was the vendor with the most complete vision at that time. Since then it managed to improve its ability to execute, becoming one of the leaders in the space. It hasn’t missed the major trends since then, adding important capabilities and quickly adapting to offer a great cloud SIEM solution.
Good tools are extremely important to anyone who wants to make a dent in the incredible threat detection and response challenges we face. I’m excited to help with the evolution of the best security operations and analytics platform available today. You can watch this great journey here, on LinkedIn and on Twitter (@apbarros).
Friday, August 28, 2020
I’m sadly writing this as my last Gartner blog post! I’m moving to a new challenge. After years as an analyst, I decided it was time to get closer to delivering the initiatives that have been the focus of my research.
I’m immensely grateful for my time with Gartner. It has been a great experience and I had the opportunity to work with many bright people. I leave a special thank you to my mentor and main co-author, my great manager (thanks boss!) and my KIL (“Key Initiative Leader”, internal Gartner lingo).
Working as a Gartner analyst gives you the opportunity to go through incredible experiences. During the past five years, I was able to:
- Write groundbreaking research on my favorite topics in cybersecurity. It was very rewarding to find people out there building their strategic plans using some of my own words and adding the figures I drew to their slides.
- Deliver presentations to full audience rooms in many different places in the world.
- Provide advice to some of the major vendors in this industry, having very interesting conversations with their main executives.
- Discuss challenges and solutions with clients from all over the world and from many different industries. You just can’t imagine the crazy types of challenges they are facing out there! From exotic legal requirements to some very particular business characteristics, I have had many memorable calls during these years.
- Collaborate with very smart colleagues and have exciting (and, how can I say? “Lively”, maybe…) discussions about the future of cybersecurity.
- Chair the Security Summit in Brazil for two years, working with amazing people and putting together unforgettable events. I will definitely miss the experience of preparing and delivering the opening keynote there!
What I will miss most is experiencing those moments when you hear your client saying things like “that was the best advice I’ve ever heard”. Those are the moments that give an analyst a clear sense of purpose. I’m really grateful for being able to go through that as a Gartner analyst.
Thank you Gartner. Thank you my reader. And I hope you follow me back to my personal blog. I’ll still be there.
from Augusto Barros https://ift.tt/2EK5WLs
Friday, April 17, 2020
After finishing the wave of research that covered pentesting, monitoring use cases, SOAR and TI, I’m excited to start research for a net new document covering an exciting topic rarely covered in Gartner research: Open source tools! The intent is to look at the most popular open source tools used by security operations teams out there. Things like the ELK stack, Osquery, MISP and Zeek. What I’d like to cover in this new paper is:
- Why is the tool being used? Why not a commercial alternative?
- How is it being used? What is the role of the tool in the overall security operations toolset, what are the integrations in place?
- How much effort was put into implementing the tool? What about maintaining it?
- Is it just about using it, or is there some active participation in the development of the tool as well?
- What are the requirements to get value from this tool? Skills? Anything specific in terms of infrastructure or processes?
It is a fascinating topic, which brings a high risk of scope creep, so the lists of questions answered and tools covered are still quite fluid.
In the meantime, it would be nice to hear stories from the trenches; what are you using out there? Why? Was it picked just because it was free (I know, TCO, etc., but the software IS free…)? Or is it a cultural aspect of your organization? Do you believe it is actually better than the commercial alternatives? Why?
Lots of questions indeed. Please help me provide some answers!
from Augusto Barros https://ift.tt/2Kbxglh