- Testing and QA. We suck at effectively testing detection content, and most detection tools offer no capabilities to help with it. Meanwhile, the software development world has robust processes and tools to test what is developed. There are, however, some interesting steps in that direction for detection content. BAS tools are becoming more popular and better integrated with detection tools, so the development of new content can be connected to testing scenarios performed by those tools. Just like automated test cases for apps, but for detection content. Proper staging of content from development to production must also be possible. Full UAT or QA environments are not very useful for threat detection, as it's very hard and expensive to replicate the telemetry flowing through production systems just for testing. But production tools can have embedded testing environments for content. The Securonix platform, for example, has introduced the Analytics Sandbox, a great way to test content without messing with existing production alerts and queues.
- Effective requirements gathering processes. Software development is plagued by developers envisioning capabilities and driving the addition of new features. It's a well-known problem in that realm, and they have developed roles and practices to properly move the gathering of requirements to the real users of the software. Does it work for detection content? I'm not sure. We see "SIEM specialists" writing rules, but are they writing rules that generate the alerts the SOC analysts are looking for? Or looking for the activities the red team has performed in their exercises? Security operations groups still operate with loosely defined roles, and in many organizations the content developers are the same people looking at the alerts, so the problem may not be that evident for everyone. But as teams grow and roles become more distributed, it will become a big deal. This is also important when so much content is provided by the tool vendors or even content vendors. Some content does not need direct input from each individual organization; we do not have many opportunities to provide our requirements to OS developers, for example, but OS users' requirements are generic enough to work that way. Detection content for commodity threats is similar. But when dealing with threats more specific to the business, the right people to provide the requirements must be identified and connected to the process. Doing this continuously and efficiently is challenging, and very few organizations have consistent practices to do it.
- Finally, embedding the toolset and infrastructure into the DDLC to make it really DaaC. Here's where my post is very aligned with what Anton initially raised. Content for each tool is already code, but the setup and placement of the tools themselves is not. There's still a substantial amount of manual work to define and deploy log collection, network probes and endpoint agents. And that setup is usually brittle, static and detached from content development. Imagine you need to deploy some network-based detection content and find out there's no traffic capture set up for that network; someone will have to go there and add a tap, or configure something to start capturing the data you need for your content to work. With more traditional IT environments the challenge is still considerable, but as we move to cloud, DevOps-managed environments, these prerequisite settings can also be incorporated as code in the DDLC.
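To make the "automated test cases, but for detection content" idea above concrete, here's a minimal sketch of what a DDLC pipeline step could look like. Everything here is hypothetical and for illustration only: the rule logic, the event schema and the replayed event sequence are made up, not taken from any real tool. The point is simply that a detection rule expressed as plain code can be exercised by a test harness, the same way application code is:

```python
# A detection rule expressed as code: flag users with several
# failed logins immediately followed by a success (a classic
# brute-force pattern). Schema and threshold are hypothetical.

def detect_bruteforce(events, threshold=5):
    """Return the set of users with >= threshold consecutive
    failed logins followed by a successful one."""
    failures = {}
    hits = set()
    for e in events:
        user = e["user"]
        if e["action"] == "login_failure":
            failures[user] = failures.get(user, 0) + 1
        elif e["action"] == "login_success":
            if failures.get(user, 0) >= threshold:
                hits.add(user)
            failures[user] = 0  # success resets the streak
    return hits

# An automated "test case" for the detection content: replay a
# known-bad event sequence (e.g., one recorded from a BAS tool run)
# and assert the rule fires for the attacker and stays quiet otherwise.
simulated = (
    [{"user": "alice", "action": "login_failure"}] * 6
    + [{"user": "alice", "action": "login_success"}]
    + [{"user": "bob", "action": "login_success"}]
)
assert detect_bruteforce(simulated) == {"alice"}
```

A test like this could gate the promotion of content from a development stage to production, exactly the staging discipline described above.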
Monday, September 21, 2020
Friday, September 11, 2020
But there was a question that popped up a few times that indicates an interesting trend in the market: “A SIEM? Isn’t it old technology?”. No, it is not. It may be an old concept, but definitely not “old technology”.
Look at the two pictures below. What do they show?
Both show cars. But can we say the Tesla is "old technology"? Notice that the basic idea behind both is essentially the same: transportation. But this, and the fact they have four wheels, is probably all they have in common. The same goes for the many SIEMs we've seen in the market over the last twenty or so years.
Here is the barebones concept of a SIEM:
How this is accomplished, as well as the scale of things, has changed dramatically since the ArcSight, Intellitactics and netForensics days. Some of the main changes:
- Architecture. Old SIEMs were traditional software stacks running on relational databases and with big and complex fat clients for UI. Compare this with the modern, big data powered SaaS systems with sleek web interfaces. Wow!
- Use cases. What were we doing with the SIEMs in the past? Some reports, such as “top 10 failed connection attempts” or some other compliance driven report. Many SIEMs had been deployed as an answer to SOX, HIPAA and PCI DSS requirements. Now, most SIEMs are used for threat detection. Reporting, although still a thing, is far less important than the ability to find the needle in the haystack and provide an alert about it.
- Volume. SIEM sizing used to be an exercise involving a few EPS and some gigabytes of storage. With the need to monitor chatty sources such as EDR, NDR and cloud applications, the measures are orders of magnitude higher. This changes the game in terms of architecture (cloud is the new normal) and also drives the need for better analytics; we can’t handle the old false positive rates with the current base rates of events.
- Threats. It was so easy to detect threats in the past. It was common to find single events that could be used to detect malicious actions. But attacks have evolved to a point where multiple events must be assessed, both in isolation and together as a pattern, to determine the existence of malicious intent.
- Analytics. Driven by the changes to threats, volume and use cases, the analytics capabilities of SIEM have also changed immensely. While old SIEMs would give us some regex capabilities and simple AND/OR correlation, modern solutions will do that and far, far more. Enriched data is analyzed with modern statistics and ML algorithms, providing a way to identify the stealthiest threat actions.
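To illustrate the gap between old-style AND/OR correlation and a statistical baseline, here's a toy sketch in Python. This is not any vendor's actual algorithm, and the numbers are made up; it only shows the kind of per-entity behavioral check that goes beyond matching a single event against a fixed condition:

```python
import statistics

def zscore_outlier(history, today, z_threshold=3.0):
    """Flag `today` as anomalous when it sits more than z_threshold
    standard deviations above the entity's own historical mean.
    A toy baseline for illustration, not a product algorithm."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # No historical variance: anything above the mean stands out.
        return today > mean
    return (today - mean) / stdev > z_threshold

# A user who normally downloads ~100 files/day suddenly downloads 900:
baseline = [95, 110, 102, 98, 105, 101, 99]
assert zscore_outlier(baseline, 900) is True   # fires
assert zscore_outlier(baseline, 110) is False  # normal day, stays quiet
```

A static rule like "alert when downloads > 500" would need constant tuning per user; a baseline like this adapts to each entity's own behavior, which is the basic promise behind the UEBA-style analytics mentioned above.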
With all that in mind, does it still make sense to call these new Teslas of threat detection a “SIEM”? Well, if we still call a Tesla a car, why not keep the SIEM name?
However, differentiating between the old rusty SQL-based tool and the advanced analytics SaaS tools of modern days is also important. In my previous life as an analyst I would frequently laugh at the “Next Gen” fads created by vendors trying to differentiate. But I also have to say it was useful to provide a distinction between the old Firewall and what we now call NGFW. People know the implied difference in capabilities when we say NGFW. With that in mind, I believe saying NG-SIEM is not really a bad thing, if you consider all those differences I mentioned before. Sorry Gartner, I did it! :-)
So, old SIEM dead, long live the NG-SIEM? No, I don’t think we need to do that. But in conversations where you need to highlight the newer capabilities and more modern architecture, it’s certainly worth throwing the NG there.
Tesla owners can’t stop talking about how exciting their cars are. For us, cybersecurity nerds, deploying and using a Next-gen SIEM gives a similar thrill.
Monday, August 31, 2020
I’m very happy to announce today I’m starting my journey with Securonix!
I’ve spent the last five years working as an industry analyst, talking to thousands of clients and vendors about their challenges and solutions in security operations. During this time I was able to identify many of the common pain points and what vendors have been doing to address them. Some with success, some not so much.
Helping clients as an analyst is a great job. It gives you tremendous visibility into their challenges. But it is also somewhat limited in how much you can help them. So I ended up with many ideas and things I’d like to do, but without the right channel to deliver them.
That’s why I chose to join Securonix. Securonix has a great platform to deliver many capabilities that organizations need to tackle their threat detection and response problems. I first came into contact with Securonix before my Gartner life, and have been watching it grow and evolve since then. When we produced a UEBA solutions comparison, back in 2016, it was the best one of the batch. But it didn’t stop there.
A few years ago Gartner said SIEM and UEBA would eventually converge. Securonix didn’t miss the trend. Actually, it was one of the main drivers. UEBA vendors first appeared in the SIEM Magic Quadrant back in 2017. Securonix was already there as a Visionary. Actually it was the vendor with the most complete vision at that time. Since then it managed to improve its ability to execute, becoming one of the leaders in the space. It hasn’t missed the major trends since then, adding important capabilities and quickly adapting to offer a great cloud SIEM solution.
Good tools are extremely important to anyone who wants to make a dent in the incredible threat detection and response challenges we face. I’m excited to help with the evolution of the best security operations and analytics platform available today. You can watch this great journey here, on LinkedIn and on Twitter (@apbarros).
Friday, August 28, 2020
I’m sadly writing this as my last Gartner blog post! I’m moving to a new challenge. After years as an analyst, I decided it was time to get closer to delivering the initiatives that have been the focus of my research.
I’m immensely grateful for my time with Gartner. It has been a great experience and I had the opportunity to work with many bright people. I leave a special thank you to my mentor and main co-author, my great manager (thanks boss!) and my KIL (“Key Initiative Leader”, internal Gartner lingo).
Working as a Gartner analyst gives you the opportunity to go through incredible experiences. During the past five years, I was able to:
- Write groundbreaking research on my favorite topics in cybersecurity. It was very rewarding to find people out there building their strategic plans using some of my own words and adding the figures I drew to their slides.
- Deliver presentations to full audience rooms in many different places in the world.
- Provide advice to some of the major vendors in this industry, having very interesting conversations with their main executives.
- Discuss challenges and solutions with clients from all over the world and from many different industries. You just can’t imagine the crazy types of challenges they are facing out there! From exotic legal requirements to some very particular business characteristics, I have had many memorable calls during these years.
- Collaborate with very smart colleagues and have exciting (and, how can I say? “Lively”, maybe…) discussions about the future of cybersecurity.
- Chair the Security Summit in Brazil for two years, working with amazing people and putting together unforgettable events. I will definitely miss preparing and delivering the opening keynote there!
What I will miss most is experiencing those moments when you hear your client saying things like “that was the best advice I’ve ever heard”. Those are the moments that give an analyst a clear sense of purpose. I’m really grateful to have experienced that as a Gartner analyst.
Thank you Gartner. Thank you my reader. And I hope you follow me back to my personal blog. I’ll still be there.
from Augusto Barros https://ift.tt/2EK5WLs
Friday, April 17, 2020
After finishing the wave of research that covered pentesting, monitoring use cases, SOAR and TI, I’m excited to start research for a net new document covering an exciting topic rarely covered in Gartner research: Open source tools! The intent is to look at the most popular open source tools used by security operations teams out there. Things like the ELK stack, Osquery, MISP and Zeek. What I’d like to cover in this new paper is:
- Why is the tool being used? Why not a commercial alternative?
- How is it being used? What is the role of the tool in the overall security operations toolset, what are the integrations in place?
- How much effort was put into implementing the tool? What about maintaining it?
- Is it just about using it, or is there some active participation in the development of the tool as well?
- What are the requirements to get value from the tool? Skills? Anything specific in terms of infrastructure or processes?
It is a fascinating topic, which brings a high risk of scope creep, so the lists of questions answered and tools covered are still quite fluid.
In the meantime, it would be nice to hear stories from the trenches: what are you using out there? Why? Was it picked just because it was free (I know, TCO, etc., but the software IS free…)? Or is it a cultural aspect of your organization? Do you believe it is actually better than the commercial alternatives? Why?
Lots of questions indeed. Please help me provide some answers.
Thursday, April 9, 2020
My favorite Gartner paper has just been updated to its 3rd version! “How to Develop and Maintain Security Monitoring Use Cases” was originally published in 2016 as a guidance framework for organizations trying to identify what their security tools should be looking for, and how to turn these ideas into signatures, rules and other content. This update brings even more ATT&CK references and a new batch of eye candy graphics! So different from the original Visio-built graphics!
This is the anchor diagram from the doc, summarizing our framework:
Some nice quotes from the doc:
“Some organizations create too much process overhead around use cases — agility and predictability are required. Processes must not be too complex because security monitoring requires fast and constant changes to align with evolving threats.”
“The efficiency and effectiveness of security monitoring are directly related to the appropriate implementation and optimization of the right use cases on the right security monitoring tools.”
“Do not simply enable everything that comes with the tools. A considerable part of that content may not be aligned with the organization’s priorities, or may not be applicable to its environment.”
“Make use case development similar to agile software development by being able to quickly implement or modify a use case to adapt to changing threat and business conditions.”
I hope you enjoy it, and let me know if you have the framework implemented in your organization. Please don’t forget to provide feedback about the paper here.
Next wave of research is about Open Source tools for threat detection and response, in parallel with interesting stuff on Breach and Attack Simulation.
The post Developing and Maintaining Security Monitoring Use Cases appeared first on Augusto Barros.
Tuesday, March 31, 2020
Since my blogging whip was gone I haven’t been posting as frequently as I’d like, but I realized we had recently published new versions of some of our coolest research and I completely missed announcing them here! So let me talk a bit about them:
The first one is a big update to our Threat Intelligence research, conducted by Michael Clark. The paper now is called “How to Use Threat Intelligence for Security Monitoring and Incident Response”. It has a more specific scope and is more prescriptive in its guidance, providing a nice framework for those planning to start using TI on their detection and response processes:
The other one is a refresh on our paper about SOAR – Security Orchestration, Automation and Response, conducted by Eric Ahlm. It provides an overview of SOAR and how to assess your readiness for this technology according to your use cases:
I hope you enjoy the new papers. I’m also working on an update to my security monitoring use cases paper; it will hit the streets soon. Meanwhile, feel free to provide feedback about the papers above here.