Research on EDR tools and practices surfaces some very interesting discussions about tool capabilities. While many EDR vendors focus on their fast searching and automated IOC checking capabilities, the “Detection” piece is always the fun one to talk about, especially once you set aside the basic “blacklist” approach, which, by the way, may not be as simple as it sounds: malware polymorphism makes it far more challenging than most people assume.
What would you expect from an EDR tool regarding “Detection”, leaving basic IOC matching aside? Write down your answer, then look at it. Isn’t that something you would expect, for example, from your antivirus (or “Endpoint Protection Platform”, to use the grown-up name)? What kind of detection capabilities should we expect from an EDR tool but not from an EPP?
Most EDR tools trying to go beyond EPP take a “behavior” based approach. Identifying exactly what vendors mean by “behavior based detection” is another interesting challenge. If you hard-code into your tool something considered malicious behavior (such as “disabling AV”, “setting up hidden persistence”, “establishing contact with a C&C server” or “searching for data files or memory pages containing credit card numbers”), is that “behavior based” detection or just a fancy signature (or “rule”)?
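To make the ambiguity concrete, here is a minimal sketch of what a hard-coded “behavioral” rule can look like under the hood. The event schema, rule names, and matching logic are all invented for illustration, not taken from any real EDR product; note that each rule is ultimately just a fixed pattern match over event attributes, exactly like a signature.

```python
# Hypothetical endpoint event stream: each event is a dict with a
# process name, an action, and a target. The schema is invented
# purely for illustration.
EVENTS = [
    {"process": "evil.exe", "action": "service_stop", "target": "WinDefend"},
    {"process": "evil.exe", "action": "registry_write",
     "target": "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run"},
    {"process": "update.exe", "action": "network_connect", "target": "10.0.0.5:443"},
]

# "Behavior based" rules that are really just hard-coded patterns:
# each one matches a fixed combination of event attributes, which is
# structurally indistinguishable from a signature.
RULES = {
    "disabling AV": lambda e: e["action"] == "service_stop"
                              and e["target"] == "WinDefend",
    "hidden persistence": lambda e: e["action"] == "registry_write"
                                    and "\\Run" in e["target"],
}

def detect(events):
    """Return the (rule_name, event) pairs that matched."""
    return [(name, e)
            for e in events
            for name, rule in RULES.items()
            if rule(e)]
```

Whether you call the entries in `RULES` “behaviors” or “signatures” is largely a marketing decision, which is precisely the point.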
There are no strong definitions or descriptions for capabilities such as “behavior based detection” or “anomaly detection” (isn’t it funny that some tools doing the latter define what an “anomaly” is just like a…signature?). Add to that the claims about Machine Learning, AI, and so on, and we have the perfect storm of inflated claims and, unfortunately, inflated expectations. It also makes life a nightmare for anyone comparing solutions.
To be fair to all those tools, identifying malicious activity, or just malware (malware is so dominant as a vehicle for malicious activity that we often forget it is not a requirement), is very hard. Computers can do anything, and it’s hard to tell when a set of instructions is part of a malicious activity and when it is not. Some PowerShell use, for example, would be expected from system administrators and power users, but is often a good indication of malicious activity when it comes from a “regular” user. Only the context (which is sometimes discernible only from a human point of view) will tell whether it’s good or bad. A malware dropper behaves almost exactly like the installer or auto-update component of legitimate software.
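The PowerShell example can be sketched as a tiny context-aware classifier. Everything here is hypothetical, the user allowlist, the verdict labels, and the crude obfuscation heuristic, but it shows the idea that the same command can yield different verdicts depending on who ran it:

```python
# Hypothetical allowlist of users for whom PowerShell is expected tooling.
ADMIN_USERS = {"sysadmin1", "helpdesk2"}

def score_powershell_event(user: str, command: str) -> str:
    """Classify a PowerShell execution using context (who ran it),
    not just content (what was run). Verdict labels are invented."""
    # Crude obfuscation heuristic, for illustration only.
    suspicious_markers = ("-enc", "-encodedcommand", "downloadstring")
    looks_obfuscated = any(m in command.lower() for m in suspicious_markers)

    if user in ADMIN_USERS:
        # Expected tooling for admins; only escalate on obfuscation.
        return "suspicious" if looks_obfuscated else "benign"
    # Any PowerShell from a "regular" user deserves attention.
    return "alert" if looks_obfuscated else "suspicious"
```

The interesting part is that the content-based check alone (`looks_obfuscated`) would produce identical verdicts for the admin and the regular user; only the contextual branch separates them.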
Strip away the inflated claims and the existing detection capabilities are not that bad. If it’s so hard to identify what is malicious and what is not, we may need to keep explaining that to the tools. The real risk of unmet expectations lies in believing the tool doesn’t need to learn, or in not fully understanding who plays the role of teacher. It might be primarily the vendor, but you still need to be able to assess whether they are doing that appropriately.
What does that mean? It means tools need to be tested before buying and continuously after implementation. Understanding how existing and emerging threats behave, and how the tools would react to them, is crucial to ensure they keep detecting bad stuff. If you have resources that can obtain that information (here’s where that “other” Threat Intelligence comes into play) and translate it into the right questions (or test scenarios) for the vendors, you’ll be able to stay aware of your tools’ capabilities and limitations. And, of course, to spot the snake oil when you see it in that booth at RSA.
from Augusto Barros http://ift.tt/1TQSMLh