From my Gartner Blog - Demonstrating Value of Security Analytics
An interesting aspect of covering Threat Monitoring and Detection is the chance to be exposed to every new vendor in this field, which currently means a lot of “analytics” stuff. While it is nice to see all the new things being created in this space, it is also painful trying to understand the value these new products are supposed to provide. That is partially due to the old issue of excessive marketing and buzzword abuse, but it also comes from the techniques being used by some of these products: machine learning, advanced statistics and other data science methods. These techniques can certainly provide value for threat detection, but it is very hard not only for clients (and analysts!) to understand how they do it, but also for the vendors to explain it.
Anton Chuvakin covered part of this issue in a blog post a few months ago: The Rise of Non-Deterministic Security. A fun quote from that post:
“Today we are – for realz! – on the cusp of seeing some security tools that are based on non-deterministic logic (such as select types of machine learning) and thus are unable to ever explain their decisions to alert or block. Mind you, they cannot explain them not because their designers are sloppy, naïve or unethical, but because the tools are buil[t] on the methods and algorithms that [are] inherently unexplainable [well, OK, a note for the data scientist set reading this: the overall logic may be explainable, but each individual decision is not].”
The problem goes beyond the lack of explanation for algorithmic decisions. It is also related to the complexity of the technology and even to protecting intellectual property. Let’s look at those three points:
Explaining algorithm decisions: Anton summarized this well in his blog post. The issue affects machine learning use not only in security but in many other fields; in fact, it goes beyond ML and applies to algorithmic decisions in general. A few years ago someone managed to create models that could predict the decisions of a certain US Supreme Court judge extremely well. Looking at the model, it was clear that some decisions were extremely biased and ideological, although that judge was probably never aware that his behavior followed those rules. Extracting algorithms and models exposes biases, and sometimes decisions that are far simpler than we would expect. Imagine, for example, if your ML-based security system tells you to “block all PDF files where the PDF version is lower than 2.0 and the file is bigger than 15MB, as 99.999% of those cases were found to be malware”. Does that sound like the right way to find malware? The ML system doesn’t know what is “right” or “wrong”; it simply finds the combination of factors that best predicts a certain outcome. That combination might not make sense from a causal point of view, but it does what it’s supposed to do: predict the outcome, as the sketch below illustrates.
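As a toy illustration (a minimal sketch, not from any real product; the features, thresholds and data are all hypothetical), a simple decision tree trained on fabricated PDF metadata will happily “discover” exactly that kind of rule:

```python
# Minimal sketch: a decision tree trained on hypothetical PDF metadata learns
# a rule like the one above. Features, thresholds and data are made up.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 5000

# Hypothetical features: PDF version and file size in MB.
pdf_version = rng.choice([1.4, 1.5, 1.7, 2.0], size=n)
size_mb = rng.uniform(0.1, 40.0, size=n)

# Synthetic "ground truth": in this toy data, malware happens to cluster in
# old-version, large files. The model only learns that correlation.
is_malware = ((pdf_version < 2.0) & (size_mb > 15)).astype(int)

X = np.column_stack([pdf_version, size_mb])
clf = DecisionTreeClassifier(max_depth=2).fit(X, is_malware)

# The extracted model is essentially "version < 2.0 and size > 15MB":
# highly predictive here, but it says nothing about *why* those files are bad.
print(export_text(clf, feature_names=["pdf_version", "size_mb"]))
```

The rule is a perfect predictor on this fabricated data, yet there is no causal story behind it, which is exactly the gap between “the model works” and “the model makes sense”.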
Technology complexity: If the outcome of the ML system is already hard to explain, the technology itself can be much more complex. Many vendors opt to generalize the explanation of their systems with something like “proprietary advanced analytics and machine learning algorithms”, not because they are trying to sell you snake oil, but because the average buyer would not understand the real explanation. This is one of the points where vendors could probably do a better job. After all, as Einstein (supposedly) said, “if you cannot explain something to a six-year-old, you don’t understand it well enough”.
Protecting Intellectual Property: Lastly, the ever-present fear of spilling the (magic) beans. In the security analytics case I think this is a combination of two factors. One is that some products do things far simpler than their marketing suggests (naive Bayes has been around for years and is still extremely useful in the security analytics context; see the sketch below). The other is that this field is exploding with new entrants right now (we have briefings with a new one every week), and the full Silicon Valley-esque style of competition is going on, fueling the vendors’ paranoia and cautious posture.
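To make that first point concrete, here is a hedged sketch of what a “years-old but still useful” technique can look like in practice. This is purely illustrative; the feature names, numbers and labels are assumptions, not anything a specific vendor does.

```python
# Minimal sketch: naive Bayes scoring of toy per-user activity features.
# All features and data below are invented for illustration.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Hypothetical per-user daily features: failed logins, MB uploaded,
# distinct hosts touched. Label 1 = flagged by an analyst, 0 = benign.
benign = np.column_stack([
    rng.poisson(1, 500),        # few failed logins
    rng.normal(50, 20, 500),    # moderate upload volume
    rng.poisson(3, 500),        # few hosts
])
flagged = np.column_stack([
    rng.poisson(8, 50),         # many failed logins
    rng.normal(400, 100, 50),   # large uploads
    rng.poisson(15, 50),        # many hosts
])
X = np.vstack([benign, flagged])
y = np.array([0] * 500 + [1] * 50)

model = GaussianNB().fit(X, y)

# Score a new observation; the per-class probabilities are at least
# inspectable, which is part of why such "old" methods remain useful here.
print(model.predict_proba([[10, 500.0, 20]]))
```

Nothing here is proprietary or novel, which is precisely why a vendor shipping something like it has a strong incentive to keep the details vague.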
If you think the second and third points are contradictory, you are right. Those two reasons would normally not affect the same vendor. Vendors with something genuinely valuable usually can’t explain what they are doing because of its complexity, while vendors selling marginally valuable stuff are often more concerned about protecting their IP: they know it would be very easy for a competitor to do the same.
With all those reasons combined, we end up in a situation where vendors just can’t explain the value of their solutions and we can’t find out whether they are useful or not. So what can organizations do about it? I can suggest a few things:
Test, test, test. Proofs of concept are extremely important in this area. Vendors are aware of that, and most of them (at least those that have something useful to offer) will push for one. Plan for a PoC or, better, a bake-off, with conditions as close to real life as possible.
Understand your requirements well. Why are you considering this tool? Do you understand what you are trying to achieve, or is it just a “let’s see what this stuff can do” exercise? Alice (in Wonderland) once asked the Cheshire Cat, “Would you tell me, please, which way I ought to go from here?”, to which it replied: “That depends a good deal on where you want to get to.”
Prepare yourself to talk about the subject. Data science is here to stay. Security professionals must learn at least the basics so they can distinguish between the real stuff and snake oil. You’ll be surprised at the reactions of vendors when you challenge their claims or ask for more details.