Thursday, October 29, 2015

From my Gartner Blog - Demonstrating Value of Security Analytics

An interesting aspect of covering Threat Monitoring and Detection is the chance to be exposed to every new vendor in this field, which currently means a lot of “analytics” stuff. While it is nice to see all the new things being created in this space, it is also painful trying to understand the value these new products are supposed to provide. This is partially due to the old issues of excessive marketing and buzzword abuse, but it also comes from the techniques being used by some of these products: machine learning, advanced statistics and other data science methods. These techniques can certainly provide value in threat detection, but it is very hard not only for clients (and analysts!) to understand how the products do that, but also for the vendors to explain it.

Anton Chuvakin covered part of this issue in a blog post a few months ago: The Rise of Non-Deterministic Security. A fun quote from that post:

“Today we are for realz! on the cusp of seeing some security tools that are based on non-deterministic logic (such as select types of machine learning) and thus are unable to ever explain their decisions to alert or block. Mind you, they cannot explain them not because their designers are sloppy, naïve or unethical, but because the tools are built on methods and algorithms that are inherently unexplainable [well, OK, a note for the data scientist set reading this: the overall logic may be explainable, but each individual decision is not].”

The problem goes beyond the lack of explanations for algorithmic decisions. It is also related to the complexity of the technology and even to protecting intellectual property. Let’s look at those three points:

  • Explaining algorithmic decisions: Anton summarized this well in his blog post. The issue affects machine learning use not only in security but in many other areas. Actually, it goes beyond ML; it is related to algorithmic decision-making in general. A few years ago someone managed to create models that could predict, extremely well, the decisions of a certain US Supreme Court justice. Looking at the model, it was clear that some decisions were extremely biased and ideological, but that judge was probably never aware that his behavior followed those rules. Extracting algorithms and models exposes biases, and sometimes decision rules that are far simpler than we would expect. Imagine, for example, if your ML-based security system tells you to “block all PDF files where the PDF version is lower than 2.0 and the file is bigger than 15MB, as 99.999% of those cases were found to be malware”. Does that sound like the right way to find malware? The ML system doesn’t know what is “right” or “wrong”; it simply finds the combination of factors that best predicts a certain outcome. That combination might not make sense from a causal point of view, but it does what it is supposed to do: predict the outcome (see the sketch after this list).
  • Technology complexity: If the outcome of the ML system is already hard to explain, the technology itself can be far more complex. Many vendors opt to generalize the explanation of their systems into something like “proprietary advanced analytics and machine learning algorithms”, not because they are trying to sell you snake oil, but because the average buyer would not understand the real explanation. This is one of the points where vendors could probably do a better job. After all, as Einstein (supposedly) said, “if you cannot explain something to a six-year-old, you don’t understand that subject well enough”.
  • Protecting intellectual property: Lastly, the ever-present fear of spilling the (magic) beans. In the case of security analytics, I think this is a combination of two factors. One is that some products do things far simpler than the marketing suggests (naive Bayes has been around for years and is still extremely useful in the security analytics context). The other is that this field is currently exploding with new entrants (we have briefings with a new one every week), and the full Silicon Valley-esque style of competition is going on, fueling the vendors’ paranoia and cautious posture.
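
To make that concrete, here is a minimal sketch in Python (the features and data are made up, echoing the hypothetical PDF rule above) of how a naive Bayes classifier can learn a predictive-but-not-causal rule:

```python
# A minimal sketch with hypothetical features and toy data: a naive Bayes
# classifier that learns to flag files from simple, non-causal features
# such as PDF version and file size.
from sklearn.naive_bayes import GaussianNB

# Each row: [pdf_version, file_size_mb] -- illustrative values only.
X_train = [
    [1.4, 22.0],   # old version, large file
    [1.5, 18.5],
    [1.7, 30.1],
    [2.0, 1.2],    # current version, small file
    [2.0, 0.8],
    [2.0, 2.4],
]
y_train = [1, 1, 1, 0, 0, 0]  # 1 = malware, 0 = benign

clf = GaussianNB()
clf.fit(X_train, y_train)

# The model predicts from the correlation alone; it has no notion of
# *why* old, oversized PDFs were malicious in the training sample.
print(clf.predict([[1.4, 25.0]]))        # -> [1]
print(clf.predict_proba([[1.4, 25.0]]))  # per-class probabilities
```

The model will happily apply that rule to new files; whether the rule reflects anything causal about malware is a question it cannot answer.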

If you think the second and last points are contradictory, you are right. Those two reasons would normally not affect the same vendor. Vendors with something genuinely valuable usually can’t explain what they are doing because of complexity, while vendors selling marginally valuable stuff are often concerned about protecting their IP – they know it would be very easy for a competitor to do the same.

With all those reasons combined, we end up in a situation where vendors just can’t explain the value of their solutions, and we can’t find out whether they are useful or not. So what can organizations do about it? I can suggest a few things:

  • Test, test, test. Proofs of concept are extremely important in this area. The vendors are aware of that, and most of them (at least those that have something useful to offer) will push for one. Plan for a PoC or, better, a bake-off, with conditions as close to real life as possible (see the sketch after this list).
  • Understand your requirements well. Why are you considering this tool? Do you understand what you are trying to achieve, or is it just a “let’s see what this stuff can do” thing? Alice (in Wonderland) once asked the Cheshire Cat, “Would you tell me, please, which way I ought to go from here?”, to which it replied: “That depends a good deal on where you want to get to.”
  • Prepare yourself to talk about the subject. Data science is here to stay. Security professionals must learn at least the basics so they can distinguish between the real stuff and snake oil. You’ll be surprised at the reactions of vendors when you challenge their claims or ask for more details.
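
One way to keep a PoC or bake-off honest is to score each tool against incidents you already know occurred in the test data. A minimal sketch (the tool names, alert IDs and incident IDs are all made up):

```python
# A minimal sketch with made-up data: scoring two tools in a bake-off by
# comparing the alerts they raised against incidents known to be real.
known_incidents = {"inc-001", "inc-002", "inc-003", "inc-004"}

tool_alerts = {
    "Tool A": {"inc-001", "inc-002", "fp-101", "fp-102"},
    "Tool B": {"inc-001", "inc-002", "inc-003", "fp-201"},
}

for tool, alerts in tool_alerts.items():
    true_positives = alerts & known_incidents
    precision = len(true_positives) / len(alerts)        # how noisy is it?
    recall = len(true_positives) / len(known_incidents)  # what does it miss?
    print(f"{tool}: precision={precision:.2f}, recall={recall:.2f}")
```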


Wednesday, October 28, 2015

From my Gartner Blog - Research on Security Monitoring Use Cases Coming Up

As Anton Chuvakin recently mentioned on his blog, we are starting some research on the work around security monitoring use cases: from the initial identification of candidates to prioritization, implementation, tuning and even retiring use cases. It is a crucial component of security monitoring practices, but there are not many places where you can get good information or best practices about it.
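
To illustrate the lifecycle in question, here is a minimal sketch (the stages and field names are my own illustration, not a formal framework) of a use case tracked as a structured record:

```python
# A minimal sketch with illustrative stages and fields: representing a
# monitoring use case as a record so its lifecycle -- from candidate to
# retired -- can be tracked explicitly.
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    CANDIDATE = "candidate"
    PRIORITIZED = "prioritized"
    IMPLEMENTED = "implemented"
    TUNING = "tuning"
    RETIRED = "retired"

@dataclass
class UseCase:
    name: str
    threat: str              # what this use case is meant to detect
    log_sources: list[str]   # inputs needed before it can work
    stage: Stage = Stage.CANDIDATE

# Example: a candidate use case moving through the lifecycle.
uc = UseCase(
    name="Brute-force logins",
    threat="Credential guessing against exposed services",
    log_sources=["auth logs", "VPN logs"],
)
uc.stage = Stage.PRIORITIZED
print(uc)
```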

I have seen very different approaches to that topic by different people and organizations. It is very common to see, for example, organizations relying on out-of-the-box content from their SIEM, but most of those cases are related to the “checkbox mentality” of managers – they just want to check the “we are using a SIEM” box. Others expand that a bit by enabling rules and content related to specific regulations, such as PCI DSS, to make the auditors happy. Of course, none of those organizations are getting much value from their tools and monitoring teams.

There are also others that understand the need for customized content and go through the effort of creating their own use cases and related content, but end up building over-engineered processes, killing the agility and dynamism required by today’s constantly evolving threats. Those organizations are usually very thorough with processes and procedures. However, when the use case implementation process includes the same level of change management formality (and bureaucracy) as IT operations, it’s time to simplify.

(I find it hardly surprising that organizations that have fallen into that trap are now looking for things that “apparently” don’t require use case development work, such as UEBA tools. I say “apparently” because they soon find out that even if those tools don’t require something like developing SIEM rules, all the work related to alert triage and investigation processes, log (or other input) requirements identification, and tuning still exists. Wanna guess what happens next? More cumbersome process development for all of that, killing the usefulness of yet another layer of tools. Well, maybe the next cool thing will be easier….)

Of course, there is also the opposite: the cowboys. Those are chaotic environments where tools are implemented with no planning or oversight, in a very “just do it” approach. They are the environments where the good stuff is usually created and maintained by a few heroes, only to die abandoned when those heroes move to another job.

(And yes, these guys may also try to get the next generation of tools, hoping that this time things won’t be as chaotic as with the existing stuff… making the same mistakes again and again.)

And finally, there are those that do things right! Interestingly enough, I’ve seen that happen with different approaches, but a few things can be found in all successful cases:

  • Good people: You can’t create good use cases without people who know how to do it. You may get some external help, but if you don’t have your own good resources, things will get stale and quickly lose value once the consultants are out the door.
  • Simple but clear processes: Chaos can help by pumping up the creative juices, but it’s very hard to move from good ideas to real (and fully deployed) use cases without processes. Optimization is a constant need too, and without processes there is always a tendency to leave things to a slow death while pursuing new shiny objects.
  • Top-down and lateral support: The security team may have good people and processes to put use cases together, but they are not an island. They will need continuous support to bring in new log sources, context data, and the knowledge about the business and the environment required for implementation and optimization. They will need other people’s (IT ops, business application specialists) time and commitment, and that’s only possible with top-down support and empowerment.

You may ask about technology; having the best tools around certainly helps, but it’s interesting to see how many organizations achieve great results by putting together a few open source tools and custom scripts, while others fail miserably with the latest SIEM and UEBA technology in their hands. Security technologies are just like any other tool: they need someone who is prepared and knows how to use them.
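
As an illustration of the “few open source tools and custom scripts” approach, here is a minimal sketch (the log format, path and threshold are hypothetical) of the kind of small detection script that can implement a simple use case:

```python
# A minimal sketch with a hypothetical log format and threshold: a small
# custom script implementing a "repeated failed logins" use case.
import re
import sys
from collections import Counter

FAILED = re.compile(r"Failed password .* from (\d{1,3}(?:\.\d{1,3}){3})")
THRESHOLD = 10  # alert when a source IP exceeds this many failures

def scan(lines):
    """Count failed-login lines per source IP and return the noisy ones."""
    failures = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            failures[match.group(1)] += 1
    return {ip: n for ip, n in failures.items() if n > THRESHOLD}

if __name__ == "__main__":
    # Usage (hypothetical path): python failed_logins.py /var/log/auth.log
    with open(sys.argv[1]) as log:
        for ip, count in scan(log).items():
            print(f"ALERT: {count} failed logins from {ip}")
```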

So, we know some of the key success factors, but are there any others? For those starting now, what should they do? Is there an optimal way to establish processes, roles and responsibilities? What is the best way to identify candidate use cases? From those, which ones to implement? How to prioritize them? Lots of interesting questions! Stay tuned as we proceed to find those answers.

(And of course, if you have something interesting to say about that… let us know 😉)


Tuesday, October 6, 2015

From my Gartner Blog - Security Analytics Tools

There’s no doubt “Security Analytics” is one of the hot buzzwords of the moment. Many organizations are looking for security analytics tools and expecting to get immediate value from them. However, as Anton pointed out in a tweet, it’s still not magic: if you want value from those tools, you must have the right people to operate them and to use the data coming from them. I was discussing this with Alexandre Sieira (@AlexandreSieira) earlier today, and he said something great about it: these tools should not be known as “security analytics”, but as “security analytics support”.

It’s just like buying a fishing rod: you are not buying fish, or even “fishing”; you are buying a tool used for fishing. Keep that in mind when you shop for “security analytics [support] tools”.


Thursday, October 1, 2015