Tuesday, December 8, 2015

From my Gartner Blog - Is It Really Failing That Bad?

One of Gartner’s 2016 Predicts documents includes a very interesting finding about vulnerabilities being exploited:

 Existing vulnerabilities remain prevalent throughout the threat landscape, as 99.99% of exploits are based on vulnerabilities already known for at least one year.


OK, so if known vulnerabilities are the target of basically all exploits, does that mean Vulnerability Management is a perfect example of FAIL? Should we just stop trying it and do something else? It is a tempting (and somewhat easy) conclusion, but I have to say this may not be the case.

First, let’s carefully examine the finding above and look only at the reported fact: exploits are based on vulnerabilities known for more than a year. That’s it. Now, let’s see some natural lines of thought that could come from that:

– Vulnerability Management’s goal is to reduce risk from existing known vulnerabilities. If known vulnerabilities are being exploited, it has failed its main purpose.

– Since VM is not working, there is no point in trying to improve it; we’ve been trying that for a long time and we are still seeing breaches via known vulnerabilities.

– If VM is not working, we should prevent breaches via the exploitation of known vulnerabilities in a different manner. Alternatives would either eliminate the sources of vulnerabilities (such as software from vendors with a bad record of writing secure code), make exploitation harder or impossible (via additional security controls, such as EMET, for example) or reduce the impact of exploitation (via architectural approaches such as microsegmentation, sandboxing, etc.).


The first point, on VM having failed: even if many organizations are doing a great job on VM, there are still plenty doing it very badly or not doing it at all. So even if the population that can only be hit by 0-day attacks is bigger, the population vulnerable to conventional attacks is still big. Let’s say, being very optimistic, that 70% of organizations have perfect VM; that still means 30% are vulnerable to old, known vulnerabilities.

On top of that, it’s cheaper to attack known vulnerabilities: research, tools and PoCs are already available, so you don’t need the skills and time to find new vulnerabilities and produce exploits for them. There is a cheap method with plenty of vulnerable targets; why try anything different? So attackers exploiting known vulnerabilities is not necessarily incompatible with good VM being done by many organizations.

The second point, that there is no sense in improving VM: the overall result from a process like VM is not black and white. If you manage thousands of systems and you manage to move from 100% vulnerable systems to 10%, that is quite a good result, even if you still need to do something else to handle the successful attack attempts against those 10%. Yes, you don’t eliminate the problem, but it brings numbers down to a level where your other security processes, such as Incident Response and Detection, have a chance to be useful.

So, VM won’t make you incident free, but it can move incidents to a manageable number.

Last, but not least, the third point, because it could still be valid even considering the aspects above: if we can’t reach that perfect level with VM, can’t we try an alternative approach that does? Like what?

[HERE YOU TELL ME ABOUT THE INCREDIBLE WAY TO BE COMPLETELY IMMUNE TO ATTACKS]

Now, let’s look at that idea and assess it considering:

– Sentient attackers: you know, those bad guys evolve! They adapt! After you deploy your magic solution, what would they do to still be able to reach their goals? They won’t just give up and leave, so your solution should be threat-evolution proof.

– Changing IT environment: Great, you found a magic solution that makes all your desktops and servers hacker-proof. And then your users all migrate to vulnerable mobile devices. Or your data suddenly moves to the cloud. Yes, we are constantly dealing with a moving target, so as much as VM suffers from that, your solution most likely will also feel the impact of the ever changing IT environment. It will be even worse if your solution makes it harder to change, as users will rebel against you and find neat ways to bypass your controls.

– Legacy: We keep dealing with untouchable stuff. Systems where you can’t install new things, can’t migrate to a new (and better) platform, can’t remove vulnerable pieces. This is a strong limit on what we can achieve with VM, and it will also affect how well your solution performs. Does it require a move to a different technology or platform? If so, there is a high chance of leaving a piece of the environment behind (and vulnerable).


If your solution passes those three considerations and still delivers better value than VM, it might be worth trying. However, I’m skeptical that you could find something that would work for many different organizations, independent of their size and culture. There may be something that works perfectly for you, but the chances of it being a good candidate to replace VM all over the world are very, very slim.

I didn’t put Vulnerability Management in the title of this post for a reason: I believe the argument applies to many other security practices. They have their value, but you shouldn’t expect perfect results, because those are just not achievable. Just like that old but still very valid quote from Marcus Ranum: “Will the future be more secure? It’ll be just as insecure as it possibly can, while still continuing to function. Just like it is today.”


Friday, November 27, 2015

From my Gartner Blog - Base Rates And Security Monitoring Use Cases

As we continue to work on our research about security monitoring use cases, a few interesting questions around the technology implementation and optimization arise. Any threat detection system designed to generate alerts (new “analytics” products such as UEBA tools have been moving away from simple alert generation to using “badness level” indicators – that’s an interesting evolution and I’ll try to write more about that in the future) will have an effectiveness level that indicates how precise it is, in terms of false positives and false negatives. Many people believe that getting those rates to something like “lower than 1%” would be enough, but the truth is that the effectiveness of an alert generation system includes more than just those numbers.

One thing that makes this analysis more complicated than it looks is something known as “base rate fallacy”. There are many interesting examples that illustrate the concept. I’ll reproduce one of those here:

“In a city of 1 million inhabitants let there be 100 terrorists and 999,900 non-terrorists. To simplify the example, it is assumed that all people present in the city are inhabitants. Thus, the base rate probability of a randomly selected inhabitant of the city being a terrorist is 0.0001, and the base rate probability of that same inhabitant being a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automatic facial recognition software.

The software has two failure rates of 1%:

  • The false negative rate: If the camera scans a terrorist, a bell will ring 99% of the time, and it will fail to ring 1% of the time.
  • The false positive rate: If the camera scans a non-terrorist, a bell will not ring 99% of the time, but it will ring 1% of the time.

Suppose now that an inhabitant triggers the alarm. What is the chance that the person is a terrorist? In other words, what is P(T | B), the probability that a terrorist has been detected given the ringing of the bell? Someone making the ‘base rate fallacy’ would infer that there is a 99% chance that the detected person is a terrorist. Although the inference seems to make sense, it is actually bad reasoning, and a calculation below will show that the chances they are a terrorist are actually near 1%, not near 99%.

The fallacy arises from confusing the natures of two different failure rates. The ‘number of non-bells per 100 terrorists’ and the ‘number of non-terrorists per 100 bells’ are unrelated quantities. One does not necessarily equal the other, and they don’t even have to be almost equal. To show this, consider what happens if an identical alarm system were set up in a second city with no terrorists at all. As in the first city, the alarm sounds for 1 out of every 100 non-terrorist inhabitants detected, but unlike in the first city, the alarm never sounds for a terrorist. Therefore 100% of all occasions of the alarm sounding are for non-terrorists, but a false negative rate cannot even be calculated. The ‘number of non-terrorists per 100 bells’ in that city is 100, yet P(T | B) = 0%. There is zero chance that a terrorist has been detected given the ringing of the bell.

Imagine that the city’s entire population of one million people pass in front of the camera. About 99 of the 100 terrorists will trigger the alarm—and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, among which about 99 will be terrorists. So, the probability that a person triggering the alarm actually is a terrorist, is only about 99 in 10,098, which is less than 1%, and very, very far below our initial guess of 99%.

The base rate fallacy is so misleading in this example because there are many more non-terrorists than terrorists.”

From <http://ift.tt/1FWn6vf>
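To make the quoted arithmetic concrete, here is a minimal sketch in Python. The population size and the 99%/1% rates are exactly the numbers from the example above; nothing else is assumed.

```python
# Minimal sketch: the Bayes arithmetic behind the quoted example.
tp_rate = 0.99        # P(alarm | terrorist)
fp_rate = 0.01        # P(alarm | non-terrorist)
terrorists = 100
non_terrorists = 999_900

base_rate = terrorists / (terrorists + non_terrorists)    # 0.0001

# Expected alarms if the whole population walks past the camera
true_alarms = tp_rate * terrorists          # ~99
false_alarms = fp_rate * non_terrorists     # ~9,999
print(f"Expected alarms: {true_alarms + false_alarms:,.0f}")

# P(terrorist | alarm) via Bayes' theorem
p_alarm = tp_rate * base_rate + fp_rate * (1 - base_rate)
p_t_given_alarm = tp_rate * base_rate / p_alarm
print(f"P(terrorist | alarm) = {p_t_given_alarm:.4f}")    # ~0.0098, i.e. under 1%
```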

What makes this extremely important to our security monitoring systems is that almost all of them analyze data, such as log events, network connections, files, etc., that have a very low base rate probability of being related to malicious activity. Consider all your web proxy logs, for example. You can find requests there related to malware activity on your users’ computers, such as C&C traffic. However, the number of those events, compared to the overall number of requests, is extremely low. For a security system to detect that malicious activity based only on those logs, it must have extremely low FP and FN rates in order to be usable by a SOC.

You don’t need to do a full statistical analysis of every detection use case to make use of this concept. Here are three things you can do to avoid being caught in the base rate fallacy:

  • Be conservative with the data you send to your detection system, such as your SIEM. Apply the “output driven SIEM” concept and try to ingest only the data you know is relevant for your use cases.
  • At the design phase of each use case, do a ballpark estimate of the base rate probability of the condition you are trying to detect. When possible, try to combine more than one condition to leverage the power of Bayesian probability (e.g. “the chance of an individual http request being malicious is 0.0001%, but the chance of a request being malicious given it is to an IP listed in a Threat Intelligence feed is 0.1%”) – see the sketch after this list.
  • During tuning and optimization of use cases, evaluate each use case individually and according to its own parameters. As mentioned before, a 0.01% false positive rate can mean something very different for each use case, depending on how much data is being analyzed. Some people try to fix a golden rate or number of acceptable false positives, which could be too strict for one use case and too lax for another.
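Here is a minimal sketch of the Bayesian point in the second bullet, using the hypothetical numbers from that example. The 99% detection and 1% false positive rates of the detector are my own assumptions, added only for illustration: the same detector produces far more useful alerts when the base rate of the population it watches is higher.

```python
# Minimal sketch: same detector quality, very different alert precision
# depending on the base rate of the population being watched.
# The 99% detection / 1% false positive rates are assumptions for illustration.

def alert_precision(prior, tp_rate=0.99, fp_rate=0.01):
    """Return P(malicious | alert) for a detector with the given rates."""
    return (tp_rate * prior) / (tp_rate * prior + fp_rate * (1 - prior))

all_http_requests = 0.000001   # "0.0001% of requests are malicious"
ti_listed_requests = 0.001     # "0.1% of requests to TI-listed IPs are malicious"

print(f"Alert precision over all traffic:       {alert_precision(all_http_requests):.4%}")
print(f"Alert precision over TI-listed traffic: {alert_precision(ti_listed_requests):.4%}")
```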

That was all about base rates; there are other things to take into account when designing and optimizing use cases, such as the importance of the event being detected and the operational processes triggered by the alerts. But that’s something for another post (and, of course, for that research report coming soon!)


Tuesday, November 17, 2015

From my Gartner Blog - It’s Here! Our New VM And VA Papers Have Been Published

I’m very happy to announce that my first research papers have just been published on Gartner.com! These documents are the result of the work Anton and I did on Vulnerability Management and Vulnerability Assessment. The documents are (GTP access required):

These documents are based on updated and reviewed content from previous documents by Anton. We did some serious work on reorganizing them to make everything more useful and actionable. I hope you enjoy the read!

(And don’t forget to let us know what you think :-))


Friday, November 6, 2015

From my Gartner Blog - Discovering New Monitoring Use Cases

We’ve been thinking about the multiple processes around monitoring use cases for our next research project. This week, the focus was on the use case discovery process. So you have the ability/technology to implement use cases; but how do you find out which ones?

As Anton explained in his post, the process is a mix of compliance regulations mining, threat and risk assessments, etc. The use cases are then assessed and prioritized from a relevance and “doability” point of view. But exploring this a bit further, what kinds of use cases can we get? It seems they fall into three big buckets (a minimal catalog sketch follows the list):

  • Control Oriented Use Cases: those use cases required as a control by a framework or other regulatory document, such as PCI DSS. The use case can be the control itself (such as “investigate all unauthorized access attempts”) or a way to demonstrate a control’s presence or effectiveness (denied events, antivirus signature update events, etc.).
  • Threat Oriented Use Cases: the UCs implemented to identify a specific threat or threat actor. Those are the use cases where you try to find activity related to specific sources and destinations (that content you’re getting from your Threat Intelligence provider?) or specific activities related to Tactics, Techniques and Procedures (TTPs). Lots of interesting stuff to look for here: network events similar to C&C activity, executables running from user profile folders, DLL injection attempts, crazy stuff detected by the malware sandbox, etc.
  • Asset Oriented Use Cases: We know a lot of malicious activity we want to detect, but you presumably also want to know about activity touching specific data assets – payment card data, for example. Those are the UCs looking at events from DLP systems, File Integrity or Activity Monitoring tools, or even business applications.
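Here is that minimal catalog sketch (Python): one way a use case catalog could record the bucket alongside each use case’s driver, data sources and trigger. The fields and the two example entries are hypothetical, not a prescribed format.

```python
# Minimal sketch of a use case catalog entry that records its bucket,
# driver and required data sources. Fields and entries are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    name: str
    bucket: str                 # "control", "threat" or "asset"
    driver: str                 # regulation, TTP or data asset behind the use case
    required_sources: List[str] = field(default_factory=list)
    trigger: str = ""           # human-readable alerting condition

catalog = [
    UseCase(
        name="Unauthorized access attempts",
        bucket="control",
        driver="PCI DSS logging/monitoring requirements",
        required_sources=["authentication logs"],
        trigger="repeated failed logins to in-scope systems",
    ),
    UseCase(
        name="Executables launched from user profile folders",
        bucket="threat",
        driver="common commodity-malware TTP",
        required_sources=["endpoint process creation events"],
        trigger="process image path under a user profile directory",
    ),
]

# Quick sanity check: are all three buckets represented? (See the next paragraph.)
print({uc.bucket for uc in catalog})
```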

You should expect to have use cases from all those buckets; it doesn’t make sense to “select” one of them as the right one. If you are only implementing UCs from one of those buckets, it might be time to stop and think about whether you shouldn’t also be doing something related to the other two.

 We are having a lot of fun finding ways to “slice and dice” use cases and use case selection and development processes. As usual, another call to action: Let us know how you select (and classify) monitoring use cases!


Monday, November 2, 2015

From my Gartner Blog - We are hiring!

My team (Gartner for Technical Professionals) is hiring again. We are looking for an analyst to cover network security: firewalls, IDS, that kind of stuff. Here are the details of the job:

Research Director, Network Security Expert
POSITION ACCOUNTABILITIES AND SPECIFIC DUTIES
  • Create and maintain high quality, accurate, and in depth documents or architecture positions in information security, infrastructure security, network security, and/or related coverage areas;
  • Prepare for and respond to customer questions (inquiries/dialogues) during scheduled one hour sessions with accurate information and actionable advice, subject to capacity and demand;
  • Prepare and deliver analysis in the form of presentation(s) delivered at one or more of the company’s Catalyst conferences, Summit, Symposium, webinars, or other industry speaking events;
  • Participate in industry conferences and vendor briefings, as required to gather research and maintain a high level of knowledge and expertise;
  • Perform limited analyst consulting subject to availability and management approval;
  • Support business development for GTP by participating in sales support calls/visits subject to availability and management approval;
  • Contribute to research planning and development by participating in planning meetings, contributing to peer reviews, and research community meetings;
  • Other duties and roles as assigned that complement the primary analysis and research role.

And what kind of candidate are we looking for? Here it is:

  • At least 15 years of progressively senior technical IT security and architecture experience gained in an end user or vendor organization, consulting and/or research roles as a technical expert in two or more of the following topics;
    • Infrastructure security for networks, computing, and storage systems
    • Network security architecture and zoning
    • Firewalls
    • Intrusion prevention/detection systems
    • Software-defined data center/network security architecture
    • Network virtualization security
  • Excellent writing and research skills coupled with strong analytical skills
  • Excellent presentation skills, including large audiences (300+ people)
  • Bachelors degree in Computer Science, Electrical Engineering, or related area
  • Ability to take a position, based on facts, and support that position to clients, both external and internal, with clear analysis
  • Broad knowledge of IT security and risk management industry trends and emerging technologies
  • Ability to identify how changing technologies will impact technology choices in architectural decisions
  • Ability to travel approximately 20 to 25% of the time

 Do you think you would be a good fit for the job? Apply Here!


Thursday, October 29, 2015

From my Gartner Blog - Demonstrating Value of Security Analytics

An interesting aspect of covering Threat Monitoring and Detection is the chance to be exposed to every new vendor in this field, which currently means a lot of “analytics” stuff. As nice as it is to see all the new things being created in this space, it is also painful trying to understand the value these new products are supposed to provide. This is partially due to the old issue of excessive marketing and buzzword abuse, but it also stems from the techniques being used by some of these products: machine learning, advanced statistics and other data science methods. These techniques can certainly provide value for threat detection, but it’s very hard not only for clients (and analysts!) to understand, but also for vendors to explain, how their products are able to do that.

Anton Chuvakin covered some of that issue in a blog post a few months ago: The Rise of Non-Deterministic Security. A fun quote from that post:

“Today we are for realz! on the cusp of seeing some security tools that are based on non-deterministic logic (such as select types of machine learning) and thus are unable to ever explain their decisions to alert or block. Mind you, they cannot explain them not because their designers are sloppy, naïve or unethical, but because the tools are build on the methods and algorithms that inherently unexplainable [well, OK, a note for the data scientist set reading this: the overall logic may be explainable, but each individual decision is not].”

The problem goes beyond the lack of explanation for algorithmic decisions. It is also related to the complexity of the technology and even to protecting intellectual property. Let’s look at those three points:

  • Explaining algorithm decisions: Anton summarized this well in his blog post. The issue affects machine learning use not only in security but in many other fields. Actually, it goes beyond ML; it applies to algorithmic decisions in general. A few years ago someone managed to create models that could predict extremely well the decisions of a certain US Supreme Court judge. Looking at the model, it was clear that some decisions were extremely biased and ideological, though that judge was probably never aware that his behavior followed those rules. Extracting algorithms and models exposes biases and sometimes decisions that are far simpler than we would expect. Imagine, for example, if your ML-based security system told you to “block all PDF files where the PDF standard is lower than 2.0 and bigger than 15MB, as 99.999% of those cases were found to be malware”. Does that sound like the right way to find malware? The ML system doesn’t know what is “right” or “wrong”; it will simply find the combination of factors that best predicts a certain outcome. If you look at that combination it might not make sense from a causal point of view, but it does what it’s supposed to do: predict the outcome. (See the sketch after this list.)
  • Technology complexity: If the outcome of the ML system is already hard to explain, the technology itself can be much more complex. Many vendors opt to generalize the explanation of their systems as something like “proprietary advanced analytics and machine learning algorithms” not because they are trying to sell you snake oil, but because the average buyer would not understand the real explanation. This is one of the points where vendors could probably do a better job. After all, as Einstein (supposedly) said, “if you cannot explain something to a six-year-old, you don’t understand that subject well enough”.
  • Protecting Intellectual Property: Lastly, the ever-present fear of spilling the (magic) beans. In the security analytics case I think this is a combination of two factors. One is the fact that some products do stuff far simpler than the marketing says (naive Bayes has been around for years and it’s still extremely useful in the security analytics context). The other is that this field is right now exploding with new entrants (we have briefings with a new one every week), and the full Silicon Valley-esque style of competition is going on, fuelling the paranoia and cautious posture of the vendors.
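As an aside, here is a minimal sketch (Python with scikit-learn, entirely synthetic data) of the point in the first bullet: a model happily learns whatever feature combination predicts the label, and the “explanation” you can extract from it is a predictive rule, not a causal story. The features, thresholds and labels below are invented purely for illustration.

```python
# Minimal sketch (synthetic data): a model learns whatever combination of
# features predicts the label; the rule you extract from it is predictive,
# not a causal explanation. Features, thresholds and labels are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 5000
pdf_version = rng.choice([1.4, 1.7, 2.0], size=n)
size_mb = rng.uniform(0.1, 30.0, size=n)

# Invented "ground truth": in this synthetic set, old and large PDFs are malware.
is_malware = ((pdf_version < 2.0) & (size_mb > 15)).astype(int)

X = np.column_stack([pdf_version, size_mb])
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, is_malware)

# The extracted "explanation" is just the learned decision rule.
print(export_text(clf, feature_names=["pdf_version", "size_mb"]))
```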

If you think the second and last points are contradictory, you are right. Those two reasons would normally not affect the same vendor. Vendors with something actually valuable usually can’t explain what they are doing because of complexity, while vendors selling marginally valuable stuff are often concerned about protecting their IP – they know it would be very easy for a competitor to do the same.

With all those reasons combined, we end up in a situation where vendors just can’t explain the value of their solutions, and we can’t find out whether they are useful or not. So what can organizations do about it? I can suggest a few things:

  • Test, test, test. Proofs of Concept are extremely important in this area. The vendors are aware of that, and most of them (at least those that have something useful to offer) will push for it. Plan for a PoC or, better, a bake-off, with conditions as close to real life as possible.
  • Understand your requirements well. Why are you considering this tool? Do you understand what you are trying to achieve, or is it just a “let’s see what this stuff can do” thing? Alice (in Wonderland) once asked the Cheshire cat, “Would you tell me, please, which way I ought to go from here?”, to which it replies: “That depends a good deal on where you want to get to.”
  • Prepare yourself to talk about the subject. Data science is here to stay. Security professionals must learn at least the basics so they can distinguish the real stuff from snake oil. You’ll be surprised at vendors’ reactions when you challenge their claims or ask for more details.


Wednesday, October 28, 2015

From my Gartner Blog - Research on Security Monitoring Use Cases Coming Up

As Anton Chuvakin recently mentioned on his blog, we are starting some research on the work around security monitoring use cases: from the basic identification of candidates to prioritization, implementation, tuning and even retiring use cases.  It is a crucial component of security monitoring practices but there are not many places you can get good information or best practices about it.


I have seen very different approaches to that topic by different people and organizations. It is very common to see, for example, organizations relying on out-of-the-box content from their SIEM, but most of those cases are related to the “checkbox mentality” of managers – they just want to check the “we are using a SIEM” box. Others expand that a bit by enabling rules and content related to specific regulations, such as PCI DSS, to make the auditors happy. Of course, none of those are getting much value from their tools and monitoring teams.


There are also others that understand the need for customized content and go through the effort of creating their own use cases and related content, but end up building over-engineered processes, killing the agility and dynamism required by today’s constantly evolving threats. Those organizations are usually very thorough with processes and procedures. However, when the use case implementation process includes the same level of change management formality (and bureaucracy) as IT operations, it’s time to simplify.

(I find it hardly surprising that organizations that have fallen into that trap are now looking for things that “apparently” don’t require use case development work, such as UEBA tools. I say “apparently” because they soon find out that even if those tools don’t require something like developing SIEM rules, all the work related to alert triage and investigation processes, log (or other input) requirements identification and tuning still exists. Wanna guess what happens next? More cumbersome process development for all that, killing the usefulness of yet another layer of tools. Well, maybe the next cool stuff will be easier….)


Of course, there is also the opposite: the cowboys. Those chaotic environments where tools are implemented with no planning or oversight, in a very “just do it” approach. Those are the environments where the good stuff is usually created and maintained by a few heroes, only to die abandoned when they move to another job.

(and yes, these guys may also try to get the next gen of tools hoping that this time things won’t be as chaotic as the existing stuff…making all the same mistakes again and again)


And finally, there are those that do things right!! And interestingly enough, I’ve seen that happening with different approaches, but a few things can be found in all successful cases:

  • Good people: You can’t create good use cases without people who know how to do it. You may get some external help, but if you don’t have your own good resources, things will get stale and quickly lose value after the consultants are out the door.
  • Simple, but clear processes: Chaos can provide some help by pumping up the creative juices, but it’s very hard to move from good ideas to real (and fully deployed) use cases without processes. Optimization is a constant need too, and without processes there is always the tendency to leave things to a slow death while pursuing cool new shiny objects.
  • Top down and lateral support: The security team may have good people and processes to put together the use cases, but they are not an island. They will need continuous support to bring in new log sources, context data and the knowledge about the business and the environment required for implementation and optimization. They will need other people’s (IT ops, business applications specialists) time and commitment, and that’s only possible with top down support and empowerment.

You may ask about technology; having the best tools around certainly helps, but it’s interesting to see how many organizations achieve great results by putting together a few open source tools and custom scripts, while others fail miserably with the latest SIEM and UEBA technology in their hands. Security technologies are just like any other tool: they need someone who is prepared and knows how to use them.


So, we know some of the key success factors, but are there any others? For those starting now, what should they do? Is there an optimal way to establish processes, roles and responsibilities? What is the best way to identify candidate use cases? From those, which ones to implement? How to prioritize them? Lots of interesting questions! Stay tuned as we proceed to find those answers.

(and of course, if you have something interesting to say about that….let us know 😉 )


Tuesday, October 6, 2015

From my Gartner Blog - Security Analytics Tools

There’s no doubt “Security Analytics” is one of the hot buzzwords of the moment. Many organizations are looking for Security Analytics tools and expecting to get immediate value from them. However, as Anton said in a tweet, it’s still not magic: if you want value from those tools, you must have the right people to operate them and use the data coming from them. I was discussing this with Alexandre Sieira (@AlexandreSieira) earlier today and he said something great about that: these tools should not be known as “security analytics”, but as “security analytics support”.

It’s just like buying a fishing rod: you are not buying fish, or even “fishing”; you are just buying a tool used for fishing. Keep this in mind when you shop for “security analytics [support] tools”.


Tuesday, July 28, 2015

Some changes

So this is my first post here since I joined Gartner! However, it is a short one, just to let you know that from now on I'll be blogging about security on my own Gartner.com blog!

I'll set up an automatic IFTTT recipe to add links to my posts there, but if you want you can point your RSS readers there directly.

I hope to keep that space a little more active than this one has been; the good news is that my new role lets me see more things that are interesting (and worthwhile) to blog about, so you can expect to see more from me now.

Thanks to everybody who still sees this little blog as a valid source of infosec content. I hope you like my new home ;-)

Thursday, April 30, 2015

2015 RSA Conference impressions

After a few days settling back into the day-to-day routine and recovering from last week, I believe I can finally put together some thoughts about what I saw at RSA this year.

First, I'm happy with the results from my talk, "The Art of Thinking Security Clearly", about the effect of cognitive biases on information security. My research on the topic is evolving, and based on the feedback and how the content was received, I'll focus my work on some specific areas of the subject, especially risk assessments and user behavior change. I hope I can bring the results to another conference in the future. If it is at RSA, I just hope I don't get a Friday slot again; it's hard to see so many people interested in the subject unable to attend due to early departing flights.

Now, on the other stuff, I believe the key points I noticed were:

  • FireEye leading: interesting to see how many vendors are either comparing their products or services against FireEye or announcing integration with them. It clearly shows the name recognition that those guys were able to achieve. However...
  • Advanced malware detection is ripe for absorption: Let's face it, this capability fits perfectly as a feature of many other products, such as next-generation firewalls, IPS and UTM. In fact, many of those vendors are already building similar solutions into their platforms. Of course there is some secret sauce in FireEye, and they also managed to get to a comfortable position where a lot of people in the field see them as best of breed. But nothing prevents the others from catching up, and some recent independent tests have shown that they may not be as far ahead of the pack as it looks.
  • Cloud and Big Data are now forbidden buzzwords: Due to overuse during the past couple of years, everyone is now trying to avoid those terms. Even during the keynotes it was fun to see the speakers acting apologetic every time they had to use those words.
  • Analytics, analytics, machine learning, behavior analytics, analytics...: There must be a buzzword of the year. Analytics it is. All new products now are "analytics", and it's getting harder and harder to understand what they are actually doing and how they operate. Honestly, some vendor material, slogans, etc, looks exactly like stuff from Silicon Valley (the HBO show).
  • New way to lock in clients – Threat Intelligence: Threat Intelligence is also a strong buzzword this year. But the most interesting aspect of TI is seeing how many vendors are trying to use their TI infrastructure to lock customers into their products. The strategy is usually to provide a platform that is very good at being your main TI provider. However, ask the vendor about bringing in TI from other sources, or about integrating directly (and based on open standards) with your other tools, and you'll see some funny faces. It's not only "my TI feed is bigger than yours" anymore; it's also "my TI sharing cloud is better than yours".

And, what for me is the funniest thing to notice: the huge 'cognitive dissonance' from vendors who are simultaneously telling you to rely on their uber Threat Intelligence content AND that attacks are now so 'sophisticated' that everything is tailored, from C&C infrastructure to malware pieces and phishing messages. That's right, they are telling you to look for things others have seen so you can find that stuff that was built only for you ;-)

Monday, April 27, 2015

Slides from my RSA Conference session

My slides from the "Art of Thinking Security Clearly" session at RSA are now available for download. You can find them here.

Friday, February 27, 2015

Breach costs and impact

This week has seen a lot of interesting discussions around the real cost of breaches such as those at Target, Home Depot, etc., especially about how those companies are performing after those events. Check this post from Gunnar Peterson on the Securosis Blog.

The fact is that breaches are not putting companies out of business. In fact, they are not crippling the organizations in any way, as you can see from the evolution of their stock prices.

So what? Does it mean that security is irrelevant? Unnecessary? No, it means that the impacts from those incidents have not been as big as many in the field have forecast. Nevertheless, they are not small change. There are many other bad things that happen to companies that don't put them out of business but affect their bottom line. The Target breach was very material (that's the key word I think we need to keep in mind) and it made its way into their financial results report:

During fourth quarter 2013, Target experienced a data breach in which an intruder gained unauthorized access to its network and stole certain payment card and other guest information. The Company incurred breach-related expenses of $4 million in fourth quarter 2014 and full-year net expense of $145 million, which reflects $191 million of gross expense partially offset by the recognition of a $46 million insurance receivable. Fourth quarter and full-year 2013 net expense related to the data breach was $17 million, reflecting $61 million of gross expense partially offset by the recognition of a $44 million insurance receivable.

Hundreds of millions of breach-related expenses. That is material enough for them to be mentioned in the report, even if it didn't put them out of business. If you were the executive in charge there you would probably look at how much you were spending on security in comparison to that number.

Security investments can be justified by reasonable expectations about breach costs. No need to paint an unrealistic scenario for that. I'm certain that most CISOs would be happy with a budget that was just a small share of those costs. No need to exaggerate on the doomsday scenario.

Friday, February 6, 2015

Risk and Impact

As much as I believe that a risk-based approach to cybersecurity is the way to go, I still feel a chill down my spine when I see the results of some risk assessments. I believe we are getting increasingly better at the overall estimation of the likelihood of an event. The impact side of the equation, however, quite often looks way off, and the results of the exercise end up being a nice piece of wishful thinking.

Risk assessments are usually performed on limited scopes, such as specific applications, projects or technology environments. The impact assessment for those usually limits the impact to losses related to that scope and the data flowing through or stored in that environment. The most conscientious assessors will also consider indirect losses like reputation impact (Secondary Loss Factors in FAIR). Still, I have a strong feeling (in fact, I'm basing this whole point on anecdotal evidence) that those assessments grossly underestimate the interconnectivity and cross-exposure that currently exist between technology environments.

If we look at recent high-impact breaches, such as what happened with HBGary, Target and Sony Pictures, the initial compromise is usually related to areas or systems considered of low business value or risk. From a low-importance Content Management System to HVAC systems, the list of good examples to illustrate the point keeps growing. Nevertheless, I keep wondering what would have happened (or did happen) if those systems were subject to risk assessments by the average risk assessor, using the most common methodologies. I wouldn't be surprised to see a lot of green or 'Low' labels in the final reports.

My point is that risk assessments are vastly underestimating the interconnectivity aspects of today's networks and technology environments. From obvious interconnection aspects to more subtle cases of administrative password reuse, the fact is that seeing low business value assets compromised as a way to reach more interesting targets shouldn't be an unexpected story or a 'black swan' to the victims. However, it seems that due to the way risk assessments are being conducted, we are doomed to see it happening over and over again. We need to fix how those risk assessments are being done.

The solution involves many aspects. First, some organizations are still using risk assessment methodologies that don't support or cannot incorporate more refined information about impact. Some of those just use a single number for the impact, without consideration of ranges or of the fact that the distribution of potential impact values won't necessarily be uniform. Even when the full impact of a breach is considered, including the worst-case scenario, it's still important to understand that potential impact values have likelihoods themselves: certain values or ranges of values are more likely than others. When methodologies consider only an average or a worst-case scenario, they ignore very important information that should be used to properly reflect the resulting risk. Picture it as the difference between seeing the potential impact as a single dot on a chart and seeing it as a curve (a bell curve, for example).
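Here is a minimal sketch of that "curve versus dot" idea in Python. The lognormal shape, its sigma and the point estimate below are assumptions chosen purely for illustration, not a recommendation for any specific methodology.

```python
# Minimal sketch: impact as a distribution (a curve) instead of a single dot.
# The lognormal shape, its sigma and the point estimate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

point_estimate = 500_000   # a hypothetical single-number impact estimate ($)

# Distribution view: most incidents cost far less than the worst case,
# but a long tail of large losses exists.
losses = rng.lognormal(mean=np.log(point_estimate), sigma=1.0, size=100_000)

for p in (50, 90, 99):
    print(f"P{p} loss: ${np.percentile(losses, p):,.0f}")
print(f"Mean loss: ${losses.mean():,.0f}")
```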

The second important aspect is the people behind the assessment. Risk assessors are often blind to the worst-case scenarios and the technology components that make them possible. To make things more complicated, some assessors can see those scenarios but are not capable of understanding the subtle components that affect the likelihood of each one. It is striking how often risk assessors are unaware of how a breach or intrusion actually happens. It would be very valuable for those professionals to learn these aspects by performing or watching penetration tests and red team exercises. The difference in the understanding of how things can escalate between those with pentesting experience and those without it is remarkable.

Risk-based security is the way to do things; I'm not trying to suggest something different here. However, the most important part of the process, the risk assessment, has to be fixed so we won't keep seeing foreseeable events treated as 'black swans'.

Friday, January 23, 2015

New book

So this week has been full of good news for me. I've been working with an amazing group of professionals on an InfoSec book (in Portuguese) for which I wrote the Risk Management chapter. Well, the book has just been released this month, including a Kindle edition that you can get from Amazon.

http://amzn.com/B00S8CQJ20

By the way... the book is composed of a series of small chapters on different security aspects. I was lucky enough to get my chapter as the second one in the book, and the book sample available on Amazon's website happens to include the whole chapter! Feel free to go there if you want to read it. Reminder: the book is entirely in Portuguese.

Tuesday, January 20, 2015

The Art of Thinking Security Clearly - RSA Conference 2015

My work with behavioral economics and security is becoming even more interesting! I've just gotten confirmation that my session at RSA Conference this year has been accepted:


HUM-F03

The Art of Thinking Security Clearly

Augusto Barros, CIBC, Security Architect

Friday, Apr 24, 11:20 AM

West|2022

50 minutes

A cognitive bias is a deviation from thinking or acting rationally due to unconscious inferences about other people and situations. Information Security is full of situations where cognitive biases affect our judgement. This session will cover the most common cognitive biases, how they relate to information security and what can be done to avoid or reduce their impact on our actions and decisions.

Human Element



The longer session will allow me to go deeper into some cognitive biases that I wasn't able to cover during the BSidesTO talk. I'm excited about this as it's the first time I'll be speaking at RSA. Hope to see you all there, I know it's that Friday morning when everyone is either destroyed from partying the whole week or flying back home, but if you're still planning to attend sessions that day, please consider this one for your schedule :-)

Thursday, January 15, 2015

Groups, Security and Behavior Economics

I'm currently reading a book by behavioral economics authors Cass Sunstein and Reid Hastie, Wiser: Getting Beyond Groupthink to Make Groups Smarter. Cass Sunstein is one of the authors of "Nudge", which is seen by many as a seminal work on the idea of "Choice Architecture". All this is related to my current favorite research topic, behavioral economics in information security.

Wiser is interesting for us because a lot of decisions and processes in security involve groups. There are groups working on risk assessments, deciding about security controls and measures, and also doing incident response. The ways groups fail to behave in an optimal manner, and how to correct that, are thus important to infosec. A good example of this just came up in a recent Twitter exchange.



Richard Bejtlich was talking about the use of a "red team" to mitigate the risk of groupthink during an attribution exercise. This is a perfect example of techniques to improve group work being used in security-related processes. He followed up on the Twitter exchange with a nice post on his blog.

(I understand Zanero's point from a logical point of view; the fact that you can't prove A doesn't necessarily mean that B is true if the universe of possibilities is bigger than A+B. However, I don't think that's the objective of the red team in that context. The red team is there to reduce the tendency of the group to rapidly converge on a decision without properly considering the alternatives. This is a decision-making aid, not a logical argument.)