Monday, March 30, 2009

Blind spots

I was reading this post from Richard Bejtlich today and found this quote from the Verizon Security Blog:

"With the exception of new customers who have engaged our Incident Response team specifically in response to a Conficker infection, Verizon Business customers have reported only isolated or anecdotal Conficker infections with little or no broad impact on operations. A very large proportion of systems we have studied, which were infected with Conficker in enterprises, were "unknown or unmanaged" devices. Infected systems were not part of those enterprise's configuration, maintenance, or patch processes. In one study a large proportion of infected machines were simply discarded because a current user of the machines did not exist. This corroborates data from our DBIR which showed that a significant majority of large impact data breaches also involved "unknown, unknown" network, systems, or data."

Last year I wrote an article for the ISSA Journal, "Security Blind Spots". It was exactly about "unknown or unmanaged" stuff in the network. Windows boxes that can be infected by worms like Conficker are an easy example of blind spots.

Last week I was talking to a vendor who offers one of his solutions in an "appliance version". The other option, the software version, runs on a Windows Server. When I asked for more information about the OS on the appliance, I found that what they were calling "an appliance" was nothing more than a regular Windows box. There was almost no hardening at all, with the additional drawback that Microsoft patches would only be distributed together with the vendor's quarterly updates. Now, if you are in charge of a patch management process, would you prefer to deal with an additional regular server (one that integrates perfectly into your patch management systems) or with a "black box" that will become an unpatched Windows machine nobody is aware is there? This is becoming common for Linux and Windows based "appliances".
Beware of "lower support cost" options like that: if you already have processes and tools in place to deal with those OSes in your network, such appliances may be more of a problem than a solution.

Intrusion detection - not only network IDS

Sometimes we spend so much time discussing network-based IDS that we end up not looking at other interesting places for intrusion signs. There is a very nice post on the SANS ISC Diary today about an organization that had one of its border routers compromised and detected it through a periodic configuration file check. I'll reproduce the whole post here, as it is very valuable to illustrate not only the need to look for problems in more than one place but also how you can improve your response process by being prepared for those situations:
"ISC reader Nick contacted us to share information about an Internet router at his workplace that got hacked this weekend. There's several nuggets to learn from in this story, so here goes.

3/28/2009 8:34:02 Authen OK test
3/28/2009 8:34:04 test Default Group where <cr>
3/28/2009 8:34:05 test Default Group who <cr>
3/28/2009 8:34:13 test Default Group who <cr>
3/28/2009 8:34:19 test Default Group show version <cr>
3/28/2009 8:34:23 test Default Group who <cr>

A successful login of a user "test" is definitely not a welcome sight in the TACACS authentication log of an Internet router. And the commands that follow are a clear indication that something sinister is going on. We know since Cliff Stoll's experience that somebody who needs to constantly look over his shoulder while connected (issuing the "who" command) isn't up to any good. At this time though, Nick's firm didn't know this yet ... And the command log continues:

3/28/2009 8:38:38 test Default Group show configuration <cr>
3/28/2009 8:38:59 test Default Group show interfaces <cr>
3/28/2009 8:39:48 test Default Group configure terminal <cr>
3/28/2009 8:39:50 test Default Group interface Tunnel 128 <cr>
3/28/2009 8:39:57 test Default Group show interfaces <cr>
3/28/2009 8:41:48 test Default Group configure terminal <cr>
3/28/2009 8:41:49 test Default Group access-list 20 permit <cr>
3/28/2009 8:41:50 test Default Group ip nat pool new [removed] netmask <cr>
3/28/2009 8:41:51 test Default Group ip nat inside source list 20 pool new overload <cr>
3/28/2009 8:41:52 test Default Group ip nat inside source static tcp 113 [removed] 113 extendable
3/28/2009 8:41:52 test Default Group interface Serial 1/0 <cr>
3/28/2009 8:41:53 test Default Group ip nat outside <cr>
3/28/2009 8:41:53 test Default Group interface Tunnel 128 <cr>
3/28/2009 8:41:53 test Default Group ip nat inside <cr>
3/28/2009 8:41:54 test Default Group ip address <cr>
3/28/2009 8:41:54 test Default Group ip tcp adjust-mss 1400 <cr>
3/28/2009 8:41:55 test Default Group tunnel source Serial 1/0 <cr>
3/28/2009 8:41:55 test Default Group tunnel destination [removed] <cr>

Whoa! The bad guy is not wasting any time. Barely five minutes after connecting, and he has configured a network tunnel back to his home base.

3/28/2009 8:47:23 test Default Group configure terminal <cr>
3/28/2009 8:47:26 test Default Group line console 0 <cr>
3/28/2009 8:47:32 test Default Group password *****
3/28/2009 8:47:45 test Default Group who <cr>
3/28/2009 8:47:55 test Default Group configure terminal <cr>
3/28/2009 8:48:01 test Default Group line vty 0 1052 <cr>
3/28/2009 8:48:06 test Default Group password *****
3/28/2009 8:49:12 test Default Group no transport input <cr>
3/28/2009 8:49:26 test Default Group transport input ssh <cr>

As a next step, the bad guy changes the locally configured passwords. This doesn't make much of a difference, since these accounts are only used when the central TACACS database is not reachable. While the hacker shows quite some familiarity with setting up an IP tunnel on a Cisco router, he doesn't seem to fully grasp the significance of the TACACS entries in the configuration: since TACACS includes accounting logs, all his commands get recorded.

At 08:52, the bad guy logs off, and Nick's firm is still completely unaware that their perimeter router has just been subverted. But not for long: at 09:00, their "RANCID" script kicks in, pulls the current configuration off the router, compares it with the "last known good" configuration, and immediately e-mails the changes to the network admin. Luckily, the admin understands the significance of what he sees right away, and alerts the incident response team. A while later, the "test" user is removed, the config is cleaned up again, and the bad guy is locked out.

Nick's own "lessons learned" that he shared with us are:

- Disable outside management of Internet routers unless 100% required
- Log!! Log!! Log!!
- Review logs, review logs, review logs
- Don't use easy usernames/passwords
- Talk to people, this includes ISPs. Get the word out of wrongdoing
- Don't hack back... (we didn't, but people sometimes feel the need to retaliate). This is against the law
- Keep router firmware upgraded

To which we at SANS ISC would like to add our own:

- What saved the day here is the use of "RANCID", which acted like a trip wire. Something the bad guy clearly didn't expect
- Having a privileged user named "test" with a guessable password is of course unwise. But mistakes happen all the time - that's why we security folks all strive to build our defenses in a way that one single mistake isn't enough to sink the ship. Defense in depth works!

Thanks to Nick for sharing the logs and information about the attack!"
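The RANCID check that caught this intrusion is essentially a configuration tripwire: pull the running config, diff it against the last-known-good copy, and alert on any change. Here is a minimal sketch of that idea in Python; the device fetch is simulated with strings and the config snippets are invented for illustration (a real deployment would pull the config over SSH and e-mail the diff, as RANCID does):

```python
import difflib


def config_diff(known_good: str, current: str) -> str:
    """Return a unified diff between the last-known-good config and the
    config just pulled from the device; an empty string means no change."""
    diff = difflib.unified_diff(
        known_good.splitlines(keepends=True),
        current.splitlines(keepends=True),
        fromfile="last-known-good",
        tofile="current",
    )
    return "".join(diff)


# Hypothetical configs: the attacker added a tunnel interface.
baseline = "interface Serial1/0\n ip address 198.51.100.1\n"
pulled = baseline + "interface Tunnel128\n ip nat inside\n"

changes = config_diff(baseline, pulled)
if changes:
    # In a real deployment this would e-mail the network admin -
    # exactly the "trip wire" role RANCID played in the story above.
    print(changes)
```

Run periodically (cron every few minutes is enough), this kind of check turns the device's own configuration into a detection surface, independent of any network IDS.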

Friday, March 20, 2009

Patching the cloud - Azure failure

Hoff posted some nice comments on the Azure failure, regarding patching the infrastructure used by cloud services. An interesting conclusion is that future patching mechanisms will have to be integrated with VMotion-like features, so that when you apply an OS patch to the infrastructure it can dynamically deal with it without disrupting the service. It would work something like this:

  1. Move the virtualized hosts from one server to the others

  2. Patch the now-idle server

  3. Check that it comes back properly

  4. Gradually put the load back on that server and check for any impact from the patch

  5. If everything is OK, go back to step #1 for the next server - repeat until all servers are patched
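The steps above boil down to a rolling loop over the host pool. A minimal sketch follows; the callback names (migrate, patch, healthy, ramp_load) are hypothetical stand-ins for whatever the virtualization layer exposes, not a real API:

```python
def rolling_patch(servers, migrate, patch, healthy, ramp_load):
    """Patch each host in turn without taking the service down.

    The four callbacks are assumed hooks into the virtualization layer:
    migrate(host) evacuates guests, patch(host) applies the OS patch,
    healthy(host) is a post-patch check, ramp_load(host) restores load.
    """
    patched = []
    for host in servers:
        migrate(host)            # 1. move guests to the remaining hosts
        patch(host)              # 2. patch the now-idle host
        if not healthy(host):    # 3. verify it comes back properly
            raise RuntimeError(f"{host} failed post-patch check")
        ramp_load(host)          # 4. gradually put load back on it
        patched.append(host)     # 5. continue with the next host
    return patched


# Toy run: record the order of operations for two hosts.
calls = []
result = rolling_patch(
    ["host-a", "host-b"],
    migrate=lambda h: calls.append(("migrate", h)),
    patch=lambda h: calls.append(("patch", h)),
    healthy=lambda h: True,
    ramp_load=lambda h: calls.append(("ramp", h)),
)
```

The key property is that only one host is ever out of the pool at a time; a failed health check stops the rollout before the bad patch spreads.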
I wonder if the guys from Microsoft Update are talking with the Azure team - a big team integration challenge ahead, and a business opportunity for patch management companies.

Tuesday, March 17, 2009

Cognitive Dissonance? I must disagree

I like the spin that Pete Lindstrom gives to some classic security discussions, but I think he is completely missing the point here:

"If finding vulnerabilities makes software more secure, why do we assert that software with the highest vulnerability count is less secure (than, e.g., a competitor)?"

If we agreed with him we could also say that cities where more criminals are caught and sent to jail are more secure than those that catch fewer criminals. I could then argue that in order to become more secure a city should stop putting criminals in jail.

There are two separate problems. One is to avoid creating new criminals (or avoid adding vulnerabilities to code). The other is to deal with those that are already there (finding bugs). Dealing with the first problem is the best approach, as you will spend less on the second, but you cannot just let the current criminals keep "working" until they "retire".

With crime we can measure how effective our prevention measures are without necessarily working to put the current criminals in jail; you just need to keep numbers on crime occurrences. But for vulnerabilities we need to discover them in order to know whether the developer is doing a good job of avoiding them. We can accept that an unknown vulnerability carries no risk, but I don't think it's a good idea to wait until people with malicious intent start finding holes in the software I use to learn whether that developer is good at writing secure code or not. At that point, it's too late.

Thursday, March 12, 2009

Attack Vector Risk Management

I read this post from Michael Dahn and I really liked what he called "Attack Vector Risk Management". Today I saw that the guys from Sensepost also noted the post for the same reasons, and even showed some of their work under the same concept, calling it "Corporate Threat Modeling".

Over the last few months my main interest has been enterprise security planning. How should an organization define how to spend its security resources, what should be done and in what order? Risk Management is usually the answer (please DON'T SAY COMPLIANCE!), but IMHO the risk assessment methodologies out there just don't scale to a point where they can be used to drive security decisions at an enterprise level. You end up using so many "educated guesses" that the result is just not intellectually honest; everything is so biased towards what people already believe their major risks are that a simple brainstorm would probably generate the same results. Have you ever seen the results of an enterprise-level RA surprise anyone (except for dumb as hell CISOs!)? I haven't.

I don't think the Sensepost approach scales well either, but it seems better than regular RA to me. I believe we can come to something "threat oriented" that can generate a better understanding of an organization's security requirements and help the development of a security strategy. After that we will finally be able to bury the ROI/ROSI stuff and stop pretending that those beautiful tables with numbers, "high/medium/low"s or "green/yellow/red"s are anything more than our minds tricking us into believing that there is a mathematical explanation behind our intuitive perception.

Until then, you can read "Blink", by Malcolm Gladwell (yes, the guy behind the current best seller, "Outliers"), to see that simply trusting our intuitive side is not that bad, although I just can't see a CISO telling an auditor that his security strategy is "intuition based" :-)


Web Application Security, what about your logs?

As usual, another very nice post from Mike Rothman, this time about application security. He mentions the BSIMM model, which I mentioned here too in the context of measuring the outcome of security measures.

Mike also mentioned, again, the need to REACT FASTER (have I said how nice his "Pragmatic CSO" stuff is?) and linked it to the application security world. As I'm working a lot with log management these days, I've noticed that I'm not seeing people talk about what to do with their web and application server logs. A lot of attacks against web applications can be identified in the logs, and yet we don't see people collecting and analyzing them. Is there anybody out there with good results on "web log" correlation? I'd like to see how evolved this is and how it can help as an early warning system for attacks against web applications.
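As a starting point for that kind of analysis, even a crude signature scan over access log lines will surface obvious attack attempts. A minimal sketch follows; the log lines and patterns are purely illustrative (real correlation would use far richer signatures and combine multiple signals):

```python
import re

# Illustrative signatures only - a real system would use a much
# larger, maintained rule set and normalization of the request.
SUSPICIOUS = {
    "sql injection": re.compile(r"(union\s+select|'\s*or\s+1=1)", re.I),
    "dir traversal": re.compile(r"\.\./\.\./"),
    "xss attempt":   re.compile(r"<script", re.I),
}


def scan_access_log(lines):
    """Yield (line_no, label) for each request matching a bad pattern."""
    for n, line in enumerate(lines, 1):
        for label, pattern in SUSPICIOUS.items():
            if pattern.search(line):
                yield n, label


# Hypothetical access log entries:
log = [
    '10.0.0.1 - - "GET /index.php?id=1 HTTP/1.1" 200',
    '10.0.0.2 - - "GET /index.php?id=1 UNION SELECT user,pass HTTP/1.1" 200',
    '10.0.0.3 - - "GET /../../etc/passwd HTTP/1.1" 404',
]
hits = list(scan_access_log(log))
```

Feeding hits like these into the same correlation engine that watches firewall and IDS events is exactly the "early warning" use the post asks about.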

Saturday, March 7, 2009

Pseudo-random algorithms used by malware

Back in 2007 I noticed (together with Fucs and Victor) that botnet creators had to solve a very important issue to keep controlling the infected computers: how to update the location of the controller? Until then they were including the controller location inside the bot code, so it was easy to identify it and block/take it down. Updates could be used to point existing bots to a new controller, but new infections wouldn't be able to find the original controller to get the updates. We predicted (and we really nailed it!) that pseudo-random algorithms would be the natural choice to avoid including URLs (or other location-type info) in the malware code.

The difference between our original work and what is happening today is that most botnet authors are implementing that to generate DNS names. The problem (for them) is that this creates the need to register the names that will be generated. There are usually costs and a process to be followed to register new domain names, so I really don't think they are being very effective. We envisioned that they would use one (or some) of those new applications like P2P protocols, Skype, and general Web 2.0 stuff with search capabilities to drop information from the controller to the bots anonymously on the web and just let them search for it. We presented a proof of concept based on Skype at that time. We went far enough to say that they could even eliminate the need for a centralized command and control host by directly dropping the commands to the bots instead of the C&C location. Digital signatures would be used to reduce the risk of someone hijacking their botnet.

Since then I've seen a lot of new possibilities to implement those concepts. Twitter, Wikipedia, Facebook - there are lots of new applications that can be used as reliable communication channels between the controller and his bots. There's no doubt that botnet creators are skilled programmers, but I think they still lack some creativity on the design side.

As we said in our 2007 preso, things are not half as nasty as they could be. I can see that in a very short time we may see botnets that have their C&C entirely "cloud based". Yet, we haven't evolved at all in our detection capabilities. How will we react to new threats if they get a boost on design? We need to start thinking about how to design a next-generation, world-wide distributed monitoring solution, an "in the cloud behaviour anomaly intrusion detection system". Is there anybody out there working on something like this?
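To make the domain-generation idea above concrete, here is a toy sketch of the technique. It is purely illustrative, not any real family's algorithm: both bot and botmaster derive the same candidate rendezvous names from the current date, so no controller address ever has to ship inside the bot binary.

```python
import hashlib
from datetime import date


def dga(day: date, count: int = 5, tld: str = ".com"):
    """Toy domain generation algorithm: derive today's candidate
    rendezvous domains deterministically from the date. The botmaster
    only needs to register one of the candidates; every bot computes
    the same list and tries them in order."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        # Map hex digits to letters to get a plausible-looking name.
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(name + tld)
    return domains
```

The detection angle follows from the same determinism: defenders who recover the algorithm can precompute tomorrow's domains and sinkhole them, which is exactly why registration-based schemes are weaker than the search-and-drop channels described above.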

Friday, March 6, 2009

CAG, BSIMM and field-assessed security

One of the best blog posts I read last week was "Consensus Audit Guidelines are still controls" from Richard Bejtlich. I really like that he looked at a set of suggestions (in this case, the CAG) and pointed out that they are just controls; there is nothing about measuring the outputs. That goes directly to the heart of the metrics issue: it's still very hard to measure success in information security. For instance, each control from the CAG should also have a related metric to determine how effective it is.

That was based on the CAG. Today I also found out about the "Building Security In Maturity Model", from Gary McGraw, Brian Chess and Sammy Migues. That's another very nice set of controls (OK, "activities", but we can still see them as controls, i.e., something that needs to be deployed in order to mitigate a risk), produced through a very nice approach (putting together the practices from organizations that are doing well). When I was reading the model description, the post from Bejtlich immediately came to my mind: here was another nice set of controls lacking a good set of measurements to ensure it is actually producing the expected result.

Whenever a set of controls is written, be it "guidelines", "standards", or a "model", it should point to which problem it tries to solve and how one should check whether that's actually happening. That would help us develop a clear understanding of what works and what does not work in information security. Besides the fact that most of those frameworks/control sets are developed with different scopes in mind, there's really no way to measure which ones are more effective. Honestly, except for compliance requirements, how would you answer an auditor who asks "why did you choose this specific control framework to develop your security program?"?

Tuesday, March 3, 2009

Encryption and the 5th amendment

This is a very interesting twist on the interpretation of handing over encryption keys under the 5th amendment.

I had the opportunity to work on a case a long time ago where a suspect in an intentional data leak refused to provide the PGP passphrase to an encrypted volume on his computer. I don't know how the case evolved in the Brazilian courts, but it was very similar to this one. In that situation, the guy simply provided a wrong password and said he didn't know why it wasn't working. Quite clever.

Some years later I asked a "highly respected computer forensics specialist", during a roundtable at a security conference, about his thoughts on the situation. He promptly said, with an "I'm smarter than that" face, that they probably wouldn't need the passphrase anyway, as forensic specialists could break the crypto without it. Ha! Five years later, here is the same case, in a more "developed country", going through the same issue. Does that guy still think the passphrase is not necessary?

There is also a high-profile corruption case being handled by the Brazilian federal police where a set of encrypted hard drives was found. They didn't manage to break the encryption, and some journalists promptly started to accuse the police of intentionally holding back the investigation. OK, they sent the hard drives to the "magical US FBI" (some people really believe in those things from the CSI TV shows) in order to get the data. Results? None.

It's funny how people believe that encryption really works, but when law enforcement cannot break it, they immediately think there is something behind it. Conspiracy theories are much more fun than just admitting that, sometimes, encryption does its job :-)