Thursday, February 25, 2010

MitB attacks haven't reached their full potential yet

I'm surprised that most MitB attacks are still just stealing credentials instead of changing transaction contents on the fly. I can see that credentials have an intrinsic value on the "black market", but the attack model of stealing credentials and then using them to log into the victim's account to perform transactions seems too complex to me. Once in the browser, the malware can simply change the transaction being performed by the victim, in a way that all the traces (such as IP addresses) would point to the victim's computer and not the attacker's. There's also no need to transfer the stolen data from one place to another, which further reduces the places where the attacker leaves tracks. I can see two reasons why they are still not doing that:

  • The malware developers are not closely connected to the "money criminals" - they are building software to be used by different "clients", and the best way to keep it portable is to sell credentials only.

  • Stealing credentials just works, the same credentials can be used multiple times, and people simply understand the model.
If either of those conditions changes, more sophisticated versions of the attack will probably start to be detected too. For now, it is important to note that fighting the "stolen credentials" threat doesn't necessarily mean you are also solving the MitB threat. For that, transaction authentication is necessary.
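The idea behind transaction authentication is that the user confirms the actual transaction details through something the browser cannot tamper with, so a MitB rewrite becomes detectable. A minimal sketch, assuming a shared secret with a trusted token device that displays the transaction details to the user (all names here are hypothetical, not any specific product's scheme):

```python
import hashlib
import hmac


def transaction_mac(secret: bytes, account: str, amount: str) -> str:
    """MAC over the transaction details the user actually confirmed.

    In a real deployment this would be computed on a trusted device
    (e.g. a hardware token) that shows the account and amount to the user.
    """
    msg = f"{account}|{amount}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()


def server_verifies(secret: bytes, account: str, amount: str, mac: str) -> bool:
    # Recompute the MAC over what the server actually received; if the
    # malware silently rewrote the destination account in the browser,
    # the MAC no longer matches.
    expected = transaction_mac(secret, account, amount)
    return hmac.compare_digest(expected, mac)


secret = b"shared-with-token-device"
mac = transaction_mac(secret, "DE89-3704", "100.00")  # details the user confirmed
print(server_verifies(secret, "DE89-3704", "100.00", mac))  # → True
print(server_verifies(secret, "XX00-6666", "100.00", mac))  # → False (tampered)
```

The point is that the MAC binds the user's confirmation to the transaction contents, not just to the login session, which is why stolen credentials alone are not enough.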

Very nice tool for pentests

I don't hide it from anybody: when doing pentests, my favorite approach was to simply browse information in open shares until I could find some user credentials there (yes, in big organizations, they are always there: scripts, source code, ini files...). With those in hand, I would see what else I was able to access, and repeat the process until the whole network was owned. No big hack or exploit here, just basic "low hanging fruit detection".

I just noticed a tool that makes that process thousands of times easier: keimpx. The description, from Darknet:

keimpx is an open source tool, released under a modified version of Apache License 1.1. It can be used to quickly check for the usefulness of credentials across a network over SMB. Credentials can be:

  • Combination of user / plain-text password.

  • Combination of user / NTLM hash.

  • Combination of user / NTLM logon session token.
If any valid credentials have been discovered across the network after its attack phase, the user is asked to choose which host to connect to and which valid credentials to use, and is then prompted with an interactive SMB shell where the user can:

  • Spawn an interactive command prompt.

  • Navigate through the remote SMB shares: list, upload, download files, create, remove files, etc.

  • Deploy and undeploy his own service, for instance, a backdoor listening on a TCP port for incoming connections.

  • List users details, domains and password policy.

Wednesday, February 24, 2010

Sure, it is THAT easy!

Two posts in a day... I'm probably sick or something like that :-)

I was reading an interesting article by Bill Brenner on CSO Online, "Five Security Missteps Made in the Name of Compliance". Although I don't disagree with what is listed as missteps (in fact I think they are quite correct), something in the last paragraph caught my eye:

"The best advice against all these missteps, experts said, is to simply slow down and take careful stock of where the company's greatest risks are. From there, companies need to take careful study of the security tools available to them and figure out before buying them if compatibility with the rest of the network will be an issue."

Sure, it is THAT easy! Honestly, he just listed some of the hardest things to do in security. OK, he is not saying that it's easy, but c'mon! Can you really say that in your business environment you have the option to "simply slow down"? I would love to, but that's not always possible. The same goes for checking "where the company's greatest risks are". This one is huge, and I must say that my perception of organization-wide risk assessments is ETI: Expensive, Time consuming and Ineffective. So you'll have an idea of where those big risks are coming from, not a "careful stock" of them. There's too much uncertainty out there, and it's better to live knowing that there's a lot you don't know than to die trying to figure it all out.

You can conduct careful studies of the tools available, but the "corporate truth" is that on many occasions you will simply work to deploy something that someone else bought, or you will have to deal with things that are not best of breed because they were part of a bigger deal/suite or simply cheaper. Finally, on checking compatibility with your network before buying: you'll only succeed 100% at that if you run a PoC in your entire environment... I mean, almost never. You'll have to deal with surprises during the implementation. Yes, you can avoid buying Unix stuff to run on Windows boxes, but in big organizations the number of combinations of hardware, OS, middleware, applications AND bizarre settings is incredibly high. Be prepared to deal with those surprises.

The point is, Bill is right about the missteps, but I think he is too optimistic about how to prevent them. Some of them are simply the price we pay for working in this crazy field. Looking back they will look like mistakes, but most of the time we simply cannot do anything better than that. As I like to say, "it's acceptable to do stupid things, as long as it is not for stupid reasons".

Tuesday, February 23, 2010

Log management implementation details

OK, I'm trying to get out of a long hiatus of producing content by putting together a presentation about log management: the devil is in the details. I have been working on log management projects for some years now, and I've managed to assemble a nice list of the small issues you find when working on those projects, the kind that are normally responsible for 80% of the headaches. As I say in the presentation, these are the things that the vendors simply don't know how to solve, so they never talk about them :-)

Some of the things I'm including there:

  • Windows log collection: the options, the issues with them

  • Credentials (user IDs) management when doing file transfers and connections to DBs

  • Systems inventory (who are my log sources?)

  • Privileges needed to collect logs (DBA rights to get logs???)

  • Purging logs from the sources (who's gonna do it?)

  • and some other stuff
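The systems inventory item above ("who are my log sources?") boils down to comparing the sources you expect to be logging against the sources actually sending events, since a source that goes quiet is easy to miss. A minimal sketch of that check, assuming you can get a last-seen timestamp per source from your collector (all names here are hypothetical):

```python
from datetime import datetime, timedelta


def silent_sources(expected, last_seen, max_age=timedelta(hours=24), now=None):
    """Return expected log sources that are unknown or have gone quiet.

    expected  -- iterable of source names from the asset inventory
    last_seen -- dict mapping source name -> datetime of its last event
    """
    now = now or datetime.utcnow()
    silent = []
    for source in expected:
        seen = last_seen.get(source)
        # Flag sources the collector has never seen, or that have not
        # sent anything within the acceptable window.
        if seen is None or now - seen > max_age:
            silent.append(source)
    return silent


now = datetime(2010, 2, 23, 12, 0)
inventory = ["dc01", "web01", "db01"]
events = {
    "dc01": now - timedelta(hours=1),   # healthy
    "web01": now - timedelta(days=3),   # quiet for 3 days
}                                        # db01: never seen at all
print(silent_sources(inventory, events, now=now))  # → ['web01', 'db01']
```

The hard part in practice is the `expected` list itself: keeping the inventory of log sources accurate is exactly one of those details vendors don't talk about.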
So, if you have interesting experience implementing log management systems, please let me know about those "details" you found during the process that caused you problems. It will be interesting to talk about the subject without going into the old "performance / parsing / reporting" discussions; most of the vendors have figured out how to solve those problems. I want to talk about the small things that hurt and still haven't been solved.

I hope to get this ready for a TASK meeting or something like that. If I get enough feedback and input, it may grow into a SecTor or similar submission.