Friday, October 31, 2008

Virtualization? Give me a better OS instead!

Do we really need to go that deep into virtualization? I may sound dumb trying to argue against something that everybody is embracing, but that's usually what I like to do with hype :-)
OK, you'll probably throw a lot of virtualization's advantages at me, and I agree that most of them are real. I was reading that some companies have been able to increase their hardware processor utilization from 10% to 60% through virtualization. There is also all that high-availability stuff from VMotion and other new products being released every day. OK, but...
Let's go back some years and see how we ended up where we are. Imagine that you had to put two new applications into production, A and B. To ensure proper segregation you decide to put each application on its own server, X and Y.
Of course, as they are both critical apps, you also build servers Z and V for high-availability purposes.
In a few months, people start to complain that server utilization is too low. The servers are consuming too much power, rack space, blah blah blah. Then someone pulls a nice rabbit out of a hat called virtualization. Wow! Now you turn hardware X and Y into VM servers (or whatever you want to call them), build separate VMs for A and B, and as your VM product has a nice feature for dynamically moving images from one box to another, you don't need Z and V anymore. Wow! You've just saved 50% of your server-related costs!
OK, you could reasonably be worried about putting those applications on the same "real" box. After all, you decided before that they should run on different servers, and here they are on the same box! But you look into the problem and notice:
- One virtual server cannot interact with the other
- Problems caused by application A still can't affect application B's server
- A security breach on virtual server A will not affect virtual server B
Ok, everything is still good and you go to bed happy with the new solution.
But no, people are greedy!
Seriously, now that we have all those servers on the same box, why can't we have a little more control over their access to the available resources? If one server is not using all the memory allocated to it, why can't the other one use it when needed? The same goes for processing power and storage. But to do that, the hypervisor would need better visibility into what is happening inside those black boxes... so why not make them aware of the VM environment? Build APIs that allow communication between the guest OSes and the hypervisor? Nice! Now things are starting to get really advanced!
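To make the memory-sharing idea concrete, here is a toy sketch of a "balloon"-style rebalancer that lets a busy guest borrow memory an idle guest is not using. All names and numbers are illustrative assumptions, not any real hypervisor's API:

```python
class Guest:
    """A toy guest VM with a nominal allocation and its actual usage."""
    def __init__(self, name, allocated_mb):
        self.name = name
        self.allocated = allocated_mb  # nominal allocation in MB
        self.in_use = 0                # what the guest actually uses

def rebalance(guests, total_mb):
    # Give each guest what it currently uses, then split the remaining
    # slack proportionally to the nominal allocations.
    slack = total_mb - sum(g.in_use for g in guests)
    nominal = sum(g.allocated for g in guests)
    return {g.name: g.in_use + slack * g.allocated // nominal for g in guests}

a, b = Guest("A", 2048), Guest("B", 2048)
a.in_use, b.in_use = 512, 1536   # A is mostly idle, B is busy
print(rebalance([a, b], 4096))   # {'A': 1536, 'B': 2560}
```

The point is that doing this well requires the hypervisor to know each guest's real usage, which is exactly the visibility that erodes the isolation discussed next.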
But where is that segregation that was mentioned before? Won't all this interaction between the hypervisor and the guest OSes reduce the isolation? Of course it will! Some attacks from guest OSes against the hypervisor, or against other guest OSes, are now possible. Anyway, it's the price of better management and better resource utilization. Isn't it?
Yes, it is. And we already knew it! Isn't that the same price we pay to put two applications on the same REAL box? Let's see. We want hardware resources to be shared by the applications, with something controlling the sharing. One application shouldn't be affected by the other or access unauthorized resources. And we want high availability too.
Well, please tell me if I'm wrong, but to me these are just the requirements of a good operating system with clustering capabilities!
Virtualization advocates usually refer to mainframes as a virtualization success case, and they are right about it. But on mainframes, LPARs (their name for VMs) are usually used to isolate completely different environments, like development and production. It is very common to find several applications running on the same LPAR, segregated only by the OS and the Security Manager (which can be seen as part of the OS). Usually, LPARs are used because organizations can't afford separate hardware for things like testing, certification and development, whilst in the "new virtualization" world VMs are used to optimize resource utilization. As far as I remember from my operating systems classes at university, that was the operating system's role.
Are we creating this beast because we couldn't produce an operating system that does its job?

Tuesday, October 28, 2008

I let this one pass

I was visiting Dan Kaminsky's blog today and noticed that he is creating a community council to help with the disclosure of big vulnerabilities like the one he found in DNS, and others that followed, including that famous TCP one that Robert E. Lee and Jack Louis are planning to disclose after vendors have issued their patches. This is a very good outcome of everything that has happened over the last few months.

With a council like that, anybody who finds a vulnerability and thinks it is critical enough to warrant a coordinated effort to fix it before disclosing the details will have a safe place to go. Not only will it be full of people with enough knowledge to verify their claims and make sure the issue is not something old or not-that-big, but it will also be a trusted party that won't "steal" the credit for the discovery. If they manage to make its existence and purpose known to the security research community, the only reason left for someone to go into a "partial disclosure" alone will be "flash fame".

Another step towards a more mature security research community. Nice!

Financial malware gets smarter? But we've said that many times!

This is yet another case of predictions coming true. Now it's Kaspersky's turn to say that malware is changing the way it attacks online banking users to defeat two-factor authentication. They even try to create a new security buzzword for it:

"For example, two-factor authentication for online banking, which uses a hardware token in addition to a secret password, is increasingly ineffective. This is because malware writers have perfected the tools to get around it by redirecting the user to a separate server to harvest the necessary access information in real time – the so called 'man in the middle' attack. This defeats the two-factor process, but malware writers have taken the process a step further with a new 'man in the endpoint' attack. This eliminates the need for a separate server by conducting the entire attack on the user's machine."

Nice catch, but we have been saying that this would be the next logical step in financial malware evolution since 2005. Now that it's here, the important question is: how are we going to deal with it? If 2FA doesn't work, what does?

There is some interesting stuff being developed to provide a "secure tunnel" inside the user's computer, avoiding keyloggers and other nasty stuff. But again, we end up in that malware vs. protection-software arms race on the user's computer. Every time a security company develops something to protect resources from being tampered with by malware, malware evolves to get the information from a lower-level layer or by disabling the security software. This problem won't go away until we can ensure that security software always runs at a higher privilege level than the malware.

I like Windows Vista because of its effort to make the user run as a non-privileged user. Unfortunately, this hasn't been the Microsoft OS user culture for years, and it won't change overnight. UAC tried to make it less painful, but the huge amount of badly designed third-party software turned that feature into a nightmare.
Even with all the SDLC efforts, there are still a lot of things to be done that are outside of Redmond's control. Unix and Linux have technology- and security-conscious users. Apple has complete control over hardware and software. Microsoft, on the other hand, lives in hell (no control over hardware AND software, plus the least savvy users).

An intermediate option to secure online banking transactions is to explore the different devices that bank customers already have. There are some products that implement 2FA on mobile phones, but most of them suffer from the same vulnerabilities as regular 2FA tokens. Challenge-response and transaction signing could leverage mobile phones as an OoB (out of band) factor. An over-simplified example would be:

- The user initiates a transaction on the computer
- The bank encrypts the received transaction data with the user's public key and sends it by SMS to his mobile, together with a confirmation code
- The bank's app on the phone receives the message and decrypts it with the user's private key
- The user verifies the details of the transaction on the mobile and, if everything matches what was sent from the computer, sends the confirmation code to the bank (this can be done outside of the previous session, to minimize the asynchronous nature of the conversation), which finishes the transaction

You may ask why the user answers the challenge from the computer instead of doing that from the phone too. That would be good, as end users' SMS messages can have a different priority level on the mobile networks than the messages sent by the bank, which can buy differentiated SLAs from the carriers.

I know that there are lots of challenges in this single example (public-key encryption on devices with limited resources, protecting the user's private key, mobile network dependency, among others), but it can be seen as a way to allow users to do banking over untrusted channels. The catch here is that only half of the transaction passes through an untrusted channel.

One can argue that the mobile network is also untrusted, but to allow fraud, both channels would have to be compromised by the same attacker. Very unlikely (though not impossible!).
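The essential property of the flow above is that the confirmation code is bound to the transaction details the user sees on the phone, so a man-in-the-endpoint that alters the transaction gets rejected at the confirmation step. Here is a minimal sketch of that binding, using an HMAC over a pre-shared key instead of public-key encryption for brevity; all names are illustrative, not a real banking API:

```python
import hmac
import hashlib
import secrets

# Hypothetical pre-shared key, provisioned on the phone at enrollment.
SHARED_KEY = secrets.token_bytes(32)

def bank_prepare_challenge(transaction: str):
    """Bank side: derive a confirmation code bound to the transaction details."""
    code = hmac.new(SHARED_KEY, transaction.encode(), hashlib.sha256).hexdigest()[:8]
    return transaction, code  # delivered to the phone out of band (e.g. SMS)

def phone_confirm(code: str, user_approves: bool):
    """Phone side: the user checks the displayed details and approves or not."""
    return code if user_approves else None

def bank_finalize(original_transaction: str, submitted_code):
    """Bank side: accept only if the code matches the transaction it received."""
    expected = hmac.new(SHARED_KEY, original_transaction.encode(),
                        hashlib.sha256).hexdigest()[:8]
    return hmac.compare_digest(expected, submitted_code or "")

tx = "pay $100 to account 1234"
sms_tx, sms_code = bank_prepare_challenge(tx)
# The user sees the real details on the phone; if malware on the computer had
# altered the transaction, the phone would display the altered details and the
# user would simply not approve.
confirmation = phone_confirm(sms_code, user_approves=True)
print(bank_finalize(tx, confirmation))  # True
```

The design choice being illustrated: because the code is a function of the transaction contents, the attacker can't reuse a captured code for a different transfer, which is what plain OTP tokens fail to prevent.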

Thursday, October 23, 2008

Microsoft MS08-067

I have been away from the blog for a while for a series of reasons, but I couldn't avoid commenting on this recently published advisory from Microsoft, MS08-067. Just like some worms we witnessed in the past, this one is related to a core Windows service, meaning that almost all boxes are vulnerable. It's also interesting to see that the security efforts behind Vista and Server 2008 have brought results, as those versions are not as vulnerable to this issue as previous ones. Thanks to DEP and ASLR for that!

Now it's just a matter of time for the first worms and bots. I'm already seeing some emergency patch management processes being fired up to deal with it, but it's important to ensure that detection and reaction capabilities are also up to date. Keep an eye on the sources of IDS signatures and be sure to check whether your SIEM/log analysis systems are able to identify abnormal traffic related to the Server service (139/445 TCP). Do a quick review of your incident management procedures to ensure that people will know what to do if the bell rings. For instance, if you catch signs of infection in your internal network, how will you act to identify and clean the infected computers?

May the Force be with you!
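As a starting point for the SIEM check mentioned above, here is a rough sketch of flagging internal hosts that hit an unusual number of distinct targets on TCP 139/445, which is what a worm scanning for the Server service looks like. The log format and threshold are illustrative assumptions, not from any specific product:

```python
import re

# Sample firewall-style log lines (hypothetical format).
LOG_LINES = [
    "2008-10-23T10:00:01 src=10.0.0.5 dst=10.0.0.9 dport=445 proto=tcp",
    "2008-10-23T10:00:02 src=10.0.0.5 dst=10.0.0.10 dport=445 proto=tcp",
    "2008-10-23T10:00:03 src=10.0.0.5 dst=10.0.0.11 dport=445 proto=tcp",
    "2008-10-23T10:00:04 src=10.0.0.7 dst=10.0.0.9 dport=80 proto=tcp",
]

SMB_PORTS = {"139", "445"}  # the Server service

def suspicious_hosts(lines, threshold=3):
    """Return sources contacting >= threshold distinct hosts on SMB ports."""
    targets = {}
    for line in lines:
        m = re.search(r"src=(\S+) dst=(\S+) dport=(\d+)", line)
        if m and m.group(3) in SMB_PORTS:
            targets.setdefault(m.group(1), set()).add(m.group(2))
    return [src for src, dsts in targets.items() if len(dsts) >= threshold]

print(suspicious_hosts(LOG_LINES))  # ['10.0.0.5']
```

In a real environment you would feed this from your firewall or netflow data and tune the threshold to normal file-sharing traffic; the point is simply that fan-out on 445 is the signal to alert on.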

Saturday, October 18, 2008

Victor is back

My friend Victor is back in the blogosphere. He built a blog platform just for his new blog, where he writes about a variety of things, but mostly software development and security. His last post (VP, you need to develop something to link directly to a specific post!), about vulnerabilities related to debugging code, is pretty interesting.

Welcome back, VP!