Thursday, December 7, 2006

Technical CSO x Gartner's MBAs

A short interview by Brian McKenna with Paul Henry, in Infosecurity Today magazine (Nov/Dec issue), caught my attention, as it sheds a very bright light on an interesting topic: the "trend" of security teams being staffed more with MBA types than with technical personnel.

Well, Mr. Paul Henry is very clear, and his thoughts match my opinion too, in saying that a security team can't be composed only of "business guys". He is right to point out that the results would be policies and procedures that wouldn't be followed because of a lack of technical enforcement safeguards. His examples describe situations where security awareness can improve security a lot but is not enough to achieve the desired level.

He also offers a very interesting opinion on why research companies like Gartner push this businessman trend: it would put in charge of security departments more people who like to hear their opinions and can't challenge their technical positions, making their job a lot easier.

I strongly agree with Mr. Henry. Yes, security is not only a technology problem. However, technology is a very big part of the problem (and of the solutions). The people dealing with it need to know about the technology involved. CSOs usually participate in several meetings about new projects or technology products being bought by the organization. They need, at a minimum, to know how to detect that something was built without security in mind. Unfortunately, most CSOs that I know can't even do this basic analysis.

Monday, December 4, 2006

Domain Isolation and Cima

There is a very good security professional at Microsoft called Fernando Cima. He wrote an article about the Domain Isolation strategy, implemented through IPsec on Windows 2000 and above. There are some things in it that I didn't know about, like the simpler version of the system introduced in Windows 2003 and Vista. I see this approach as a very good alternative to 802.1x, especially because it can include encryption. Cima also shows how to include systems that do not support IPsec, using ISA Server as a gateway. Very clever solution.

Monday, November 27, 2006

New NBTEnum version

Those who perform penetration tests probably already know this tool. OK, a new version was just released. Even if you don't use it, visit Reed Arvin's site; there are lots of great tools there.

Friday, November 17, 2006

Bejtlich and SANS Top 20

I think that Richard Bejtlich is being a little picky about this subject, but he still has a point. Even in a work with content as good as the Top 20, basic conceptual mistakes can jeopardize its value. A document like this is read and used by lots of people, spreading the mistakes throughout the field. Hey SANS guys, instead of criticizing, why not try the CISSP? It won't hurt, it'll only add value (and that's not even something Bejtlich will agree with me on, given his opinion of this cert).

Confusing vulnerability and threat concepts is a mistake that a CISSP doesn't usually make, even one with very poor technical skills. Mix the technical skills provided by SANS with solid fundamentals from the CBK: that's the recipe for an incredibly valuable Top 20 document.

MS06-070

Do I still need to say that this one is critical (well, MS already did)?

Every time that there is a vulnerability in core Windows services, like "Server" and "Workstation", it smells like worm spirit. There is a relatively new fact that needs to be remembered these days...

Microsoft is pushing its process for detecting illegal copies of Windows into its update system. I believe that in recent months several illegal copies that were being regularly updated have stopped doing so. I know that personal firewalls and SoHo routers are more common now, but I won't be surprised if a new worm finds more success than the last ones because of this.

New sysinternals tool

Those who constantly need to study trojan and virus behaviour, or to debug "LUA bugs" in Windows applications, probably already know the Sysinternals tools Filemon and Regmon. I always wondered why there wasn't a tool combining both. Now there is.

Wednesday, November 1, 2006

Ping!

It has been more than two months since I last posted here. I was visiting Canada and California on vacation, and now I'm a bit overwhelmed with duties from my job and the local ISSA chapter (I've been president since July). I hope to translate some things that I wrote this week and to resume posting here more regularly in the next few weeks.

A quick note: I went to ToorCon in San Diego during my vacation. It was a bit too technical compared to what I'm used to now, but a presentation from Dan Kaminsky is always worth watching. I was expecting to see David Maynor and Johnny Cache do a live demonstration of their famous wireless exploit, but I believe you all already know what happened there.

It's funny how we still have a lot of people bashing Microsoft about security while companies like Apple and Oracle behave terribly on it. Microsoft has hugely improved the security of its products (can anyone remember the last vulnerability in IIS?), it's releasing good security products (the new ForeFront product line has some interesting concepts), buying companies with good security products and professionals (Sysinternals...), and promoting security awareness everywhere. If there is a company "on our side" on this subject nowadays, it's Microsoft.

I really hope to see Oracle following the same path. I still have doubts whether Apple will try, or will just close its eyes and pretend nothing is happening.

Monday, August 14, 2006

No network is safe

Mike Rothman wrote a very good article about what he saw at Black Hat. I really appreciate the tips he gives in it, like focusing on containment and monitoring/detection. This is exactly how I think internal network security should be done.

Tuesday, August 8, 2006

Again on MS06-041

This is one of those vulnerabilities that can really bring big problems (like very aggressive worms and viruses).

The vulnerability is in the Windows DNS client. It seems that it can be exploited by specially crafted Resource Records (RRs) in responses from a malicious server. They are not RRs usually present in queries from common user activity, so I'm curious about how an attacker could force a client to make the "vulnerable query".

I went to check some DNS response details and noticed that a server can send "Additional RRs" in the response. My remaining questions are:

1 - Can the exploitable RRs be sent inside the "additional" part of a response to a common A/CNAME query?
2 - Can the vulnerability be exploited when the crafted RRs are inside the "additional" field?
3 - When using recursive queries, are additional records sent by a server forwarded to the original source of the query?

Depending on the answers to these questions, the severity level of the vulnerability changes. In the worst case, any DNS server and an HTML e-mail could be enough to exploit it.
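To make the "additional RRs" mechanics concrete: every DNS message carries a 12-byte header whose last field, ARCOUNT, announces how many additional records follow, regardless of what was asked. A minimal stdlib sketch (the header bytes below are fabricated for illustration):

```python
import struct

def dns_header_counts(packet: bytes) -> dict:
    """Parse the 12-byte DNS header and return the four section counts.

    ARCOUNT (the last field) says how many "additional" RRs the server
    attached -- records the client never explicitly asked for.
    """
    if len(packet) < 12:
        raise ValueError("truncated DNS message")
    _id, _flags, qd, an, ns, ar = struct.unpack("!6H", packet[:12])
    return {"questions": qd, "answers": an, "authority": ns, "additional": ar}

# A fabricated response header: 1 question, 1 answer, 0 authority
# records, and 2 additional RRs tacked on by the server.
header = struct.pack("!6H", 0x1234, 0x8180, 1, 1, 0, 2)
print(dns_header_counts(header))
```

A resolver that blindly processes every section would parse both "additional" RRs, even for a plain A/CNAME query; whether the vulnerable code path is reachable that way is exactly question 1 above.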

Another problem could be Windows servers that resolve names (or IPs into names) when logging requests (like webservers and proxies). The malicious guy accesses the server, which tries to resolve his IP into a name to put in the log. The answer comes back with additional fields carrying the exploit. Bingo! Owned. Wow. While in doubt, folks, patch ASAP.

Creepy MS06-041

I still haven't found detailed information about MS06-041, but it seems to be related to the Windows DNS client.

DNS client vulnerabilities are freaking scary. Depending on what the problem is, one could exploit thousands of workstations with a single DNS server and a mass-mailed HTML e-mail. Patch as soon as the update is available.

Wednesday, August 2, 2006

Reviewing concepts

Schneier posted a comment today on his blog about an idea from Dave Piscitello mentioned on the Firewall Wizards mailing list. Dave says that besides the already known concepts of Authentication, Authorization, Availability and Authenticity, there is also a need for "admissibility". This concept is related to the trustworthiness of the other endpoint of the connection (e.g., whether it's free of keyloggers). Initially I thought it might be just a different way to understand aspects of the other concepts, but now I think it really makes sense. I like these out-of-the-box discussions about basic concepts; I believe big evolutions are born from them.

With the five-properties view it's clear that two-factor authentication is not enough to solve the problem of Internet banking session security (it does not address admissibility). A good example of applicability.

Wednesday, July 19, 2006

McKeay Quote - GREAT

I was browsing Martin McKeay's blog when I found some stuff he wrote. I have a special interest in talking about security to non-technical people, and I found on his site a document with tips for them. The last one is so good that I immediately put it on my quote list:

"Use common sense: Anything that sounds too good to be true probably is. Don't follow the link from an anonymous email promising quick riches or cheap products. Most of those are just attempts to get your money, and some are going to try and install software on your computer or get information from your computer."

Tuesday, July 18, 2006

HD Moore and responsible disclosure

Vulnerability researchers have the right weapon in their hands to push vendors toward faster response times for security issues. I think the best example of how this should be done is David Litchfield. He does responsible disclosure, and gradually uses public advisories to push vendors (in his case, Oracle) toward a more responsible attitude. HD Moore is being a bit selfish in this IE case, IMHO.

Instant disclosure brings few benefits to victims (most cases don't have usable workarounds) and huge benefits to a very broad black hat community. The fact that some people may already be exploiting the undisclosed vulnerability doesn't mean the rest of the bad guys should also know about it.

A mixed approach, with instant announcement that an issue exists but no further details (only the affected product and the date the vendor was informed), is the best option. Full public disclosure can be used later if the vendor refuses to fix the hole.

Winternals and Sysinternals acquired by MS

Another great step by MS in its quest for more secure products. Winternals and Sysinternals have just been bought by Microsoft. I hope to see things like the excellent PsTools package as part of Windows now. And it's not only about products, but about people too. Mark Russinovich is the guy who discovered that famous Sony rootkit.

To the MS guys, congratulations again! Enjoy the acquisition (especially the great product called "Protection Manager") and integrate everything those guys have made into Windows; it will add great value to your product.

Wednesday, July 12, 2006

Schneier and two-factor authentication

Schneier posted on his blog a report about phishers defeating two-factor authentication with a man-in-the-middle attack. They basically proxy the user's credentials to the original site.

What really impresses me is that almost everybody suggesting solutions for this is framing the problem as "how can the original site identify that the request is not coming directly from the real user?". THIS IS NOT THE RIGHT APPROACH!

Last year I presented proof-of-concept code at a security conference. That code was created as a Browser Helper Object, but the main concept could be implemented by other means. The code targeted a specific web application, an Internet banking site that uses two-factor authentication. It doesn't try to steal authentication credentials; it uses a valid, established and authenticated session. In my PoC, whenever the user executes a wire-transfer transaction, the destination account number is replaced by another account. The confirmation sent by the server is also modified to show the original destination account. The user doesn't notice anything wrong, but his money has just been sent to another destination.
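The tampering idea can be sketched in a few lines. This is a toy illustration only: the `dest_account` field name and the request/confirmation bodies are invented for the example, not taken from any real bank or from the original BHO code:

```python
import re

# Hypothetical account the attacker controls (illustrative value).
ATTACKER_ACCOUNT = "99999-9"

def tamper_request(body: str) -> tuple:
    """Swap the destination account in an outgoing transfer request.
    Returns the modified body and the account the user intended."""
    m = re.search(r"dest_account=([\d-]+)", body)
    intended = m.group(1)
    return body.replace(intended, ATTACKER_ACCOUNT), intended

def tamper_response(body: str, intended: str) -> str:
    """Rewrite the confirmation so the user sees the account he typed,
    not the one the money actually went to."""
    return body.replace(ATTACKER_ACCOUNT, intended)

request = "action=transfer&dest_account=12345-6&amount=100"
modified, intended = tamper_request(request)
confirmation = "Transferred $100 to " + ATTACKER_ACCOUNT
shown_to_user = tamper_response(confirmation, intended)
print(modified)        # what the bank receives
print(shown_to_user)   # what the victim sees
```

Since the rewriting happens inside the user's own authenticated session, nothing on the wire looks out of place to the server.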

Why bother stealing credentials when you can use the session the user has already established to do what you need? If you choose not to steal credentials, you have the additional benefit of not having to find a way to send them to yourself. No need to disable personal firewalls, deal with NAT issues, etc.

The real problem (technically speaking) is the user's actions (visiting bogus websites) and his environment (backdoors, trojans, DNS poisoning). Two-factor authentication will not solve any of them.

Monday, July 10, 2006

Base Rate Fallacy and NSA

I usually stay out of USA internal matters, like the VA lost laptop and the NSA spying stories. But Bruce Schneier today posted on his blog a very good argument about why the NSA plans to identify terrorists are flawed. The base rate fallacy is a very interesting problem that applies to a lot of detection-based security technology, especially the anomaly-based kind. Perhaps this is why that approach still hasn't succeeded in IDSes and antiviruses.
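Schneier's argument is easy to verify with arithmetic. The numbers below are purely illustrative (not from the NSA story), but the conclusion holds for any rare-event population: even a very accurate detector is swamped by false positives when true positives are scarce.

```python
# Base rate fallacy, worked through with made-up but plausible numbers.
population = 300_000_000     # people monitored
terrorists = 1_000           # actual "positives" in the population
sensitivity = 0.99           # P(flagged | terrorist)
false_positive_rate = 0.001  # P(flagged | innocent), i.e. 99.9% specificity

true_alarms = terrorists * sensitivity
false_alarms = (population - terrorists) * false_positive_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"true alarms:  {true_alarms:,.0f}")    # ~990
print(f"false alarms: {false_alarms:,.0f}")   # ~300,000
print(f"P(terrorist | flagged) = {precision:.2%}")
```

With these numbers, fewer than 1 in 300 flagged people is an actual positive. The same math explains why an anomaly-based IDS watching millions of benign events per day drowns its operators in false alerts.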

Thursday, July 6, 2006

BS25999

The draft of the new British Standard BS25999, about Business Continuity Management, has been published. It's important to take a look (and provide comments), as we know this is the kind of document that tends to become an ISO standard in a few years. It's available for download here.

Thursday, June 22, 2006

Scary Wireless attack

Unfortunately I won't attend BH this year. There will be a presentation about hacking wi-fi drivers remotely, which is very scary.

Monday, June 19, 2006

Excel 0-day

There is a new disclosed and unpatched Excel vulnerability. Last week the monthly update package from Microsoft was published. It's not the first time someone releases a vulnerability a few days after the patch cycle, apparently to cause more trouble for MS. These situations leave systems vulnerable for almost a month, or force MS to release an out-of-schedule patch. It's a weakness of the monthly release program that simply can't be solved.

Usually MS monitors the noise caused by the disclosure to decide whether it needs to release an update out of cycle. If the problem grows, it releases the patch; if not, it waits. I can't see much else to do. Perhaps releasing "beta updates" or "no warranty updates" before the cycle, for those more worried about the problem, could help reduce the noise. It's just a little dangerous to open this possibility: as a customer, I wouldn't find it fair to have to install something "beta" to reduce a risk that I'm not responsible for.

Thursday, May 25, 2006

Remote kernel overflow exploit

This is from DailyDave:

"Sinan Eren wrote a working version of GREENAPPLE, a remote kernel
overflow in SMB for Windows 2000. It's available now to Immunity
Partners, but it will be in the June Immunity CANVAS release, which
will be interesting. Essentially it's the first remote kernel overflow
I've ever seen - maybe someone knows of one I don't?"

It's related to the MS05-011 vulnerability. One interesting thing is seeing a "remote kernel overflow" in an OS with microkernel roots like Windows 2000, while Linux and its monolithic kernel have never suffered from something like that. I think it proves how good concepts can suck with bad implementation, and how bad concepts can work with good implementation.

More ammo for Mr. Torvalds against Tanenbaum :-)

Saturday, May 20, 2006

Word exploit in the wild

It's not surprising to see a new exploit for MS Word being used to run malicious code. It only confirms my belief that workstations/users are the preferred entry point for attacks. Internet-facing servers are usually well protected and monitored. Workstations are usually badly configured, unpatched, and placed in flat, unmonitored internal networks. Yummy!

SANS Internet Storm Center has published some tips for defending against this threat. I'm glad to see honeytokens being proposed. In fact, the whole list is very good. My favorite items are monitoring and blocking outbound traffic, and limiting data on desktops: the kind of security measures that are effective against lots of threats and don't depend on prior knowledge of the attack being used.

More myths debunked

Do you really have to change your password so often?

The growth of rainbow table tools, and of tables for sale, shows that changing passwords would be effective only if done daily (or hourly!). Let's make people learn one very good password to avoid dictionary and guessing attacks, then let them use it for more than 30 or 60 days.
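The reason rotation matters less than password quality: rainbow tables only pay off against unsalted, fast hashes, where one precomputation cracks every user. A salted, iterated scheme, sketched below with stdlib PBKDF2 (iteration count and salt size are illustrative), forces the attacker to start over for every single account, whether the password is 30 or 300 days old.

```python
import hashlib
import os

ITERATIONS = 100_000  # illustrative work factor

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Salted, iterated hash. A rainbow table built for one salt is
    useless against any other salt, so precomputation doesn't scale."""
    if salt is None:
        salt = os.urandom(16)  # unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return candidate == expected

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("password123", salt, stored))                   # False
```

Two users with the same password get different digests, which is exactly the property that breaks table precomputation.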

Thursday, May 18, 2006

PCI and SOX changes? Less security?

I've recently heard about changes in two security compliance drivers that I deal with, SOX and PCI. There are discussions about changing SOX to reduce the confusion about which controls are needed (and how they should be implemented), as well as about how audit firms should assess risk at their clients.

The PCI Data Security Standard requirements will also be subject to changes. There is talk of reducing the encryption requirements and increasing the application security controls.

In both cases I've found myself in discussions with peers about whether the changes are good or bad. Man, I did it again! I caught myself advocating less security!

Well, in fact, I'm not arguing that companies need less security. I believe they need the right amount of security for their business. SOX and PCI try to define minimum requirements (SOX, of course, is much broader, but I'm focusing on the aspects that result in security requirements), but I understand that on some points they push too hard.

SOX, in fact, doesn't push anything; it leaves to auditors the decision of which controls are needed. I think that's a bad idea, because auditors usually don't have a sense of "how much control is enough", but I'll try to comment on that another time. Let's talk about PCI.

My main concern about PCI is that it seems to have been written to keep card data from being stolen by "Internet hackers". When reading the PCI requirements you'll notice that it is always trying to protect your "internal network" from "public networks". OK, we know this is necessary. But didn't these guys read anything about internal threats?

When you're aiming at online merchants, like Amazon, it probably makes sense to focus on external threats. PCI, however, is also being pushed onto issuers, which have thousands of employees with direct contact with cards and cardholder information. I really think PCI doesn't give these threats the same treatment it gives to the "threats of the moment", like hackers and viruses.

As a security professional I'm constantly worried about building a holistic security strategy. PCI, like other security standards, should try to push minimum requirements in all directions of information security. As an example, we are always discussing how companies respond to their incidents: what they should do to reduce damage, communicate with the people affected, protect evidence and so on. Why doesn't PCI have anything about it? (Same for security monitoring, security staffing, etc.)

And when it tries to help, like when defining firewall policy requirements, it usually dives too deep into detail, like specifying which protocols should be accepted. It could be more flexible there, just requiring that the organization have proper procedures to assess and deploy rules in its firewalls.

Despite the different points of view, I'm happy that discussions about laws and standards are happening. These discussions will help us improve those documents, allowing us to reach better cost/benefit equations. Systems that are too insecure do not grow, because people don't trust them. Systems that are too secure will also not grow, as they are too inflexible, expensive and hard to use.

Monday, May 15, 2006

Still on Security

One post on the cisspforum list caught my eye. The author, Scott Pinzon, authorized me to quote him:

"I don't think Information Security is "failing," for the simple reason that today more online commerce is occurring than ever in history, and for the most part, it works.

Info Sec is far from perfect; we all know that. But you can't point at a bunch of bad drivers and say "the national highway system is failing!" or a few crime-ridden cities and say "our entire culture is crashing into chaos!"

The fact that we all go about our day banking, buying, and investing proves that Info Sec is not failing."

His example of crime-ridden cities is very appropriate for the moment we are passing through here in Sao Paulo, Brazil.

Friday, May 12, 2006

Chip and PIN fraud

This is the matter of the moment in the UK. More problems, this time with Lloyds. This article gives more details about what is really happening.

Myths!

I love it when someone attacks infosec absolute truths! Roger Grimes did that in this article at InfoWorld. I like the part where he comments on security through obscurity:

"The myth would have you believe that security by obscurity has no value and any scheme using it should be immediately discounted. But the fact of the matter is that security by obscurity works, and works well. It is among the least expensive security defenses you can employ. It should be considered a part of anyone’s defense-in-depth plan."

The emphasis is mine. It's very important to make clear that security through obscurity is not enough on its own, but it can be very valuable in a defense-in-depth strategy. Grimes himself gives a very good example in the article.

Cambridge and security

I hadn't heard about this before: a blog by Cambridge security researchers. At first glimpse it seems to have very good content. I'll take a closer look later.

Security Absurdity - more comments

Noam Eppel wrote an article called "Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security" that generated a lot of noise in the security community. I decided to comment on it in my blog too.

Yes, it's really FUD-heavy. But it also makes great points about real issues. Some of them are not often seen elsewhere, and I'm glad that a lot of them are things I'm always reminding people about. Among them:

- Antivirus signature based approach failure
- Trojans and backdoors targeted to specific companies and organizations
- Trojans that, instead of stealing credentials, just perform funds transfers after the user is authenticated (I gave a PoC presentation about this last year at CNASI; I was impressed to learn that there are real cases now)
- 0-days usage more common every day
- Internal attack issues, one of the biggest motivators of my Master's thesis.

He used these facts to draw conclusions, some right, some wrong. I agree that there is a rising complexity that makes security harder to do, that the cost of security controls is too high, and that our "best practices" don't solve the problem. This last one is one of my favorites; I have been saying it for some time.

I have a friend who is a penetration test specialist. His approach gives him an almost 100% success rate, even at companies with advanced security programs. What happens is that the CSO's main sources of information, with their indications of the most common threats, don't lead to solutions that could stop my friend's approach. The "by the book" CSO will be easy prey for him. I believe we need a deeper technical discussion about what we understand as "best practices", making them more effective and clear. By technical discussion I mean "bring in the good guys!", especially those not tied to off-the-shelf product vendors. Have you ever noticed that the "next biggest threat" always fits the feature description of those just-released black boxes? Wow, so every new threat can be avoided just by buying them?

Back to the article: I think its qualities end here. The author forgets that our goal is not reaching 100% security, but the security level needed to keep the business going. The "it takes just one vulnerability to fail entirely" approach assumes that defense in depth and compartmentalization are not being applied. It's overreacting.

I also think there is too much confusion between "home user" security and corporate security. Really, we need to improve security a lot for the common home user; it's very hard for a non-technical person to keep a computer secure. But we can't forget that we are not dealing with a common home appliance, like a refrigerator or a TV. There is two-way communication; there are new features being deployed on the fly, from different sources. The user has part of the responsibility for deciding which features and which sources are safe; we can't deny that. If you want to drive your car on the streets, you need to know that your safety depends not only on road conditions or your car's safety features, but also on the decisions and skills of you and the other drivers. It's the same with the Internet and computers in general.

There are still more deaths in car accidents than in wars!! I don't think we are failing in infosec as terribly as we are with traffic safety.

There is another thing: those numbers, increasing losses, frauds, etc. I can't say for sure, as I haven't done extensive research, but I bet that when paper money or checks were introduced, fraud grew wild. As a technology is gradually mastered, the ways of making it secure evolve. However, if the technology evolves too fast, there is no time for security to evolve. It's natural. Security systems created 10 years ago are not very effective today, but if we applied their current versions to the same problems they were created for, they would be almost perfect.

Let's imagine if weapons evolution had happened in a much more accelerated form: spears, then swords 6 months later, muskets in two years and grenades after 3. Comparing this with infosec, we would be trying to make hand shields stronger and complaining that they weren't protecting us from the grenades.

So what, Augusto, will you do exactly like him and not tell us how to solve it?

First, it's necessary to make the people in charge of security actually know about it. They know about products, not about security. They think they just need to build a Lego out of firewall+IDS+IPS+AV blocks and everything is OK. We need education, to make them skilled professionals. It can be done with better training (SANS!), certifications, standards, codes of practice, etc.

Second, user awareness. Sorry, Ranum, but I think it's more than necessary if our intention is to keep flexibility and power in users' hands. We could replace all our cars with a public transportation system and drastically reduce accidents. Does anybody think that's possible? :-)

Third, product intelligence. Keep running after attack, virus and trojan signatures?? This is too archaic. The one advantage of more fraud is that there will be more investment in security technology, bringing more money and brains to the research field too. With this investment we can reduce the gap between state-of-the-art technology and the security tools available.

Fourth, demystify insecurity. This is not black or white, all or nothing, but the shade of gray each person or company can live with. When you go out on the streets there is a risk of being robbed, murdered, or being the victim of an accident. These risks are, usually, getting higher every day. Have you given up leaving your house because of that? Maybe you have changed some habits (mitigating risk), but you accept that there is risk in doing what you need to do. You go to the bank; there is the risk that someone who saw you withdrawing money will follow you and rob you. You use Internet banking; there is the risk of someone taking advantage of it. Nothing changes. People only need to be conscious that the problem exists in every situation, be it "real" or "virtual".

That's it.

Monday, May 8, 2006

Chip and PIN Fraud in UK

There is a lot of noise in the security feeds about this fraud in the UK. Most press articles give the impression that the chip on the cards was the victim of the fraud. The problem, however, seems to be the old magnetic stripe fall-back feature. This is another situation showing why supporting old technologies for backward compatibility is a bad idea for security. If a card uses chip technology but can also be used via a magnetic stripe, which has a much lower security level, its overall security level will be the same as the stripe's. That old thing about the weakest link, again.

Chip cards vulnerable to skimming are just a waste of money and a false sense of security.

Thursday, May 4, 2006

Least Privilege in XP

This week I started to practice what I preach and removed administrator privileges from my user account on my home computer. In fact I had to create a new account, as I was running XP with the Administrator account renamed (shame!!!). I had some problems copying the old profile to the new account, but everything went fine. So far nothing has caused me serious issues, and the "runas" feature, as well as Fast User Switching, is making the move as smooth as it can be. I don't think it has been any more problematic than having to use sudo on Linux. Some NTFS permission tweaking solved most of the problems.

A good resource for solving issues when trying to run with reduced privileges is Aaron Margosis' blog. It has been helping me a lot.

Backup tapes, again

Iron Mountain has lost some of its clients' backup tapes again. I started looking more closely at these incidents after seeing a standard contract from this kind of company, where they declare they'll reimburse only the media value if a tape is lost. Wow, you lose a tape with your whole customer database and receive only a bunch of dollars for it?

These companies usually ask you to buy insurance. I think that changing the contract to allow a standard fee (or a classification-label-based fee) to be paid in case of tape loss would be better for their clients, but they usually don't accept such terms. Of course it's an additional risk for them, but, after all, that's their business: transferring risk related to media handling to a third party! Why can't they buy a "catch all" insurance policy to mitigate that risk?

Thursday, April 27, 2006

Bejtlich and IPSxIDS

Richard Bejtlich is one of the best sources of information and reasonable opinions about intrusion detection. He wrote a very precise argument about why detection is important even when you can use prevention. I'll quote him here:

"traffic inspection is best used at boundaries between trusted systems. Enforcement systems make sense at boundaries between trusted and untrusted systems."

Very good!

Monday, April 24, 2006

Banks and authentication challenges

Daniel Blum wrote an incredibly good article today in Network World. He said something very sharp on the matter of the additional security measures that banks need to deploy:

"From a business perspective, banks are much less concerned about losses to fraud than they are about scaring away customers. To them, online banking represents a Mecca of huge cost savings and revenue opportunities. The technical solutions that win out for them will be those that offer unobtrusive but effective protection."

The savings from the growth of Internet banking usage are huge. Should banks risk these savings by sending tokens and password cards to their customers? What if they agreed to cover their clients' losses instead of deploying additional security controls? Isn't that a valid way of dealing with the risk? Isn't it the way the credit card companies are going?

Sometimes security people focus too much on vulnerability/control and forget about risk management.

Wednesday, April 19, 2006

Sun Ray Security

Recently I was evaluating Sun's thin client solution, "Sun Ray", and one thing caught my attention.

Sun Ray clients run only firmware, with no OS. The firmware gets its initial settings from a DHCP server, including the address of the Sun Ray Server. Once the client establishes a conversation with the server, it uses X11 emulation over UDP, through a Sun protocol called ALP (Appliance Link Protocol). If there is a firmware upgrade, the client downloads it from the server at power-up.

Hey! So the client learns who the server is from a DHCP response. Yes, and that same server is the one that sends it new firmware. So, if anyone can forge a DHCP response, he can send tampered firmware to the client. Is anyone examining the Sun Ray firmware to find out how much can be done by hacking it? The clients have a syslog reporting feature, for example. What if someone altered the firmware so that the client sends its keystrokes to a syslog server? Wow.
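To make the concern concrete, here is a minimal sketch of the trust-on-first-use idea a thin client could apply to its DHCP-learned server address. This is an illustration only, not Sun Ray code; the identifiers and addresses are invented.

```python
# Hypothetical sketch: "pin" the server address a thin client learns on its
# first boot, so a later forged DHCP response pointing the client at a rogue
# firmware server is detected instead of silently trusted.
# This illustrates trust-on-first-use; it is not how Sun Ray actually works.

PINNED = {}  # client_id -> server address learned on first boot

def check_dhcp_server_option(client_id: str, offered_server: str) -> bool:
    """Accept the DHCP-supplied server only if it matches the pinned one."""
    if client_id not in PINNED:
        PINNED[client_id] = offered_server  # trust on first use
        return True
    return PINNED[client_id] == offered_server

print(check_dhcp_server_option("ray-01", "10.0.0.5"))  # first boot: pinned, True
print(check_dhcp_server_option("ray-01", "10.0.0.5"))  # same server: True
print(check_dhcp_server_option("ray-01", "10.6.6.6"))  # spoofed answer: False
```

Of course, a real device with no static settings (one of Sun's sales arguments) has nowhere durable to keep the pin, which is exactly the problem described above.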

Well, I think that network-based controls (switch features such as DHCP snooping) could block those bogus DHCP responses, but I really don't know if that granularity is available today.

And what if my network uses 802.1X authentication? Obviously it will need to be disabled wherever Sun Rays are used. That's bad. However, I think the risk from this issue can be substantially reduced with ACLs and PVLANs.

One of the sales arguments for this solution is security: Java Cards for logon, and so on. But what about these network-level issues? If the device has no static settings (another sales argument), even a server identity check is hard to implement. Perhaps some kind of challenge-response with the Java Cards could work; I don't really know if it's possible.

Well, these are some of my random thoughts on the subject. If anyone out there has already done an analysis of these issues, I'd really like to hear the conclusions.

Tuesday, April 18, 2006

He's back...let's patch!

Apocalypse knight David Litchfield is back with another batch of Oracle vulnerabilities. The patches are available, so go install them.

McAfee misses the target

I've just read Richard Bejtlich's comment about today's noisiest news, the McAfee report. I found it in Bloglines while looking for more information on the subject so I could post a comment here. Well, I think Bejtlich said it all.

The real menace of rootkits wouldn't have been clearly understood without the disclosure of what the Sony CDs were doing, and security professionals would be shooting in the dark without the information provided by sites like rootkit.com. I don't feel comfortable with some kinds of vulnerability disclosure (like what happened with WMF and the latest IE flaws), but blaming a resource like rootkit.com goes a bit too far. I discuss ideas about how trojans can steal money from Internet banking accounts, or how worms could be more destructive or harder to fight. I don't do that to help the people who create them, but to help those who need to defend against them. Rootkit.com is the same thing.

However, there is one thing we need to think about. A lot of research like rootkit.com's is presented in a way that seems directed at black hats, to be used improperly. Even if that style of presentation seems "cool", it won't help in gaining respect from places like Gartner or IDC. If it's security research, let's try to present it as such. Do you know anybody doing (biological) virus research who presents their results saying "0wNeD! KiLlInG QuIcK AnD DiRtY!"?

(Does anybody remember that scene from "Runaway Jury", where Dustin Hoffman shows that the gun industry was using "fingerprint proof" as a sales pitch?)

Monday, April 17, 2006

Firefox update

The infosec industry is really biased when commenting on browser security issues. Every IE problem causes an avalanche of hateful comments about "MS insecurity". Meanwhile, Firefox has just been updated for security issues and almost nobody mentioned it. What was fixed? Was it serious? How long had the issue been known? Hey guys, let's try to judge all products with the same critical eye.

The update took me by surprise this weekend. As a security professional, I don't like surprises. MS may take too long to fix a publicly disclosed vulnerability, but at least they try to keep us informed about their plans and about what they are doing with the software we use.

Monday, April 10, 2006

Certificates Private key in Windows

I've just read something interesting about the way Windows handles certificate private keys when you delete a certificate. It keeps the private key around, so that if you install the certificate (yes, the public key only) again later, it will let you use the private key (which was kept somewhere [Protected Storage?] in the system). So, if you really want to delete a certificate's private key in Windows, there is a tool to do that in the link above.

Thursday, April 6, 2006

Schneier on VoIP Security

Schneier is so interested in privacy and US Homeland Security matters that his blog has been a bit boring lately. Luckily, today he chose an interesting subject: VoIP security.

It's a very good comparative analysis of the threats of conventional telephony and those of VoIP. It's the kind of thinking exercise we always need to do when the technology behind some activity changes. Even without anything new, it's worth reading for its approach. If you like it, you can find more of the same in his book "Beyond Fear".

Thursday, March 30, 2006

Good measure, but not enough

According to the InfoWorld:

German bank fights phishing with electronic signatures
Postbank to begin attaching electronic signature to all e-mail correspondence with customers

By John Blau, IDG News Service

March 30, 2006

German retail banking giant Postbank AG, the target of several phishing attacks, aims to curb the theft of online personal information with the help of electronic signatures.

The bank will begin attaching electronic signatures to all e-mail correspondence with customers, Postbank spokesman Jürgen Ebert said Thursday.

It's a very good measure, especially when the bank sends messages with links to account balances and other private information. However, they need to be aware that this will not be enough to avoid problems with the theft of authentication data.

A couple of years ago Brazil had a large number of phishing scams pretending to come from banks. But the fraudsters have realized that people now know these are fake. They are using a different approach, sending trojan horses that capture the same information while users access the real bank website. It's easier to make people click on messages that appear to come from innocent sources unrelated to banking, like virtual greeting card websites or government agencies (claiming there's a problem with your tax return, for example).

Banks need to protect their communication with their clients, but that won't be enough to ensure credentials are not stolen. They need additional measures for that, such as one-time password cards or tokens like SecurID.
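For reference, the idea behind most of these tokens and cards is a counter-based one-time password. Here is a minimal sketch of the HOTP algorithm from RFC 4226 (a public scheme in this family; SecurID itself is proprietary):

```python
# Minimal HOTP (RFC 4226) sketch: each counter value yields a fresh
# short code, so a captured code is useless for the next login.
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the counter encoded as 8 bytes, big-endian
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector, ASCII key "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # → 755224
```

A printed password card is essentially a precomputed table of such codes, indexed by position instead of by a counter inside a token.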

Why does phishing work?

I've just read a very good article about Why Phishing Works. I'm glad that some of my personal thoughts on the subject were confirmed by the study presented in the text. I'll try to find some time, and recover some of my ancient programming skills, to develop an anti-phishing toolbar proof of concept. I know there are already plenty of them, but it's a good excuse to try to build something after so many years :-)

Tuesday, March 28, 2006

Products Evaluation

I've just read a very good article about doing security evaluations of IT products. I especially liked this part:

"9. Do not be sorry for a vendor.
There were projects when our evaluation results literally made people cry and beg to buy their products. One vendor even offered a 100K product for free, so they could add the company logo to the list of their customers. Remember, you are choosing the product to protect your assets and if it fails and expose your data - you are the one who will be in trouble."

Some vendors look at me like furious animals after arguing with me about their products' security features. I just can't hear things like "we have an assymetric 198 bits 3DES encryption" (yes, it was exactly like that) without complaining.

What makes me uneasy is that if vendors are used to giving answers like that (or just saying "don't worry, the data is encrypted"), it means people are neither asking the right questions nor understanding the answers.

Monday, March 27, 2006

How to deal with this?

It's the second time this year that we have a known vulnerability that can be used to install malicious code on users' computers with no patch released. Just remember that almost all big companies rely on the "patch management + antivirus" formula to avoid these threats.

What would be a big threat for those companies? Let's suppose malicious code designed to steal corporate information. If Mr. Criminal creates one and spreads it across a limited target space (to avoid being identified by antivirus vendors) using one of those unpatched vulnerabilities, he will succeed in stealing a good amount of information. Will it be detected? Probably not, especially if his code vanishes from the victim's computer after doing the job (and after sending the results out through proxy-enabled HTTPS or DNS tunneling).
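The DNS tunneling half of that scenario is at least partly detectable, because tunneled payloads tend to show up as long, high-entropy query labels. Here is a toy heuristic sketching the idea; the length and entropy thresholds are illustrative guesses, not tuned values, and a real detector would need rate and volume analysis too.

```python
# Toy DNS-tunnel heuristic: flag queries whose leftmost label is both long
# and high-entropy, since tunnels pack encoded payload into the query name.
# Thresholds below are illustrative, not tuned production values.
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(qname: str, min_len: int = 30, min_entropy: float = 3.5) -> bool:
    # Ordinary hostnames are short; encoded payload labels are long and random.
    first = qname.split(".")[0]
    return len(first) >= min_len and label_entropy(first) >= min_entropy

print(looks_like_tunnel("www.example.com"))  # → False
print(looks_like_tunnel("abcdefghijklmnopqrstuvwxyz012345.evil.example"))  # → True
```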

I'm not trying to spread FUD with this imaginary scenario. I believe that companies need to understand that the PM+AV formula is not enough to avoid problems caused by infected workstations. Yes, it works perfectly against dumb and simple malware, but not against malware made by professional criminals. And we are already seeing that this is not science fiction (good example).

There is a need for better workstation protection and better detection of abnormal user behaviour. Users suddenly trying to collect and send out huge amounts of information need to be promptly detected by the security team. This is one of the goals of my current master's thesis: I'm trying to integrate different forms of intrusion detection targeted at internal networks. Honeytokens will probably play a part.
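A honeytoken can be as simple as a planted record that no legitimate process should ever touch, so any appearance of it in logs or outbound traffic is an alert by definition. A minimal sketch of the idea (the token values and log lines are invented for illustration):

```python
# Toy honeytoken monitor: plant fake records that nothing legitimate uses,
# then alert whenever one of them shows up in a log or traffic capture.
# Token values and log lines are invented for illustration.

HONEYTOKENS = (
    "ACCT-000-FAKE-7431",       # planted account number
    "Jonathan Q. Decoyman",     # planted customer record
)

def scan_line(line: str) -> list:
    """Return the honeytokens found in one log/traffic line."""
    return [t for t in HONEYTOKENS if t in line]

log = [
    "GET /report?acct=ACCT-119-REAL-2210",
    "POST /export body contains ACCT-000-FAKE-7431",  # someone grabbed the bait
]
for entry in log:
    for token in scan_line(entry):
        print(f"ALERT: honeytoken {token!r} seen in: {entry}")
```

The appeal is the near-zero false positive rate: unlike signature matching, there is no benign reason for the token to move at all.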

Friday, March 24, 2006

X.805 -> ISO 18028-2

It was recently announced that the X.805 standard has also become ISO 18028-2. It's a network security standard, which was introduced to me by my friend Nelson Correa. It was written by people from the telecom world, and it reads much like other telephony standards, with all its planes, dimensions, etc.

Anyway, I think it's a great document. I like it because it's very pragmatic, more focused than, for example, ISO17799. I suggest that any infosec professional take a look at it. There is a draft available here.

Blog visits increase explained

Yesterday I was looking at this blog's access log and noticed a sudden increase in the number of visits. I wondered what could have caused it, and today my hypothesis was confirmed.

Thanks, Martin McKeay, for mentioning the blog in this week's Network Security Podcast! It was the push I was hoping for.

Martin showed, like all native English speakers, how hard it is to say my name when you don't know Portuguese (Spanish is similar). You can hear how to pronounce it correctly at http://www.oddcast.com/sitepal/, using the SitePal demo. There is a Brazilian Portuguese (female) voice there that can read anything you type; it's fun to play with.

Thursday, March 23, 2006

Bank trojans - it's just beginning

There has been a lot of news in the last few days about trojans targeting bank customers. Although they are making noise because of their ability to capture authentication data, I still think this is nothing very different from what has been predicted for a long time.

My main concern is with code that hasn't appeared yet. Last year I gave a presentation with a PoC of code that installs itself as a BHO (Browser Helper Object). It is not a trojan that steals information; it changes information. A user can access his Internet banking website with two-factor authentication (like a SecurID) and authenticate again when making a transaction, but the trojan doesn't save any information. It just changes the target account. It doesn't need to send anything back to its creator; it carries out the fraud on its own, riding on the user's authenticated session.

Internet banking won't be safe until the endpoint security problem is solved. You can build fraud detection and prevention processes to live with the risk, but if you want to solve the problem you will need to provide endpoint security.

New IE vulnerabilities

I usually don't like to spread FUD by asking people to abandon IE and migrate to this or that browser. However, I must admit that today it's safer NOT to use IE.

I think there's a difference that comes from market share and from the number of "haters" MS has. People with the intention to do harm will focus on vulnerabilities that bring them a bigger return, and "MS haters" tend not to follow responsible disclosure guidelines when dealing with MS products. Even if this is unrelated to software quality, it makes IE less secure. I use Firefox and really like it. If my line of thinking is correct, getting more people to adopt it could even make IE more secure, as the browsers would start to share the attackers' focus.

I believe all browsers have vulnerabilities, and under equal conditions (vulnerability research focus and responsible disclosure) a well-informed user would be able to use any of them with acceptable security.

Testing BlogJet

I have installed an interesting application - BlogJet. It's a cool Windows client for my blog tool (as well as for other tools). Get your copy here: http://blogjet.com

"Computers are incredibly fast, accurate and stupid; humans are incredibly slow, inaccurate and brilliant; together they are powerful beyond imagination." -- Albert Einstein

Monday, March 20, 2006

Security through Begging

From Schneier's blog. Not only does this solve the wrong problem, according to Schneier, but it also shows that governments receive VERY bad infosec advice. It's quite common to see defense department people in charge of advising on these matters. There are plenty of superficial parallels between real warfare and information warfare, but assuming they are the same thing is a very big mistake. Call in the subject matter experts, please.

Security Through Begging

From TechDirt:

Last summer, the surprising news came out that Japanese nuclear secrets leaked out, after a contractor was allowed to connect his personal virus-infested computer to the network at a nuclear power plant. The contractor had a file sharing app on his laptop as well, and suddenly nuclear secrets were available to plenty of kids just trying to download the latest hit single. It's only taken about nine months for the government to come up with its suggestion on how to prevent future leaks of this nature: begging all Japanese citizens not to use file sharing systems -- so that the next time this happens, there won't be anyone on the network to download such documents.

Even if their begging works, it solves the wrong problem. Sad.

Is it a joke?

Oracle is releasing software to help people search through their personal data. The most interesting thing about it is this quote from Larry Ellison:

"We have the security problem solved. That's what we're good at, and that's the hard part of the problem."

Is it a joke? Why does Ellison keep ignoring everything David Litchfield keeps showing about their products?


(I never thought I'd say something like this:) it's time for Oracle to learn a bit from Microsoft about dealing with security issues. Yes, Microsoft has a lot of them, but at least they take the matter seriously.

Friday, March 17, 2006

Firefox extensions for webapp testing

For those who perform security tests on web applications: today I ran across this list of Firefox extensions that can help a lot with the job. One of them lets you edit your cookies, while another can be used to edit the entire HTTP request. Very handy for testing applications without installing Paros or other proxies.

BS7799-3

The BSI has just published a new document in the 7799 family, BS7799-3. It is a guide to implementing a risk management process, one of the main parts of the ISMS proposed by BS7799-2/ISO27001. I haven't read the document yet, but it's good to know that material is being produced to support the development of the main infosec processes an organization needs. There are several other standards being developed by ISO's SC27, which is in charge of the 27000 family. I believe that in a few years we will have a very good set of security standards.

Threat evolution

It's interesting to watch the evolution of vulnerability research and exploit development.

In the beginning we used to see vulnerabilities in implementations of basic network protocols, like ICMP, IP and TCP. It was the time of TCP spoofing, fragmentation attacks and the Ping of Death.

Later, those protocol implementations became more solid, and hackers (both white and black hats) shifted their focus to daemons, like HTTP (Apache, IIS), SMTP (sendmail!), etc. I think this has been the most fertile terrain for them so far, mainly because of the diversity of versions and configurations of all those daemons.

But even the daemons became more solid. So where to look for more vulnerabilities? Initially we thought it would be web applications. But finding web application vulnerabilities wasn't as attractive to those searching. It wouldn't bring researchers the desired publicity (finding a vulnerability that can impact all Windows users is one thing; finding something related to a specific website's shopping cart is another), and for the black hats it meant less profit. So what became the next target?

Something very natural happened: they climbed the layers! We jumped from downstairs, layers 3 and 4, straight to layer 6. Yes, people started to find quite interesting things in the presentation layer (which is so strange that few people understand what it really means). There are lots of standards for representing data like images, audio and video, and people started checking how applications handled that data. That's when vulnerabilities appeared in the use of ASN.1, several image file types (JPG, TIFF and, most recently, WMF), video (WMV) and many others. And they'll probably still find more, as these data handling functions were never considered risky by developers. There must be a lot of bad code in there. But what this shift means for security is what really matters.

First, the link between a service and a vulnerability is gone. You can't frame the problem as "I don't have this port open in my firewall, so I'm secure" anymore. The vulnerable file types can be transferred in several ways, by different applications and services, mainly HTTP (oops... isn't AJAX turning everything into HTTP?) and e-mail protocols. It's hard to understand the impact of such a vulnerability in a big network. The attacks don't need to target servers, as many of the applications handling these files run on workstations. The target now is the user, the workstation. And that will be a real problem, because everybody was busy putting the public servers in DMZs and buying another IPS, trying to keep the perimeter safe. Hehe, sometimes I feel like saying "I told you! I told you!", but it's not very productive. :-)

An important step is trying to reduce the impact of a compromised workstation on the network. Today's networks are too "all or nothing"; that won't help with this new reality. Another important thing is building better ways to protect the workstations themselves. Today their main protection tool is the antivirus: reactive and signature based. These tools need to evolve, improving their ability to deal with 0-days and becoming more preventive. Isn't anybody selling a "workstation IPS"? Gee, it would make a good "revolutionary new product category" :-)

This threat evolution is changing the way we need to build our defenses. That alone is enough to make our jobs interesting. It's certainly bad in terms of business risk, but yeah, it's really cool.

Why I don't like IPS

A few days ago someone asked me why I don't like IPSes. An IPS is another device in the traffic path, subject to its own vulnerabilities and failures (see a recent vulnerability report for the TippingPoint IPS). I think that's too much risk for too little benefit, especially if you have a good vulnerability management process and a properly managed firewall.

I still think it can be a good tool for companies that are frequent targets of script kiddies and that expose lots of published services, as it's easier for something bad to slip through their processes and defenses. But, IMHO, in most cases it's just a waste of resources.

Brazilian bank trojans

I was impressed today when I read this story in The Register. Trojans that capture mouse clicks to defeat "screen keyboards" have been common here in Brazil for more than two years. Are we (Brazilian infosec people) failing to report these things to the international community?

I remember reviewing forensic information from FTP servers used by these trojans a couple of years ago. There were lots of little images showing the area clicked by the user, together with txt files containing typed passwords. One of those trojans was even capable of stealing the user's private key information.

These trojans are perhaps the main reason why Brazilian banks are distributing cards with passwords to be used in a "one-time password"-like scheme, such as this one from Banco Itaú:

CSO challenges

The challenges of the CSO job are very well described in this CSO Magazine article. One of the comments caught my eye:

"Business Continuity Planning is like concern for the Environment. Something that can only be reliably practiced by the well off. Protecting the rain forests is important to citizens of developed countries lacking rain forests. For citizens of rain forest areas, the main concern is getting by, feeding the kids and survival, which doesn't necessarily equate to protecting the environment, and can actually lead them to cut valuable trees for charcoal to use for cooking fires. In a similar manner, maintaining redundant systems of production and building hardened sites for maintaining business continuity requires a vision beyond the bottom line. If the sky falls, those who set aside the resources for BCP will shine, however, until the sky falls the BC planner looks like a spendthrift and is in the sights of the budget cutter. When the disaster strikes the poor planner has the best excuse in the world, it was God's will. HSD did the best it could during Katrina, The intelligence agencies did the best they could during 9/11. No one can blame them right? Everyone understands lack of foresight, we are all guilty of that, the ones who seem to survive best are the ones whose heads are in the sand. Those who actually foresee a disaster like those are negligent if they cannot share their vision. So, why bother? It doesn't bode well for getting the committment required to spend the money and avoid cutting it during the next budget cycle."

Terry Clark
IT Director
The Republic

File hijacker trojan

There is a story in Security Focus this week about a trojan that encrypts files on the victim's computer so its creator can later demand money to decrypt them. The price seems to be something like 300 bucks. Now imagine this kind of thing in a corporate environment. Running something like that on a file server or even a database server would be enough to extort considerably larger amounts of money. Very good "movie plot".

pauldotcom podcast

I'm still just starting to select my favorite infosec podcasts, but pauldotcom is definitely one of them. The guys are extremely funny. I especially like the "I may or may not" Twitchy stories. Kudos, guys!

ISO NBR 27001

On Tuesday I went to the final meeting of the committee that deals with Brazilian information security standards, to approve the local version of ISO27001. It's very good to be part of the process. The standard will be published as NBR/ISO 27001 in the coming weeks. The translation was very well done; it will be a great document. I hope to see it being used by Brazilian companies in the coming months.

Thursday, March 16, 2006

blogging in english

Hello! For those who don't understand Portuguese, welcome to my blog. I've been blogging about information security for more than two years now, but always in Portuguese. I'm very happy to have several Brazilian colleagues constantly accessing my website, but I also want to reach a broader community. I realized that would only be possible by blogging in English. I apologize to native speakers for my bad grammar; feel free to correct me if you like.

I'll try to post translations of my favorite older posts first. Meanwhile, I'll also post my quick comments about news (something I constantly do in Portuguese) in English. I hope you enjoy this blog; please feel free to comment on the posts and drop me a line if you want to discuss anything presented here.

X 1 (go! go! go!)