Wednesday, August 29, 2012

Security generalists (and QSAs...)

This post is not meant to be a rant about PCI DSS and the all-too-common poorly qualified QSAs who make life hell for those pursuing compliance validation. Although it evolved from that, it's now just a realization on my part about the role of generalists in Information Security.

They are the glue. But more about that later.

I've been working through a PCI validation assessment, and during a discussion of findings with the QSA I realized that, in a room full of people (including more than one QSA), no one really understood the requirements being discussed, their intent, or what alternatives could be acceptable as compensating controls. It was all around custom application development, so requirements 6.3 to 6.6.

PCI DSS includes a bunch of requirements for the secure development of custom applications. There are items for adding security considerations to the early phases of development, doing code reviews, security functionality testing and vulnerability scanning (not to mention secure coding itself). My personal point of view is that it's too prescriptive (a recurring criticism of PCI DSS), where maybe the best thing to have would be outcome-based requirements. After all, what we want are secure applications. Or, described better, applications that can't be exploited for unauthorized access to cardholder data.

An issue with all the prescriptive requirements is that they force the people involved to understand an SDLC. They need to understand exactly what a code review is, what functionality testing is and what vulnerability scanning is. Without that, you'll see discussions where those terms are used interchangeably, which just makes the assessment messy. If the QSA is one of those who can't tell the differences, it gets VERY messy. Is that because he is a bad QSA? Yes, from a blunt point of view, as the QSA should be able to understand what he needs to check, but I think we are not being entirely fair to those professionals.

What's the required background for a QSA? If it's a guy who used to work in Network Security, then went through the QSA training and passed the exam, is he ready for any assessment? Unless he is one of those curious and ever-learning minds, it's no shock to find that he (like other auditors and security professionals in general) is completely ignorant of big pieces of the body of knowledge (BOK) required by his function. How can that happen?

One of the key answers is how security professionals obtain their credentials. Unlike engineers, lawyers and doctors, we are not required to get a degree in Infosec and sit for a board exam. That's no different from many IT-related jobs, but there's a catch: we are simultaneously asking people to have a minimum level of knowledge in a number of disciplines and not requiring them to prove that they have achieved it.

But what about the certifications? CISSP? The QSA test?

All of them will (at least in theory) cover everything, but will gladly allow someone to pass without a clue about pieces of the BOK. There's a minimum pass mark, but in almost all of those credential exams there is no minimum mark per knowledge domain. So you can ace the network security piece and draw a blank on the secure development part, and it's still OK. The credential obtained, however, still implies that you have that minimum skill level in the domain where you couldn't answer a single question.

I've seen that multiple times: CISSPs who can't even understand firewall rules or don't know what an application vulnerability looks like. It's the same thing with the QSA training, so you end up with someone who needs to assess whether an organization is doing security functionality testing but doesn't even understand how that differs from code review.

Civil Engineers, for example, can't become engineers if they can't achieve a pass mark in Solid Mechanics. Having to sit through (and pass with a minimum mark) the individual courses that compose the Engineering BOK ensures that no critical gap will exist in an engineer's education. It's not perfect, of course, but it's far better than the unrealistic assumptions about minimum skills we currently have in Infosec.

That's where the Infosec generalist takes the stage. There are several roles in our field that must be filled by people with minimum skills in each piece of our BOK. QSAs are just one example. If we want to get rid of those "how can he ask something so stupid" moments (OK, reduce them…there's no patch for stupid), we must start forcing people in (or trying to get into) those roles to reach minimum levels in all BOK domains. Let's change the CISSP credential (or create a new one), for example, forcing the candidate to reach a minimum score in every domain. Same thing for QSAs, CISAs, etc. I'm not sure I want to advocate the creation of a new certification, but I'm starting to think it could be useful too. Reducing the pressure for early specialization is also something we could do to increase the number of good generalists out there.

There are many roles out there that would benefit from good-quality generalists. Security organizations within big enterprises normally have consultants or advisors aligned with the different LOBs or departments, with responsibilities that range from access control to providing security requirements for new applications and business processes. I've met lots of people in those roles, but only a few had the necessary skill set.

The interesting aspect of those roles is that they share a common thread: they are often liaison roles, bringing together different groups and their specialists. Without a generalist, the dialogue with one or more of those groups is undermined, with that person usually lining up with the group closest to his own skill set and being seen as "one of them" by the others. Think about it: Developers vs. Infrastructure, Policy vs. Technology, Business vs. Technology, Servers vs. Networks, Blue Team vs. Red Team. Someone capable of speaking the language of all those groups will be able to reduce conflict, acting as "the glue" between them.

There is value in having security generalists. Keep that in mind when hiring people for those roles, or when considering your career options. Even if your plan is to eventually manage a team of security professionals, being a generalist puts you at an advantage for that (but don't forget that "Manager" is also a role with its own set of minimum skills).

Tuesday, August 21, 2012

Weaknesses in MS-CHAPv2 authentication - from the MS Security Research & Defense blog

Interesting post from the MS Security Research & Defense blog describing the newly discovered MS-CHAPv2 weaknesses:

MS-CHAP is the Microsoft version of the Challenge-Handshake Authentication Protocol and is described in RFC2759.  A recent presentation by Moxie Marlinspike [1] has revealed a breakthrough which reduces the security of MS-CHAPv2 to a single DES encryption (2^56) regardless of the password length.  Today, we published Security Advisory 2743314 with recommendations to mitigate the effects of this issue.

Any potential attack would require a man-in-the-middle situation in which a third party can get all the traffic between the client and authenticator during the authentication.

Without going into much detail about the MS-CHAPv2 protocol, we will just discuss the part that would be affected by this type of attack: the challenge and response authentication.  This is how the client responds to the challenge sent by the authenticator:

The authenticator sends a 16-byte challenge: CS

The client generates a 16-byte challenge: CC

The client hashes the authenticator challenge, the client challenge and the username to create an 8-byte block: C

The client uses the MD4 algorithm to hash the password: H

The client pads H with 5 null bytes to obtain a 21-byte block and breaks it into 3 DES keys: K1, K2, K3

The client encrypts the block C with each of K1, K2 and K3 to create the response: R

The client sends back R, C and the username.

Or:

C = SHA1(CS, CC, UNAME), truncated to 8 bytes

H = MD4(PASSWORD)

K1 | K2 | K3 = H | 5 bytes of 0

R = DES(K1, C) | DES(K2, C) | DES(K3, C)

There are several issues in this algorithm that, combined, can result in the success of this type of attack.

First, all elements of the challenge and response besides the MD4 of the password are sent in the clear over the wire, or can easily be calculated from items that are sent over the wire. This means that for a man-in-the-middle attacker, obtaining the password hash is enough to re-authenticate.

Second, the key derivation is particularly weak. Padding with 5 bytes of zero means that the last DES key has a key space of only 2^16.

Lastly, the same plaintext is encrypted with K1 and K2, which means a single key search of 2^56 is enough to break both K1 and K2.

Once the attacker has K1, K2 and K3, he has the MD4 of the password, which is enough to re-authenticate.

- Ali Rahbar, MSRC Engineering

Now, about that "Any potential attack would require a man-in-the-middle situation in which a third party can get all the traffic between the client and authenticator during the authentication." Isn't that exactly the scenario that a secure authentication protocol is supposed to protect you against?
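To make the numbers above concrete, here is a minimal Python sketch of the derivation as the post summarizes it. This is not Microsoft's code: the function names are mine, the UTF-16LE password encoding is my assumption from the protocol's NT hash, and it leans on pycryptodome for DES and on an OpenSSL build of hashlib that still exposes MD4.

    import hashlib
    from Crypto.Cipher import DES

    def des_key_from_7_bytes(k7):
        # DES wants 8-byte keys: insert a (here unset) parity bit
        # after every 7 bits of the 56-bit key material.
        bits = ''.join(f'{b:08b}' for b in k7)
        return bytes(int(bits[i:i + 7] + '0', 2) for i in range(0, 56, 7))

    def mschapv2_response(cs, cc, username, password):
        # C = 8-byte challenge hash (argument order as summarized above;
        # see RFC 2759 for the exact construction).
        c = hashlib.sha1(cs + cc + username.encode()).digest()[:8]
        # H = MD4 of the password, padded with 5 zero bytes to 21 bytes.
        h = hashlib.new('md4', password.encode('utf-16-le')).digest() + b'\x00' * 5
        # Three 7-byte DES keys: K3's last 5 bytes are always zero,
        # so K3 has only 2**16 possible values.
        k1, k2, k3 = (des_key_from_7_bytes(h[i:i + 7]) for i in (0, 7, 14))
        # R = DES(K1,C) | DES(K2,C) | DES(K3,C): the same plaintext C under
        # all three keys, so a single 2**56 search recovers both K1 and K2.
        return b''.join(DES.new(k, DES.MODE_ECB).encrypt(c) for k in (k1, k2, k3))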

Friday, August 17, 2012

A quick tale about a PMT

After my last post about PMTs, I remembered a situation (in a previous and distant life) when I worked in a financial institution's security office. We were being hammered by Internal Audit about our controls around access provisioning. There were several cases in which we couldn't find the access request form (paper!) for adding users to domain groups. Of course, there was an Identity Management project that promised to magically automate everything, but we needed something to address our needs until then.

So I created a simple PMT solution. We modified the Access database that recorded the content of those access request forms so that it generated a text log file, used a Sysinternals tool to dump the Event Log from the PDC (well, it was some time ago…NT4 domains! :-O) to a text file, and I created a script that compared all access management events (creation of groups and users, additions of users to groups) with the forms we had registered. Any deviations were then investigated by the team.
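The core of that script, in today's terms, would look something like this minimal sketch (the file names and the three-column layout are hypothetical; the original chewed on raw text dumps):

    import csv

    def load_rows(path):
        # Both files share a hypothetical layout, one row per event or
        # approved form: action,target,account
        # e.g. "add_to_group,Domain Admins,jdoe"
        with open(path, newline='') as f:
            return {tuple(row) for row in csv.reader(f)}

    if __name__ == '__main__':
        # Forms registered in the Access database, exported as text
        approved = load_rows('approved_requests.csv')
        # Access management events extracted from the dumped Event Log
        observed = load_rows('domain_events.csv')
        # Anything done on the domain without a matching request form
        for deviation in sorted(observed - approved):
            print('INVESTIGATE:', deviation)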

It was fun to see how much was done informally by the domain administrators. The new process forced new habits on them (such as immediately informing us any time they needed to do something that would show up in the logs), solved our problems with IA and didn't cost a dollar (at least no green dollars). Considering the number of mistakes identified (honest mistakes, but ones that were granting excessive access rights), we actually reduced risk to the organization.

If a financial institution, normally more formal and process-oriented, could do it, why can't those solutions be useful everywhere else?

How to make rich men use poor man's tools?

I was reading this great post from Johannes Ullrich on the SANS ISC Diary (in which he describes a very nice and simple script that helps use DNS query logs as a malware detection resource) when I realized that there are tons of very nice tricks and solutions out there (normally described as "Poor Man's Tools", or PMTs) that are simply not used by medium and large organizations. I've seen it happen multiple times, and it normally goes like this:

1. Techie guy finds the solution and thinks: cool! Proposes it to middle management.

2. Middle management thinks:

a. "no way we will spend time and resources on this" OR

b. "it's too simple to be good" OR

c. "I've never heard about this on those vendor webcasts, so it's not worth it" OR

d. "oh no, if I do this once the executives will deny all my budget requests, expecting me to solve everything with things like this" OR

e. "it's open source, it doesn't work in an organization like ours" OR

f. "I can't trust this thing, it doesn't come from IBM/Microsoft/Oracle" OR

g. put your stupid reason here

3. If by a miracle it moves up the food chain, it's denied by higher management for one of the same reasons listed in #2.

So we end up with organizations struggling with problems that could be solved by those PMTs. I'm more than aware that some of those concerns, especially around maintenance costs, are not totally unfounded. But there are organizations that actually do these things, normally due to different cultures (universities, dot-com companies), and they are pretty successful with them. So, what could we do to change the way organizations deal with PMTs and increase their adoption?

I think we need to sell the idea of Simple Solutions Task Forces. Every IT group in a big enterprise, including Security (don't even start by saying Security is not an IT group; there's at least one piece of it that is), should have its own SSTF: people who would look at problems and say "hey, we can actually fix that with this little script". I've seen so many very expensive products that are nothing more than simple scripts disguised as pretty shiny boxes, so in the end the result may not be that different in terms of features, and the cost and time to deploy the solution can be really reduced. As it would be proposed and implemented by a specialized and formalized group, all the required precautions around documentation and support would be covered.

Another option would be to just create a framework for those solutions in the organization. Someone like those Standards and Methodologies groups would put together what is necessary for anyone to implement a PMT in the enterprise: a support and documentation model, a code repository, minimum requirements for roles and responsibilities. With that available, anyone could champion a PMT implementation while providing the necessary assurance that it won't become an unsupported black-box Frankenstein.

For my part, I was thinking about assembling a crowdsourced Security PMT repository to see if we can create some momentum and give these solutions a little more visibility and a chance to find a place in the sun. We know our problems, we have the tools; what about using them?
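As a taste of what such a repository could hold, here is a minimal sketch in the spirit of Ullrich's DNS idea (not his script; the log format, with client IP and queried name as the first two whitespace-separated fields, and the threshold are my assumptions): flag names that only a handful of hosts ever query, a common tell of malware check-ins.

    from collections import defaultdict
    import sys

    RARE_THRESHOLD = 2  # flag names queried by this many clients or fewer

    clients_per_name = defaultdict(set)
    with open(sys.argv[1]) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 2:
                continue
            client, qname = fields[0], fields[1].lower().rstrip('.')
            clients_per_name[qname].add(client)

    for qname, clients in sorted(clients_per_name.items()):
        if len(clients) <= RARE_THRESHOLD:
            print(f"{qname}\t{len(clients)} client(s): {', '.join(sorted(clients))}")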

Wednesday, August 15, 2012

You don't need to be too concerned about the Cloud...

1. Because your firewall rules suck

2. Because you are not applying patches

3. Because your users are all administrators of their desktops

4. Because you trust those nice charts with HIGH/MEDIUM/LOWs

5. Because you have malware active in your network…

6. …and you can't see what it is doing…

7. …but you think the next shiny box will solve it

Maybe when you fix those you can start worrying about whether the Cloud is secure enough for you.