Thursday, November 29, 2012

Great study on spear-phishing from TrendMicro

Trend Micro has published a great study about spear-phishing email. It’s available here.

Although I would also like to know the overall numbers (i.e., the number of samples used during the research, to ensure the findings are really meaningful), there is good data in the paper that can feed into a well-established SecOps practice. Some interesting pieces:

“Monitoring revealed that 94% of targeted emails use malicious file attachments while the rest use alternative methods like installing malware by luring victims to click malicious links and to download malicious files and using webmail exploits”

“Spear-phishing emails can have attachments of varying file types. We found that the most commonly used and shared file types in organizations (e.g., .XLS, .PDF, .DOC, .DOCX, and .HWP) accounted for 70% of the total number of spear-phishing email attachments during our monitoring.”

I’ve been spending a lot of time looking at numbers that affect the ability to look for potential compromises. For example, how many email messages does an organization usually receive (of course it varies a lot according to size and business)? How many of those have attachments? How many of those attachments could be considered potentially dangerous? Is this information useful for narrowing the focus of detection and investigation practices?
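
To make those questions concrete, here is a minimal sketch of the kind of measurement I have in mind. It assumes a CSV export from a mail gateway with an "attachments" column; the file name, column names and the "risky" extension list (taken from the Trend Micro quote above) are all assumptions, not any real product format:

```python
import csv
from collections import Counter

# Assumed export: one row per message, "attachments" holds a
# semicolon-separated list of attachment file names (may be empty).
RISKY_EXTENSIONS = {".xls", ".pdf", ".doc", ".docx", ".hwp"}

def attachment_stats(log_path):
    total = with_attachments = risky = 0
    by_ext = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            names = [n for n in (row.get("attachments") or "").split(";") if n]
            if not names:
                continue
            with_attachments += 1
            exts = {"." + n.rsplit(".", 1)[-1].lower() for n in names if "." in n}
            by_ext.update(exts)
            if exts & RISKY_EXTENSIONS:
                risky += 1
    return total, with_attachments, risky, by_ext

if __name__ == "__main__":
    total, with_att, risky, by_ext = attachment_stats("mail_gateway_export.csv")
    print(f"messages: {total}, with attachments: {with_att}, "
          f"with risky attachment types: {risky}")
    print(by_ext.most_common(10))
```

Even rough numbers like these make it much easier to argue where detection and investigation effort should go first.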

I believe our industry has spent a lot of time working on detection systems without necessarily leveraging evidence-based guidance about what to look for and where to look for it. Reports like this one from Trend Micro are really useful to change that and help organizations maximize the return on their SecOps resources.

 

Wednesday, November 14, 2012

The grain of salt

As a non-native English speaker, I always like to learn new idiomatic expressions that give color to the language. One of my favorites is to tell someone to take some information “with a grain of salt”. According to Wikipedia,

The phrase comes from Pliny the Elder's Naturalis Historia, regarding the discovery of a recipe for an antidote to a poison.[2] In the antidote, one of the ingredients was a grain of salt. Threats involving the poison were thus to be taken "with a grain of salt," and therefore less seriously.

An alternative account says that the Roman general Pompey believed he could make himself immune to poison by ingesting small amounts of various poisons, and he took this treatment with a grain of salt to help him swallow the poison. In this version, the salt is not the antidote. It was taken merely to assist in swallowing the poison.

The Latin word salis means both "salt" and "wit," so that the Latin phrase "cum grano salis" could be translated as both "with a grain of salt" and "with a grain (small amount) of wit."

The phrase cum grano salis is not what Pliny wrote. It is constructed according to the grammar of modern European languages rather than Classical Latin. Pliny's actual words were addito salis grano.[3]

But why am I talking about this in an Information Security blog? Because I want to say that all security advice should be taken with a grain of salt!

When I’m reading security blogs and listening to security podcasts I’m always impressed by how some of my colleagues present advice as the ultimate truth. There are a lot of “MUSTs” (in fact, a lot of “MUST NOTs”) presented as a magical set of rules that will allow us to easily do everything we want in a secure manner. However, all of them forget one of the most important pieces of any solution design exercise: CONSTRAINTS.

Of course we don’t want to use FTP or TELNET. Yes, 4-digit-only passwords are stupid. Oh, how come you still have Java installed?

It’s not because people are stupid (well…ok, sometimes). It’s because they have, in most cases, valid constraints to be considered when security is being assessed or designed. Sometimes the only software available in the market for a specific business process is that crappy screen scraper based on TELNET. Or that hardcoded password crap is provided by the vendor of a multi-million dollar medical device that you can’t just throw away because of a security vulnerability. We should always try to apply pressure and make people aware of how unwise those things are from a security perspective, but let’s be honest: if you were in the place of an executive, would you really consider a huge change to your business because of things like that? No, you would tell your security team to “be creative” or, most probably, call one of those big-brand consultants to tell you what you want to hear.

So, let’s pick our battles. We’ll never be able to work under ideal conditions. We’ll have to deal with “stupid” things like FTP, Telnet, unpatched Windows boxes and unsupported software. We should keep evangelizing to the IT world that security should be built in, but we have to be prepared to bolt it on.

Next time a security curmudgeon shows up with his canned “you MUST NOT” advice, ask him back: “and what if I have to?” That’s a good way to see who is just playing to the crowd and who deals with the real world and its conditions full of constraints.

Wednesday, October 17, 2012

Groupthinking

Very nice post from Dave Shackleford:

These days, I am very, very afraid for the future of CISOs. Over the past few years, and specifically the past 12 months, I have become increasingly alarmed at the level of “groupthink” and “synchronized nodding” going on with security executives. Here are some of the things I am seeing:

1.    Lots of talking about the same shit, with absolutely no innovation at all. Good examples include metrics (we need them! they’re IMPORTANT!) and talk about policy and governance that usually means absolutely nothing.

2.    A desperate need to find “the metrics” to report to “senior management” – there is no such thing. Your management, in all likelihood, does not want any tactical numbers on antivirus events, IDS alerts, or such blather. They want real risk advice on business goals and functions. Period.

3.    Managing by managing what everyone else is managing. You would not BELIEVE how many security products get purchased because other security executives are buying them.

[…]

That’s really the current picture of our field: people doing what the others are doing. I like his idea of treating the security program like a startup, but an interesting thing to consider is how many CISOs would even have the opportunity to do that. Their bosses would expect something different, and so would their peers, security committees and external consultants and auditors. It’s not easy to escape that hamster wheel!

A CISO job where one has the opportunity to shake things up like Dave suggests is a dream for any security professional. Unfortunately a lot of those in positions like that are too busy…groupthinking :-)

Thursday, September 20, 2012

This is a test post

Sorry about this, but Posterous put some odd stuff on my LinkedIn profile via the "autopost" feature for my last post. Investigating what's going on...

Wednesday, September 19, 2012

Best of breed

The debate about whether it makes sense to buy best-of-breed security products or whether “good enough” is good enough (ok, that was ugly :-)) is an old one. Mike Rothman wrote a very good post about this a few years ago. I agree with most of his points, namely that “best of breed” makes sense for innovative products but not for mature technologies. But lately I’ve been seeing discussions that expand on that idea.

I think that Mike’s post makes perfect sense for products. However, I’ve been seeing the same rationale being applied to services, and I’m really not sure it applies there. Let’s think about Managed Security Services, for example. If you are outsourcing your SOC, even if it’s a mature service offering in the market, does it make sense to go for the “good enough”? It doesn’t look like it does. For mature products there isn’t a big difference between the best products and the rest of the pack. In addition, there are usually benefits to getting a product that is part of a suite you already have in place, from a vendor with which you already have an enterprise license agreement or that provides better integration with your other tools.

But services are about human intelligence. You get what you pay for, plain and simple. You can have your big-box IT services provider doing that, but if you look at the way they operate you’ll always see the same things: high turnover, low salaries, unskilled and inexperienced employees. It’s just not possible to provide the same level of service as the boutique providers that specialize in that type of service and thus put a lot more energy into getting the right people doing it. The features of a service are directly linked to the people providing it, so the differences between the best and the rest are bigger than for products.

There are services where that wouldn’t matter, where you really don’t need intelligence and content, just a bunch of eyes and hands. Those are the services that are usually outsourced in all other disciplines, and I don’t see why it would be different in IT or infosec. But we often underestimate the skills necessary for some services, and MSS is a typical example. A SOC manned by unprepared analysts will only spit out the alerts produced by the default configuration and out-of-the-box rules of standard tools. A best-of-breed SOC will bring intelligence to the work: customized rules, configurations and threat information, ingesting internal context and producing meaningful alerts. Be careful when discarding the best of breed. For highly specialized services the “best” is the minimum you should expect, and “good enough” will almost always be “not enough”.

Friday, September 14, 2012

What should I do about BYOD?

There are lots of people providing canned advice about BYOD (and about all the cloud-related stuff too). It’s very important to understand that the only correct answer to the “what should I do about BYOD” question is the standard lawyer line: it depends.

Technology trends such as BYOD (I don’t question the fact that it is indeed a trend) often bring advice in the form of “you can’t fight the future” and “security is past the time of blocking new stuff”. I definitely agree that anyone working with security should keep an open mind, especially about technology trends. But those who are always anxious to stay on the bleeding edge should also understand that security must consider multiple factors when making decisions. BYOD is a very good example of that.

How prepared is the organization for BYOD? What’s the maturity and technology state of the organization’s network? What about the applications: are they ready to be used in a BYOD way? What’s the point of allowing people to work on their iPads if most of them work on fat-client applications running on Windows? What about access control, encryption, etc.? Is your network prepared to handle those for those devices?

Technology is only one aspect. Of course it’s easier for people to read email on a single nice smartphone. But are there any compliance regulations that should be considered? Financial institutions usually have to comply with a lot of regulations about monitoring and controlling employee communications; how will they enforce those in a BYOD model?

It may be straightforward to decide about BYOD in a startup in California, but a big defense supplier may have a few additional threats to consider when making that decision. Keep that in mind when someone asks for advice on BYOD or any other technology trend. The answer may not be as simple as you would expect.

Monday, September 10, 2012

DLP and encryption

The “Security Manager’s Journal” series of articles from Network World is a really nice way to understand the day-to-day challenges of real infosec shops out there. Today’s article, “DLP tool is suddenly blind to email”, is a very interesting example of the challenges related to DLP and encryption. However, the most interesting aspect for me is the approach to decision making for the issue reported.

Summarizing the post: the author says they had implemented a DLP solution, but recently it stopped finding data leaving by email. The issue turned out to be caused by opportunistic TLS encryption between their Exchange server and the cloud-based anti-spam solution. After finding that, the author goes over the potential solutions to allow the DLP system to inspect the encrypted traffic.

What I found interesting about the article is that he never mentioned the alternative of disabling encryption. WHAT?? ARE YOU FREAKING NUTS?? Yes, I’m serious. I mean, encryption is always good, but let’s consider it: email hitting the anti-spam provider will probably be delivered to the final destination unencrypted anyway. So, what’s the real benefit of encrypting it between their Exchange server and the anti-spam system? Is it more valuable than the ability to scan (and block?) outbound messages for data leakage?

That’s the kind of discussion I’d like to see when issues like this one come up. We know that security is always about trade-offs, and this case involves a slightly different trade-off: one control for another. How would we compare the value of the controls? What’s the organization’s priority in this case? Those are all questions that would help us understand the scenario and add a more mature risk management spin to the whole thing.

Elderwood project: the FUD, and some reality too

It’s been interesting to read all the frenzy about what Symantec has been calling “The Elderwood Project”. The summary is: “we are seeing these guys, who were behind that Aurora thing some time ago, still using a lot of 0-days in their hacking of NGOs, the defense supply chain and government agencies”.

There are many different spins on the story now on the interwebs. There are many degrees of FUD around it too, but it’s important to analyze and think about it carefully. Take, for example, this piece from Symantec’s blog post:

In order to discover these vulnerabilities, a large undertaking would be required by the attackers to thoroughly reverse-engineer the compiled applications. This effort would be substantially reduced if they had access to source code. The group seemingly has an unlimited supply of zero-day vulnerabilities. The vulnerabilities are used as needed, often within close succession of each other if exposure of the currently used vulnerability is imminent.

I’m not here to downplay the effort of finding new 0-days. I cannot do it with my technical skills; I know it’s something really hardcore. But wait a minute: “a large undertaking to thoroughly reverse-engineer the compiled applications”, or supposed access to source code? My skepticism alarm rings loud here. We know plenty of security researchers who have been finding dozens of vulnerabilities in the same applications without access to source code and, if not with “minimum effort”, just by playing with fuzzers and doing some part-time (sometimes just-for-fun) testing. While I think there really are very good people putting decent effort into finding vulnerabilities, I don’t think we can conclude there is a huge lab with never-ending resources either. There might be, but we just don’t know.

Now, there is an important lesson to take from this: patching is just not enough, and you have to have good defense in depth and detection capabilities in place too. If the adversary has an “unfair” advantage, such as 0-days, we need to level the field by boosting our monitoring capabilities. That’s important for “regular” organizations, but extremely important for those that can be an interesting target for motivated attackers (state-sponsored groups, crime rings, carders, etc.). This is not FUD, guys. It’s reality.

Wednesday, August 29, 2012

Security generalists (and QSAs...)

This post is not supposed to be a rant about PCI DSS and the all-too-common under-qualified QSAs that make life hell for those pursuing compliance validation. Although it evolved from that, it’s now just my take on the role of generalists in Information Security.

They are the glue. But more about that later.

I’ve been working through a PCI validation assessment, and during a discussion of findings with the QSA I realized that, in a room full of people (and more than one QSA), no one really understood the requirements being discussed, their intent, or which alternatives could be acceptable as compensating controls. It was all around custom application development, so requirements 6.3 to 6.6.

PCI DSS includes a bunch of requirements for secure development of custom applications. There are items for adding security considerations in the early phases of development, doing code review, security functionality testing and vulnerability scanning (not to mention secure coding itself). My personal point of view is that it’s too prescriptive (a recurring criticism of PCI DSS), when maybe the best thing to have would be some outcome-based requirements. After all, what we want are secure applications. Or, better described, applications that can’t be exploited for unauthorized access to cardholder data.

An issue with all the prescriptive requirements is that they force the people involved to understand an SDLC. They need to understand exactly what code review, functionality testing and vulnerability scanning are. Without that you’ll see discussions where those terms are used interchangeably, which just makes the assessment messy. If the QSA is one of those who can’t understand the differences, it gets VERY messy. Is that because he is a bad QSA? Yes, from a blunt point of view, as the QSA should be able to understand what he needs to check, but I think we are not being entirely fair to those professionals.

What’s the required background for a QSA? If it’s a guy who used to work with Network Security, then went through the QSA training and passed the exam, is he ready for any assessment? Unless he is one of those curious and ever-learning minds, it’s not a shock if we find he (and other auditors and security professionals in general) is completely ignorant of big pieces of the body of knowledge (BOK) required for his function. How can that happen?

One of the key answers is how security professionals obtain their credentials. Unlike engineers, lawyers and doctors, we are not required to get a degree in infosec and sit for a board exam. It’s no different from many IT-related jobs, but there’s a catch: we are simultaneously asking people to have a minimum level of knowledge in a number of disciplines and not requiring them to prove that they have achieved it.

But what about the certifications? CISSP? The QSA test?

All of them will (at least in theory) cover everything, but they will gladly allow someone to pass without a clue about pieces of the BOK. There’s a minimum pass mark, but in almost all of those credential exams there is no minimum mark per knowledge domain. So you can ace the network security piece and go blank on the secure development part, and it’s still OK. The credential obtained, however, still implies that you have the minimum skill level in a domain where you couldn’t answer a single question.

I’ve seen it multiple times: CISSPs who can’t even understand firewall rules or don’t know what an application vulnerability looks like. It’s the same thing with the QSA training, so you end up with someone who needs to assess whether an organization is doing security functionality testing but doesn’t even understand how that is different from code review.

Civil engineers, for example, can’t become engineers if they can’t achieve a pass mark in Solid Mechanics. Having to sit through (and pass with a minimum mark) the individual courses that compose the engineering BOK ensures that no critical gap exists in an engineer’s education. It’s not perfect, of course, but it’s far better than the unrealistic assumptions of minimum skills we currently have in infosec.

That’s where the infosec generalist comes onto the stage. There are several roles in our field that must be filled by people with minimum skills in each piece of our BOK; QSAs are just one example. If we want to get rid of those “how can he ask something so stupid” moments (OK, reduce them…there’s no patch for stupid), we must start forcing people in (or trying to get into) those roles to reach minimum levels in all BOK domains. Let’s change the CISSP credential (or create a new one), for example, forcing the candidate to reach a minimum score on every domain. Same thing for QSAs, CISAs, etc. I’m not sure I want to advocate the creation of a new certification, but I’m starting to think it could be useful too. Reducing the pressure for early specialization is also something we could do to increase the number of good generalists out there.

There are many roles out there that would benefit from good generalists. Security organizations within big enterprises normally have consultants or advisors aligned with the different LOBs or departments, with responsibilities that range from access control to providing security requirements for new applications and business processes. I’ve met lots of people in those roles, but only a few had the necessary skill set.

The interesting aspect of those roles is that they share a common thread: they are often liaison roles, bringing together different groups and their specialists. Without a generalist, the dialogue with one or more of those groups is undermined, with that person usually siding with the group closest to his skill set and being seen as “one of them” by the others. Think about it: Developers vs. Infrastructure, Policy vs. Technology, Business vs. Technology, Servers vs. Networks, Blue Team vs. Red Team. Someone capable of speaking the language of all those groups will be able to reduce conflict, acting as “the glue” between them.

There is value in having security generalists. Keep that in mind when hiring people for those roles, or when considering your career options. Even if your plan is to eventually manage a team of security professionals, being a generalist puts you at an advantage (but don’t forget that “Manager” is also a role with its own set of minimum skills).

Tuesday, August 21, 2012

Weaknesses in MS-CHAPv2 authentication - From MS Security RD blog

Interesting post from the MS Security Research & Defense blog describing the newly discovered MS-CHAPv2 weaknesses:

MS-CHAP is the Microsoft version of the Challenge-Handshake Authentication Protocol and is described in RFC2759.  A recent presentation by Moxie Marlinspike [1] has revealed a breakthrough which reduces the security of MS-CHAPv2 to a single DES encryption (2^56) regardless of the password length.  Today, we published Security Advisory 2743314 with recommendations to mitigate the effects of this issue.

Any potential attack would require a man in the middle situation in which a third party can get all the traffic between the client and authenticator during the authentication.

Without going into much detail about the MS-CHAPv2 protocol, we will just discuss the part that would be affected by this type of attack: the challenge and response authentication.  This is how the client responds to the challenge sent by the authenticator:

The authenticator sends a 16 byte challenge: CS

The client generates a 16 byte challenge: CC

The client hash the authenticator challenge, client challenge, username and create an 8 byte block: C

The client uses the MD4 algorithm to hash the password: H

The clients pad H with 5 null byte to obtain a block of 21 bytes and breaks it into 3 DES keys: K1,K2,K3.

The client encrypts the block C with each one of K1,K2 and K3 to create the response: R.

The client send back R,C and the username.

Or:

C=SHA1(CS,CC,UNAME)

P=MD4(PASSWORD)

K1|K2|K3=P|5 byte of 0

R=DES(K1,C)|DES(K2,C)|DES(K3,C)

There are several issues in this algorithm that combined together can result in the success of this type of attack.

First, all elements of the challenge and response beside the MD4 of the password are sent in clear over the wire or could be easily calculated from items that are sent over the wire. This means that for a man in the middle attacker, the gain of the password hash will be enough to re-authenticate.

Secondly, the key derivation is particularly weak. Padding with 5 bytes of zero means that the last DES key has only a key space of 2^16.

Lastly, the same plaintext is encrypted with K1 and K2, which means a single key search of 2^56 is enough to break both K1 and K2.

Once the attacker has K1, K2 and K3 he has the MD4 of the password which is enough to re-authenticate.

- Ali Rahbar, MSRC Engineering

Now, about that “Any potential attack would require a man in the middle situation in which a third party can get all the traffic between the client and authenticator during the authentication.” Isn’t that exactly the scenario that a secure authentication protocol is supposed to protect you against?
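
To make the weakness concrete, here is a minimal sketch of the response computation quoted above, in Python, assuming pycryptodome is installed and the local hashlib build still exposes MD4. The point to notice is that K3 contains only two unknown bytes (the rest is known zero padding), and K1 and K2 encrypt the same block, so a single 2^56 DES search recovers both:

```python
import hashlib
from Crypto.Cipher import DES  # pycryptodome

def expand_des_key(key7: bytes) -> bytes:
    # Spread the 56 key bits over 8 bytes; DES ignores the low (parity) bit of each byte.
    bits = int.from_bytes(key7, "big")
    return bytes((((bits >> (49 - 7 * i)) & 0x7F) << 1) for i in range(8))

def challenge_hash(peer_challenge: bytes, auth_challenge: bytes, username: bytes) -> bytes:
    # C: first 8 bytes of SHA1 over the two challenges and the username (RFC 2759 ChallengeHash)
    return hashlib.sha1(peer_challenge + auth_challenge + username).digest()[:8]

def nt_password_hash(password: str) -> bytes:
    # H: MD4 of the UTF-16LE password (may require OpenSSL's legacy provider for MD4)
    return hashlib.new("md4", password.encode("utf-16-le")).digest()

def mschapv2_response(password: str, peer_challenge: bytes,
                      auth_challenge: bytes, username: bytes) -> bytes:
    c = challenge_hash(peer_challenge, auth_challenge, username)
    padded = nt_password_hash(password) + b"\x00" * 5          # 16 + 5 = 21 bytes
    k1, k2, k3 = padded[0:7], padded[7:14], padded[14:21]
    # k3 = 2 unknown hash bytes + 5 zero bytes -> only 2^16 candidates;
    # k1 and k2 encrypt the same block c, so one 2^56 DES search breaks both.
    return b"".join(DES.new(expand_des_key(k), DES.MODE_ECB).encrypt(c)
                    for k in (k1, k2, k3))
```

Brute forcing K3 takes a fraction of a second and reveals the last two bytes of the password hash; a single 2^56 search over the shared plaintext then recovers K1 and K2, and with them the full hash needed to re-authenticate.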

Friday, August 17, 2012

A quick tale about a PMT

After my last post about PMTs, I remembered a situation (in a previous and distant life) when I worked in a financial institution’s security office. We were being hammered by Internal Audit about our controls around access provisioning. There were several cases where we couldn’t find the access request form (paper!) for adding users to domain groups. Of course, there was an Identity Management program promising to magically automate everything, but we needed something to address our needs until then.

So I created a simple PMT solution. We modified the Access database used to record the content of those access request forms to generate a text log file, used a Sysinternals tool to dump the Event Log from the PDC (well, it was some time ago…NT4 domains! :-O) to a text file, and I created a script that compared all access management events (creation of groups, creation of users, users added to groups) with the forms we had registered. Any deviations were then investigated by the team.
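
Something similar is trivial to script today. Here is a rough sketch of the same idea, assuming both the approved requests and the “user added to group” events have already been exported to CSV (the file names and column names below are made up):

```python
import csv

# Hypothetical exports, both plain CSV.
APPROVED = "approved_requests.csv"          # columns: request_id, account, group
EVENTS = "group_membership_events.csv"      # columns: timestamp, admin, account, group

def load_approved(path):
    # Build a set of (account, group) pairs that have a matching request form.
    with open(path, newline="") as f:
        return {(row["account"].lower(), row["group"].lower())
                for row in csv.DictReader(f)}

def find_deviations(approved, events_path):
    """Yield group membership events with no matching request form."""
    with open(events_path, newline="") as f:
        for ev in csv.DictReader(f):
            key = (ev["account"].lower(), ev["group"].lower())
            if key not in approved:
                yield ev

if __name__ == "__main__":
    approved = load_approved(APPROVED)
    for ev in find_deviations(approved, EVENTS):
        print(f"INVESTIGATE: {ev['timestamp']} {ev['admin']} added "
              f"{ev['account']} to {ev['group']} (no matching request)")
```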

It was fun to see how much was being done informally by the domain administrators. The new process forced new habits on them (such as immediately informing us any time they needed to do something that would appear in the logs), solved our problems with IA and didn’t cost a dollar (at least no green dollars). Considering the number of mistakes identified (honest mistakes, but ones that were granting excessive access rights), we actually reduced risk to the organization.

If a financial institution, which is normally more formal and process oriented, could do it, why can’t those solutions be useful everywhere else?

How to make rich men use poor man's tools?

I was reading this great post from Johannes Ullrich on the SANS ISC Diary (in which he describes a very nice and simple script for using DNS query logs as a malware detection resource) when I realized that there are tons of very nice tricks and solutions out there (normally described as “Poor Man’s Tools”, or PMTs) that are simply not used by medium and large organizations. I’ve seen it happen multiple times, and it normally goes like this:

1. Techie guy finds the solution and thinks: cool! He proposes it to middle management.

2. Middle management thinks:

a. “no way we will spend time and resources on this” OR

b. “it’s too simple to be good” OR

c. “I’ve never heard about this on those vendor webcasts so it’s not worth it” OR

d. “oh no, if I do this once the executives will deny all my budget requests expecting me to solve everything with things like this” OR

e. “it’s open source, that doesn’t work in an organization like ours” OR

f. “I can’t trust this thing, it doesn’t come from IBM/Microsoft/Oracle” OR

g. put your own stupid reason here

3. If by some miracle it moves up the food chain, it’s denied by higher management for one of the same reasons listed in #2.

So we end up with organizations struggling with problems that could be solved with those PMTs. I’m more than aware that some of those concerns, especially around maintenance costs, are not totally unfounded. But there are organizations that actually do these things, normally due to different cultures (universities, dot-com companies), and they are pretty successful at it. So, what could we do to change the way organizations deal with PMTs and increase their adoption?

I think we need to sell the idea of Simple Solutions Task Forces. Every IT group in a big enterprise, including Security (don’t even start by saying Security is not an IT group, there’s at least one piece of it that is), should have its own SSTF: people who look at problems and say “hey, we can actually fix that with this little script”. I’ve seen so many very expensive products that are nothing more than simple scripts disguised as pretty shiny boxes, so in the end the result may not be that different in terms of features, and the cost and time to deploy the solution can be really reduced. As it would be proposed and implemented by a specialized and formalized group, all the required precautions around documentation and support would be covered.

Another option would be to just create the framework for those solutions in the organization. Someone like those Standards and Methodologies groups would put together what is necessary for anyone to implement a PMT in the enterprise: a support and documentation model, a code repository, and minimum requirements for roles and responsibilities. With that available, anyone could champion a PMT implementation while providing the necessary assurance that it won’t become an unsupported black-box Frankenstein.

For my part, I’ve been thinking about assembling a crowdsourced security PMT repository to see if we can create some momentum and give these solutions a little more visibility and a chance to find their place in the sun. We know our problems, we have the tools; how about using them?
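
As a concrete example of the kind of PMT Ullrich describes, here is a minimal sketch (not his script; the log format, file name and threshold are assumptions) that flags domains queried by only one or two internal clients, a cheap starting point for hunting malware callbacks in DNS logs:

```python
from collections import defaultdict

# Assumed log format: one query per line, "<timestamp> <client_ip> <queried_domain>".
LOG_FILE = "dns_queries.log"
RARE_THRESHOLD = 2   # domains queried by this many clients or fewer get flagged

def rare_domains(path, threshold=RARE_THRESHOLD):
    clients_per_domain = defaultdict(set)
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3:
                continue
            client, domain = parts[1], parts[2].lower().rstrip(".")
            clients_per_domain[domain].add(client)
    return sorted((d, len(c)) for d, c in clients_per_domain.items()
                  if len(c) <= threshold)

if __name__ == "__main__":
    for domain, count in rare_domains(LOG_FILE):
        print(f"{count:3d} client(s) queried {domain} - worth a look")
```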

Wednesday, August 15, 2012

You don't need to be too concerned about the Cloud...

1. Because your firewall rules suck

2. Because you are not applying patches

3. Because your users are all administrators of their desktops

4. Because you trust those nice charts with HIGH/MEDIUM/LOWs

5. Because you have malware active in your network…

6. …and you can’t see what it is doing…

7. …but you think the next shiny box will solve it

Maybe when you fix those you can start worrying about whether the Cloud is secure enough for you.

Monday, July 16, 2012

Honeytokens being used in the real world

Very interesting case of honeytoken deployment in this Network World article today. Here's what they did:

Here's what happened. We use Salesforce.com as the single repository for information about all of our current customers, potential sales opportunities, sales forecasts and more. It's all highly sensitive material and not anything we'd like our competitors to get their hands on.

That's why one of our marketing executives was worried when she called me into her office earlier this week. She had received a marketing email from one of our competitors. The interesting thing about this email was that it was sent to all of the dummy, or "honey token," email accounts that we had set up in Salesforce for testing purposes. The implication was that the email had also gone to all of our legitimate customers and that this competitor somehow had gotten access to the information in our Salesforce deployment.

 

XaaS and cloud services in general are fertile terrain for honeytoken deployment. Don't forget them as tools to complement your DLP strategy!
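
Here is a minimal sketch of the detection side of that idea (the addresses, file name and log format are all made up): plant a few fake contacts in the SaaS application, then watch inbound mail for anything addressed to them.

```python
# Hypothetical honeytoken addresses planted as fake contacts in the CRM.
HONEYTOKENS = {
    "julia.tester@example.com",
    "m.decoy@example.com",
}

def honeytoken_hits(maillog_path):
    """Yield (line_no, line) for any mail log entry mentioning a honeytoken address.
    Assumes a plain-text log where recipient addresses appear verbatim."""
    with open(maillog_path) as f:
        for n, line in enumerate(f, 1):
            lowered = line.lower()
            if any(token in lowered for token in HONEYTOKENS):
                yield n, line.rstrip()

if __name__ == "__main__":
    for n, line in honeytoken_hits("inbound_mail.log"):
        print(f"ALERT line {n}: {line}")
```

A single hit on one of those addresses means the customer list has leaked somewhere, which is exactly the signal the company in the article got.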

Tuesday, July 10, 2012

Simple and effective

Although there's no hard evidence for any of the tips in the links below (and it would be nice to collect some!), I've always liked simple security interventions that can reduce risk without the associated cost of implementing new tools or processes. It was interesting to see, in the same week, two separate posts with "cheap" security measures that can help a lot of those who don't want to be the low-hanging fruit. Enjoy:

http://www.netspi.com/blog/2012/07/09/5-ways-to-find-systems-running-domain-admin-processes/

http://www.networkworld.com/research/2012/070912-10-crazy-it-security-tricks-260746.html?page=1

Tuesday, July 3, 2012

"We are not a target"

Yes, you are. Security professionals should be educating executives who make that mistaken assumption, helping them understand how valuable their IT infrastructure is by itself, no matter what data is there. Brick-and-mortar criminals steal fast cars to use when robbing a bank; it’s the same thing for servers on the Internet, email accounts, FTP and web sites. They might not be valuable for the data they hold, but they are valuable tools to be used in attacks against others.

Even when you consider malware such as Flame or Stuxnet, it can still cause you problems (IT downtime being the most common issue) even if you are not the original target, as most malware doesn’t include checks to confirm it is running only on its intended targets. Even silly stuff, like samples created to steal World of Warcraft credentials, will still affect your systems and can cause issues. Even if it is “benign” for you, it’s someone else’s (and someone not trustworthy at all) code running on your computers.

So, forget about “We’re not a target”. Even if you are not because of your data, you still are just because you are connected.

Tuesday, June 26, 2012

Here it is, as expected: transaction poisoning

Why can't we coin cool names like that?

From the McAfee report on the "Operation High Roller":

 

"In addition, we observed a scheme known as “transaction poisoning” that targeted a well-known online escrow company. Rather than initiating new wire transactions on behalf of infected victims, the scheme would silently modify transactions initiated by the legitimate account holder. The original transactions were intended to go from a North American account to a recipient in the United Kingdom to fund an escrow account for auctioned vehicles. Instead, the funds were diverted to a mule account (see Figures 6 & 7).

This attack used a remote script that injected the necessary information behind the legitimate data, so the fraudulent transfer was invisible to the account holder. The script altered the following fields:

1. Bank Name

2. Sort Code

3. Swift Code

4. IBAN code

5. Account Number

6. Beneficiary Address"

 

We saw it coming. That's a very efficient way to deal with banks that apply two-factor authentication to each transaction. 

Wednesday, May 30, 2012

Flame, exactly how we predicted 5 years ago

This week the news is all about Flame, the father of all malware. There are several interesting posts and code analyses floating around about it, but what I want to highlight is how Flame follows the evolution pattern my friends Victor and Fucs and I presented back in 2007 at Black Hat Europe. Some of Flame's characteristics that we talked about at that time are:

- Modular architecture: we said "The payload, the part of the bot that is responsible for its "features", can also be developed as a separate layer. It would be composed by several features modules, which receive the commands from the command layer. The bot can just download a new feature module, that is programmed to receive its parameters through a defined API"

- Script language: from our paper: "If the botnet master's objective is to avoid transferring executable binaries while maintaining the ability to have flexible bots with extensible functionality, there is also the option of using script languages."

 

Flame was designed to allow updates for its exploits. 

Thursday, May 24, 2012

Browsers and malware

So, "Google Chrome Just Passed Internet Explorer To Become The World's Most Popular Web Browser". What does that mean for security?
I think that, putting aside the ever-present privacy concerns whenever the name "Google" is involved, it's good to have a browser with a security-conscious design being widely adopted. However, I think the interesting part is to consider the data below together with some other factors:


Source: StatCounter Global Stats - Browser Market Share
Have you noticed that browser vulnerabilities are not the key vectors being used by attackers to compromise end users anymore? The culprits of the day are Adobe Flash and Acrobat Reader. And it's easy to understand why.
If we look at the graph above we'll notice there's no browser with more than 40% of the market. So if you are an attacker and you want to write malware that hits a bigger chunk of the "victim space", it's not a good strategy to use browser-specific exploits, such as those related to MS12-010. Wouldn't it be better to target something that runs on 99% of PCs (considering that PCs are still the major malware target), or even 73%? Those are Adobe Flash and Java, respectively.
Whenever there's monoculture, there is increased security risk, as Dan Geer has been saying for years:

In biology, a monoculture--a singular species that supplants all others--is a bad thing. When every plant is the same species, every plant is susceptible to the same predators, the same diseases. Examples are as plentiful as they are sad: Consider the virus that brought on the Irish potato famine or the boll weevil that nearly obliterated the South's cotton crop in the early 20th century, and you see the destruction that human-made monocultures bring upon themselves.
Computers are no different. Computer viruses spread efficiently, lethally when all computers on a network run the same software. MyDoom, Melissa and MSBlast were a function not of the Internet, but of a Windows monoculture. They caused havoc because they were designed for specific vulnerabilities of Windows. Since one virus generally affects one species of software, any computing monoculture poses a hazard the same way it does in nature.

As always, everything old is new again. Geer was talking about document formats at that time; now the discussion is around active web content. But there is hope: HTML5 has been seen as something that will allow the diversification necessary to reduce the risk: no more single piece of software necessary to browse the web, no dependence on specific operating systems or platforms.
But for this specific case, there is a catch: HTML5 is so powerful that there's a risk it becomes not only an attack vector, but a new species by itself, a huge new monoculture. Things like the WebSocket API could make it the new One, the One to rule them all, the One to bring them all and in the darkness bind them (Yes! I did it! I quoted Lord of the Rings :-)). Cross-platform malware is the new rising threat, leveraging HTML5 features to exploit PC, Mac, iOS and Android.
The prospect (temptation?) of malware that can potentially run on all those platforms is certainly drawing the attention of all sorts of colored hats. JavaScript worms have been a reality for a long time, so there's really no reason to believe HTML5 malware won't be a rising issue in the near future. Trend Micro's Robert McArdle wrote a very nice piece about HTML5 attack scenarios that illustrates our future challenges around it.
So be it. The browser monoculture is dead (at least for now - keep an eye on Chrome's rising trend!). Long live the HTML5 monoculture!

Friday, May 18, 2012

Which tool to pick?

A friend of mine sent me an e-mail asking for my opinion on some tools for a DRP (Disaster Recovery Planning) project. It’s a subject I haven’t touched in a long time, but in the end the thought process around his question ended up being so interesting from a security planning perspective that I thought it could be good material for a post.

 

He asked me about two specific tools, LDRPS and Archer. We had a good experience with LDRPS when we worked together on a BCP/DRP project a few years ago, and someone had suggested Archer to him. As I said above, it’s been a long time since I worked with BCP processes, but I spent a few minutes researching the current state of those tools in order to give him a decent opinion.

 

The interesting aspect of his question is that it replicates a very common dilemma we face when developing tool roadmaps and architectures: best of breed vs. generic solution.

 

I haven’t put my hands on those tools recently, but I’m certain that LDRPS is better than Archer in a simple feature-by-feature comparison. LDRPS was developed by Strohl, later acquired by Sungard, two companies specialized in availability services. It’s used by a lot of Fortune 500 companies and has been evolving for literally decades.

 

Archer, on the other hand, is a GRC tool that happens to have a BCP module. It’s a tool to solve a broader variety of problems than LDRPS, and I bet that it won’t have all the bells and whistles LDRPS has for developing and testing disaster and business continuity plans. But (and there is always a but)…

 

Archer’s wider scope can be the source of its weakness in this case, but it’s also its major strength. There are a lot of common steps and similarities between the BCP/DRP processes and processes supported by other Archer modules, such as Risk Management, Compliance Management and Vendor Management. For all of these it’s necessary to identify data, assets, locations and other components of the organization, and to establish ownership, value/impact and interdependencies. And that’s what could make Archer the best pick for my friend. Depending on the organization’s strategy for those other processes, they might be able to leverage work already done or reuse the data gathered for the BCP project. They may end up with a tool that is not the best available for developing Business Continuity and Disaster Recovery plans, but they might get more value by leveraging the data obtained during that project on other fronts.

 

Integration and data sharing is one of the key aspects of a successful security strategy. Good security architects and managers will always consider that when choosing the tools to implement that strategy.

 

Wednesday, May 16, 2012

Grimes article on firewalls

It’s always interesting when an article or blog post generates multiple responses from the security blogosphere; it lets us gauge the general opinion of a particular idea or concept. It was no different with this post from Roger Grimes, “Why you don’t need a firewall”. It sounds very similar to the general rationale behind the Jericho project, but those guys have clearly stated that the firewall doesn’t have to be removed; it just assumes a smaller role in the new security strategy.

There are similar opinions about the article here, here, here and here. Some different spins, but the general understanding is that the firewall is not a silver bullet, though it has its uses. The most important thing to consider when assessing a firewall’s value is to understand the value of choke points:

In military strategy, a choke point (or chokepoint) is a geographical feature on land such as a valley, defile or a bridge, or at sea such as a strait which an armed force is forced to pass, sometimes on a substantially narrower front, and therefore greatly decreasing its combat power, in order to reach its objective. A choke point would allow a numerically inferior defending force to successfully prevent a larger opponent because the attacker would not be able to bring his superior numbers to bear.

(from the “Choke Point” Wikipedia entry)

Firewalls are also valuable enablers of other security tools, such as IPS/IDS and deep packet inspection systems. Deploying those systems behind firewalls reduces the amount of data to be inspected and the number of events generated for investigation, reducing the capital (hardware) and operational (people) costs of those controls. There are some decent metrics out there for sizing deployments of those tools based on the amount of traffic being monitored, so it should be straightforward to factor them into a cost/benefit analysis for firewalls.
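
As a toy illustration of that cost/benefit arithmetic (all the numbers below are invented), here is a sketch comparing sensor counts with and without a firewall reducing the traffic to be inspected:

```python
import math

# Invented inputs for illustration only.
PEAK_TRAFFIC_GBPS = 20.0        # traffic arriving at the perimeter
FIREWALL_DROP_RATIO = 0.6       # share of traffic the firewall discards before the IDS
SENSOR_CAPACITY_GBPS = 4.0      # throughput one IDS sensor can inspect
SENSOR_COST = 50_000            # capital cost per sensor

def sensors_needed(gbps):
    return math.ceil(gbps / SENSOR_CAPACITY_GBPS)

without_fw = sensors_needed(PEAK_TRAFFIC_GBPS)
with_fw = sensors_needed(PEAK_TRAFFIC_GBPS * (1 - FIREWALL_DROP_RATIO))

print(f"Sensors without firewall: {without_fw} (~${without_fw * SENSOR_COST:,})")
print(f"Sensors behind firewall:  {with_fw} (~${with_fw * SENSOR_COST:,})")
```

Plug in your own traffic profile and costs; the point is simply that the savings enabled by the choke point are easy to quantify.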

One can argue that we should also consider the additional costs of the firewall deployment itself, but the controls above are just one example of things that will cost less because of (well-managed) firewalls. Those reductions add up to a point where not having firewalls is just a very bad business decision.

Tuesday, May 15, 2012

PCI DSS overhaul necessary

First, I have to admit this post should have been submitted to the PCI Security Standards Council as part of the DSS feedback phase that has just closed, but my concerns are related to the core structure and format of the standard. I believe it wouldn’t have made any difference to submit them, as the latest changes to PCI DSS have been more incremental in nature. Anyway, that’s my mea culpa for delivering criticism without contributing to improving the standard.

I’ve been working exclusively with PCI during the past year. Being involved in remediation activities and not in assessments gives us a direct view of the challenges of getting the standard fully implemented, even considering it a “bare minimum” in terms of security.

 In short, PCI DSS must be more effective: It has to shift from a simple list of controls to an outcome based system.

All PCI requirements carry the same weight for organizations trying to achieve compliance. There was an initiative from the Council called “the prioritized approach”, but it’s more a roadmap towards compliance than a risk-based model. It says that “The roadmap helps to prioritize efforts to achieve compliance, establish milestones, lower the risk of cardholder data breaches sooner in the compliance process”. It tells you what should be tackled first, but at no point does it mean that you don’t need to work on the items at the end of the list. So, when full compliance is the end goal, having clear roles and responsibilities expressed in the security policy is as important as ensuring that Internet-facing web applications are not vulnerable.

 I won’t argue about the importance of specific controls in the standard, but clearly some key deficiencies are directly related to a big chunk of the breaches, so the standard should be tuned to put more emphasis on those controls while allowing organizations to deal with the other items according to their own internal prioritization, planning and even culture. An organization with strong yet informal controls, for example, can only consider those controls in place for PCI after formalizing them, driving resources away from other areas that carry more risk and that might need improvement.

PCI DSS also stifles innovation by forcing organizations to apply a set of “best practices” that could otherwise be replaced by more modern ones. Imagine a scenario where you are working to improve your controls over data traversing your network perimeter. A lot of interesting approaches and technologies are currently evolving and being discussed in our field, such as application-aware Next Generation Firewalls, DLP systems, network behavior analysis tools, and having an active security monitoring group that understands what should and should not be there. However, if PCI compliance is one of your priorities, you had better put all those things aside and start putting together extensive documentation about the ports, protocols and rules in your environment (and keep it updated!). The DSS seems to be written for low-complexity networks, with just a few entry points and a very small number of available services and ongoing connections. You need to have all of them documented and keep that documentation up to date. No wonder the card issuers (i.e., banks), who have far more complex networks than the average merchant, are still trying to keep a healthy distance from PCI.

Validation also needs to be reviewed. As the reporting instructions are public, organizations are tailoring their compliance efforts to what the QSA will look for, not to meet the requirements’ intent. A lot of documents are created as empty shells and placeholders, just because some processes and procedures have to be documented. The QSA has limited time to go through all those documents in a very short engagement with limited resources (heavy competition among QSAs reduces their ability to charge decently for those assessments), reducing the time available to check the things that really matter. Couldn’t the assessment be changed from a control checklist to something more outcome based? Couldn’t the ASV scanning and pentest requirements be integrated into a single continuous assessment framework that checks the outcomes of the security processes organizations choose to put in place to keep some key defined metrics under control?

That’s a lot of food for thought. I’m not holding my breath for any exciting changes from the current review cycle. In fact, I’m expecting more of the same: additional layers of controls to keep the compliance wheel of pain spinning fast. Here we go for another lap.

(I know about compensating controls, scope reduction and other things that can be done to make PCI DSS compliance more “manageable”. Although I agree they are useful tools, I don’t think they are enough, nor are they well defined and understood within the QSA community. Today it’s easy for an organization to just hop from one QSA to another looking for someone who “likes” their approach to those items. There are so many different opinions out there that you can always find a QSA who will agree with anything you want to do.)

Wednesday, May 9, 2012

Why does PCI-DSS (and other standards) suck?

From: The Six Enemies of Greatness (and Happiness) - Forbes
Just check item number #3:

 3) Committees
Nothing destroys a good idea faster than a mandatory consensus. The lowest common denominator is never a high standard.
Standards like PCI are always created by Committees. Unfortunately, as this nice article says, "the lowest common denominator is never a high standard".

Tuesday, May 8, 2012

Adding context - tech jobs

Professionals starting out in network security (or any other specialized IT job) are often concerned with improving their skills and knowledge of networking and of the products and gear they spend most of their time with. Although it’s extremely important to know the technology you work with, it’s also very important to learn at least a little about all the other technologies you may find in the IT environments you are (and will be) dealing with. Even very basic tasks such as defining or reviewing firewall rules are challenging when there’s no context available. I’m tired of seeing people with stupid hardwired rules in their minds (HTTPS is good; FTP is bad; and so on…) struggling to understand why a specific control is in place, or swallowing stupid justifications such as “we need port 80 open both ways (bi-directional – ugh) for this app to work” just because they know nothing about any technology or process that is not directly related to their job descriptions.

Almost all security professionals learn that the Business defines Security, and not the opposite. However, few are able to tell you how to transform that piece of wisdom into practical advice. So here it is: learn about what the organization is doing:

· What do the “business people” do?

· Which applications do they use?

· How do those applications work? What kind of data, architecture and protocols are involved?

· What’s the data flow for the business? What are people’s roles in the business process?

There’s plenty to learn from the other IT silos too, such as:

· What is running on all these servers? What do all these applications and middleware do?

· How are the operations teams doing their jobs? How do they access and connect to servers and applications? Jump boxes? Shared IDs?

Learning about how the organization works is as important as learning more about security. You’ll find out which issues are easy to fix, which process deficiencies will keep spitting out vulnerabilities, and how controls will or will not work. Security is usually not part of other people’s core job descriptions, so don’t expect them to go the extra mile to understand how security should be done in their context. If you want it to work, get that context yourself and apply your security knowledge to it. You’ll be far more effective and, surprisingly, they will listen once you start to sound like you know what they do.

Tuesday, March 13, 2012

MS12-020

It’s been some time since I wrote anything related to specific vulnerabilities, but MS12-020 is quite an interesting one. It allows remote unauthenticated exploitation of the RDP server on Windows.

 

Let’s keep in mind that since Windows 2000 we’ve been pushing organizations to migrate from stuff like Dameware, VNC and PCAnywhere to Terminal Services, as it is a native service with decent authentication and encryption. Due to remote access and support requirements there are lots of firewalls out there with a hole for TCP 3389, leaving a lot of servers exposed to the Internet. The list of vulnerable Windows versions also indicates the vulnerability is in a piece of code that has been around for some time, so if you are running unsupported Windows 2000 boxes or older Service Packs of Windows XP/2003, keep in mind that you may have a huge hole in your systems with no fix to apply. Time for an upgrade?
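
While waiting to patch or upgrade, it’s worth verifying your own exposure. Here is a minimal sketch that checks whether TCP 3389 answers from the outside (the host list is a placeholder; only run this against addresses you own):

```python
import socket

# Placeholder list of externally reachable addresses to verify.
HOSTS = ["203.0.113.10", "203.0.113.11"]
RDP_PORT = 3389

def rdp_open(host, port=RDP_PORT, timeout=3.0):
    """Return True if a TCP connection to the RDP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    status = "EXPOSED" if rdp_open(host) else "closed/filtered"
    print(f"{host}:{RDP_PORT} {status}")
```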

 

We’ll see how bad the exploitation of this will be in the next few weeks; I can see it as a big opportunity for worm and botnet developers.

 

UPDATE: Not only is this really becoming big news, with rumours of exploits under development getting stronger, but there is also some interesting news about the source of the data used in the first PoCs circulating on the web. It seems that PoC code developed/used by the Microsoft Security Response Center (MSRC) is actually the exploit found on a Chinese web site. Just imagine what it would mean for the whole cyber-war/cyber-espionage debate if we found out that organizations like the MSRC have been compromised and details about 0-days are actively being stolen from them. Creepy.

Thursday, March 1, 2012

You don't have to always be the bad guy

So, Zenprise is saying that most of their clients are buying Mobile Device Management (MDM) tools to block stuff such as Angry Birds and Facebook, due to productivity issues, instead of doing real security work.

If you’ve ever had to manage web content filtering tools you know how it works. Some manager gets mad because he sees an employee browsing his Facebook or Twitter timeline at work and decides that Security has to block those productivity-killing nightmares. Security is always blocking stuff, right? Why wouldn’t they block that too?

Because Security already has a hard time trying not to look like Mordac. Yes, sometimes we have to block stuff due to security risks, but that doesn’t mean we should also be responsible for blocking stuff for other reasons. In order to be included in the early phases of business and IT initiatives, we are constantly trying to change our image from the guys who prevent anything from happening to business enablers. How can we do that if we keep wearing all those control-freak hats?

Security has to either say no to whoever is asking to block stuff not related to security threats, or demand that those actions be clearly defined as policies from other groups, such as HR. Even if the tools used for those controls are the same ones used for security reasons and are operated by the Security team, the reasons for blocking stuff unrelated to security should be clearly stated, and the processes to request exceptions or changes to the policy should be detached from those used for security. Even the risk assessment of those requests is different, so why would we do it the same way (and with the same people)?

Maybe those draconian policies are being used to justify the money spent on all those shiny tools: some classic security theater. If users see that huge STOP! sign every time they try to access a website they will certainly think the network is really secure, right? :-)

Wednesday, February 29, 2012

One Size Fits None

One of the trendy topics of current security discussions is BYOD (or the less sexy term, “IT consumerization”). It’s good to see the topic being discussed, but the way those discussions flow is what really concerns me. How come we’re still asking questions such as “should we allow it?”, “how do we protect those devices?”, and so on? It’s the same thing for cloud services or any other new IT thing: we’re always asking whether we should allow it and how we’ll protect it. I see those questions basically as:

- Will we give a sample of our infinite power and deny the users’ requests?

- How will we manage to make this thing useless through death by a thousand controls?

These situations always remind me of our dear Mordac. No wonder our actions inspired such a nice Scott Adams character.

What I would love to see us doing with all this new stuff coming to our environments is getting rid of the one-size-fits-all approach. We keep applying this allow/deny thing to everyone, without considering the different needs of the different types of users (OK, the label is almost derogatory nowadays, but whatever) we deal with. Even worse, we don’t consider the different data those users have access to.

What I mean is that security must be a bit more ADAPTIVE. There’s no point in applying the same level of control to the iPhone of someone without access to sensitive information who just wants to read email on it as to the CEO’s iPhone during a big M&A process. Can’t you see how context changes everything?

It wouldn’t be a matter of applying controls or not anymore, but of how much control to apply, based on several context variables. For that, our controls should be designed to take those variables into consideration: classification labels for people, data, applications, locations, networks and hosts should be used by enforcement points and controls to provide an adaptive set of security measures aligned to each situation.

Don’t think I’m dreaming too big and being unrealistic. There are tools available for this now and more are becoming available all the time. You can buy Adaptive Authentication systems right now, and a lot of SIEM tools already allow you to use different data sources to apply context to your security monitoring processes. Policy-based remote access controls (applying restrictions when connected to the corporate network, for example) and different personal firewall profiles based on location are standard features in lots of security products. We just need to keep expanding on that, considering new technologies that enable us to do it, such as IF-MAP, OpenFlow and RMS, when designing our security architecture.

Some other cool products and technologies that allow organizations to apply context-based security controls:

- Data Classification

- Adaptive Authentication

- Identity-based security monitoring

- Next Generation firewalls

Now, what we’re still missing is a management/orchestration layer on top of all that. Some big vendors with solutions in multiple domains have some integration in place providing limited centralized policy definition, but there’s nothing out there capable of controlling such a diverse ecosystem of applications. We still have some work ahead of us to translate “top executives can read their email on their phones, but with limitations based on the classification of the data and the country where they are” into settings for stuff like firewalls, authentication systems and email servers.
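
As a toy sketch of what that translation could look like at a single decision point (the labels, attributes and rules below are all invented, not any product’s policy model):

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    role: str          # e.g. "executive", "staff"
    data_label: str    # e.g. "public", "internal", "restricted"
    device: str        # e.g. "managed_laptop", "byod_phone"
    location: str      # e.g. "corp_network", "home", "high_risk_country"

def controls_for(ctx):
    """Return the set of controls an enforcement point should apply."""
    controls = {"authenticate"}
    if ctx.data_label == "restricted":
        controls |= {"strong_auth", "block_local_copy"}
    if ctx.device == "byod_phone":
        controls |= {"container_app", "block_attachment_download"}
    if ctx.location == "high_risk_country":
        controls |= {"strong_auth", "read_only"}
    if ctx.role == "executive" and ctx.data_label == "restricted":
        controls.add("alert_soc")
    return controls

# Example: an executive reading restricted data on a personal phone abroad
print(controls_for(AccessContext("executive", "restricted", "byod_phone", "high_risk_country")))
```

The hard part is not writing rules like these, it’s getting every enforcement point (firewalls, authentication systems, email servers) to consume the same labels and return consistent decisions.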

Thursday, February 16, 2012

Great research

Good research and a well-written report; it's always good to read something like this:

http://resources.infosecinstitute.com/ghost-domain-names/

Friday, January 20, 2012

The non-critical stuff

For HBGary, it was a less important website. For CardSystems, it was just a research database, not the critical payment processing systems. For Heartland, it was also a minor web application. RSA’s initial compromise point was an end-user workstation.

As we can see, big breaches don’t necessarily happen through an organization’s most important systems. That’s actually quite similar to security breaches in the physical world: it’s not common to see the attacker coming through the front door.

Even with that in mind, security decisions are still being made to protect the critical systems only, which is normally seen as appropriate “Risk Management”. I have no doubt we should protect critical systems first, but we also need to make executives aware that attackers are not picky about their targets. If they find what they are looking for (passwords, credit card numbers) in secondary, less important systems, they’re still happy with the outcome. And the breach will still be quite damaging; it doesn’t matter that they didn’t reach your critical systems. They got what they wanted from somewhere else, and for everyone else it was just “they got it from your network”. It doesn’t matter which system it was.

Even if there’s no valuable data on secondary systems (are you really, really sure about that??), they can still be used as bridgeheads for attacks against the major data repositories. So, pay attention to your compartmentalization strategy (are those different levels really segregated from each other?) and your network-wide monitoring capabilities. Those secondary systems may not be part of critical processes or responsible for any of your revenue, but they are still juicy targets for whoever is interested in your data.