Thursday, December 1, 2011

Complexity

Complexity is always a key factor in security decisions. In general, less complexity means more security, as simple is usually easier to protect than complex. A few days ago I read something about cloud and security (again :-)), something along the lines of CSOs concerned that cloud means more complexity, so it must be insecure. Well, an interesting thing about complexity is that it doesn’t necessarily make things harder; generally it doesn’t matter how complex the entire system is, but how much of that complexity affects you and your ability to provide security.

 

Take, for instance, ABS brakes; they are far more complex than plain brakes, yet they provide more security. Still on cars, electronic fuel injection is more complex but easier to operate, at least from the driver’s perspective. The same goes for fly-by-wire systems and many others; they are more complex, but they reduce the complexity presented to the operator of the system. When that happens, the device or system becomes easier to handle, reducing the opportunity for human mistakes. There are more moving parts (and more parts that can fail), but the operator has to handle fewer variables.

 

Cloud computing is the same thing; highly complex environments such as Amazon EC2 will make a lot of things easier for you. You’ll have to directly handle fewer security aspects than you would by controlling your own data center, servers, etc. The security issues related to those components are still there, but they are being managed by someone else, who is probably relying on heavy automation to make this new system viable. That someone else can be Amazon; the maker of your car; Microsoft or Apple, for your Operating Systems; and so on.

 

Just as ABS brakes were first introduced in Formula 1 cars and later came to the “end user”, the same thing happens with computing technology. I remember when electronic injection cars were being introduced; a lot of car aficionados would complain that they were losing control to those little computer boxes that couldn’t be as good as the old carburetor they could fine tune by ear. Cloud computing has been maturing for quite some time, and is now being adopted by end users. The complexity is still there, but it’s so well managed that what the end user perceives is a less complex system. From a security point of view, a system that is easier to protect.

 

ABS, electronic injection, fly-by-wire: all those systems are trade-offs, relying more on technology and automation to reduce the complexity presented to the human operator. It’s a fact, proven by the numbers, that it works for those technologies. Does it work for IT security?

 

Thursday, November 24, 2011

Monitoring the Policy

I noticed an interesting thing about security policies the last time I started a new job. Every time I start with a new company I read the entire Security Policy. (It should be required reading for anyone in a security job at an organization, but it’s impressive that I usually end up becoming the “Security Policy Authority” after that exercise, just because nobody else bothers going through it :-)). The impression is generally that a good set of security controls is in place. However, as time passes, I start to see the exceptions, the new controls that are still being implemented, the legacy stuff that should have been retired but is still lingering around, etc. It always takes time to understand the gap between the policy and its current implementation.

After seeing that so many times, I wonder: why aren’t organizations monitoring the policy implementation? In fact, it should be one of their key metrics! You measure your policy against the threat landscape and your risk appetite, then check whether that policy is in fact being enforced.

Unrealistic expectations about the implementation of the security policy are extremely common. The executive sees a document to be approved and signs it. It probably feels much like Capt. Picard saying his famous “Engage!”. But after that, he doesn’t pay attention to the huge number of exceptions granted, or simply delegates that process, in such a way that what’s on paper ends up being very far from what’s actually being done. An Internal Audit department might be able to help, but I’m not just making the point of verifying it; I’m talking about actively monitoring it as a guidance metric. Audit usually doesn’t go that far. I’m also talking about benchmarking different Lines of Business and Technologies, in a way that a CISO would be able to understand where he’s getting more support from, who is resisting the implementation of the policy and even whether it makes more sense to drive investments toward more enforcement or toward deploying additional controls. I think some would call it an “actionable metric”.

I’m interested in hearing from people who are currently monitoring their security policy implementation level. How are you doing it? How are you using that data? Are any tools being used (maybe GRC)?

Wednesday, November 23, 2011

Log reviews and PCI

There are two ways to automate log reviews. There's the common approach:

 

Buy a product with PCI Compliance reports, check the box for each of those, send the reports by email to someone who will say they are being reviewed. Done.

 

A lot of organizations do that, but it's really just checkbox compliance with the standard and does not add anything in terms of security value. Ask yourself, what are those "PCI Compliance Reports"? How can someone know what needs to be reviewed in our logs if the standard itself does not specify that?

The other way can use the same product mentioned above, but in this case you have real people (with knowledge about what's in those logs and what you need to look for) writing the rules for alerts and reports. A process for periodic review of those conditions is also necessary.
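Just to make that concrete, here's a minimal sketch of what a hand-written review rule could look like. The log path, message formats and thresholds below are assumptions for illustration, not anything the standard prescribes; the point is that a person decided what matters.

```python
import re
from collections import Counter

# Hypothetical conditions a reviewer decided matter for this environment:
# repeated authentication failures and any direct root login.
FAILED_LOGIN = re.compile(r"Failed password for (?P<user>\S+) from (?P<ip>\S+)")
ROOT_LOGIN = re.compile(r"Accepted \S+ for root from (?P<ip>\S+)")

def review(log_lines, fail_threshold=10):
    """Return alert strings for conditions worth a human's attention."""
    alerts = []
    failures = Counter()
    for line in log_lines:
        if m := FAILED_LOGIN.search(line):
            failures[m.group("ip")] += 1
        if m := ROOT_LOGIN.search(line):
            alerts.append(f"direct root login from {m.group('ip')}: {line.strip()}")
    alerts += [f"{n} failed logins from {ip}" for ip, n in failures.items()
               if n >= fail_threshold]
    return alerts

if __name__ == "__main__":
    with open("/var/log/auth.log") as f:  # the path is an assumption
        for alert in review(f):
            print(alert)
```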

There's no "Enable PCI" solution for log review. Only dumb QSAs buy that.

Tuesday, November 22, 2011

Policy exceptions

Michelle Kinger has a very good post at Infosec Island talking about the harm caused by exceptions to security policies. I also mentioned that in my unrealistic expectations posts.

There are many discussions about security and risk metrics, but it’s rare to see anyone mentioning anything to control the number of exceptions granted; a key indicator for any security program should be the ratio of exceptions granted to exceptions revoked, together with the stock of open exceptions. If your chart always trends upward, it’s time to review the policy or the incentives for people to follow it. Having a policy with good controls that no one adheres to is just the same as having no controls, with the downside of giving a false perception of your security state.
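A quick sketch of how that indicator could be computed, assuming you can export the exception register as granted/revoked dates (the records below are made up):

```python
from datetime import date

# Hypothetical export of the exception register: (granted_on, revoked_on or None)
exceptions = [
    (date(2011, 1, 10), date(2011, 6, 1)),
    (date(2011, 3, 2), None),
    (date(2011, 7, 15), None),
    (date(2011, 9, 9), None),
]

def metrics(records, period_start, period_end):
    granted = sum(period_start <= g <= period_end for g, _ in records)
    revoked = sum(r is not None and period_start <= r <= period_end for _, r in records)
    # "stock" = exceptions granted by the end of the period and still open
    stock = sum(r is None or r > period_end for g, r in records if g <= period_end)
    ratio = granted / revoked if revoked else float("inf")
    return granted, revoked, ratio, stock

g, r, ratio, stock = metrics(exceptions, date(2011, 1, 1), date(2011, 12, 31))
print(f"granted={g} revoked={r} ratio={ratio:.1f} open stock={stock}")
```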

Friday, November 4, 2011

Security by virtualization: where is the secure OS?

I can’t disagree with Simon Crosby when he says “virtualization holds a key to better security”. Isolation is the basic security building block here, being achieved by virtualization. And that just makes me sad. Relying on virtualization for that just shows how unsuccessful we’ve been at building decent Operating Systems.

Operating Systems are generally built with the isolation concept in mind, trying to prevent one application from interfering with others. Almost all modern OSes have that concept as part of their design goals. Yet we go to the extent of wasting resources to duplicate the OS and emulate the hardware layer for each virtual machine. Really, can anyone tell me why we would have to rely on virtualization for isolation if Operating Systems were capable of providing it?

Friday, October 28, 2011

OpenFlow

Very good summary of what OpenFlow means to security by my friend Fernando.

The interesting part in his post is this one:

“Well, for all the power that OpenFlow offers, it can still only visualize flows in the context of L2-L4 attributes: what port is connected, what the IP address is, what protocol, etc... In the meantime, it comes as no surprise to anyone that the threat profile has long since changed to the application layers, exploiting Adobe PDF, Flash, SQL Injections, Cross-Site Scriptings, ... To me, what this will mean is that these higher-layer security controls - be they Web Application Firewalls (WAFs), Data Loss Prevention (DLP), Network Forensics, Host Security Agents ... - still need to intercept and inspect traffic.” 

That’s true, but the real value from OpenFlow is how it allows us to perform security interventions dynamically; you’ll still need to inspect traffic at higher layers to find trouble, but once you’ve found reasons to believe there’s malicious activity going on, OpenFlow can be used to selectively add more inspection capabilities and apply damage-control measures.
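As a rough sketch of what that could look like with Open vSwitch's OpenFlow tooling (the bridge name, port numbers and the decision that a host is suspicious are all assumptions here, and a real deployment would push flows through a controller rather than ad-hoc ovs-ofctl calls):

```python
import subprocess

def mirror_to_inspection(bridge, suspect_ip, inspection_port, priority=200):
    """Push a flow that copies a suspect host's traffic to an inspection port
    while still forwarding it normally. Requires ovs-ofctl on this host."""
    flow = (f"priority={priority},ip,nw_src={suspect_ip},"
            f"actions=output:{inspection_port},NORMAL")
    subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)

def quarantine(bridge, suspect_ip, priority=300):
    """Damage control: drop everything coming from the suspect host."""
    flow = f"priority={priority},ip,nw_src={suspect_ip},actions=drop"
    subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)

# Example: after a WAF/IDS at a higher layer flags 10.0.0.42 ...
# mirror_to_inspection("br0", "10.0.0.42", inspection_port=5)
# quarantine("br0", "10.0.0.42")
```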

It’s always good to add new tools to our arsenal. The bad guys are far ahead in that respect, so we’d better start thinking about improving our instrumentation capabilities too.

1 Raindrop: Assurance of Assessments

An assessment is supposed to go up to the dart board and check to see if you got a bulls eye or how close you got. Having people throw darts and then going up to the board and drawing a bullseye around where the dart lands isn't helpful.

This kind of assessment is worse than useless, it's harmful; it's like giving people umbrellas and taking them back when it rains. Being insecure is not the biggest problem: you can be insecure, know you are insecure and act accordingly. As Brian Snow said, the most dangerous stance is to assume you are secure when in fact you are not secure.

This is really an awesome post from Gunnar Peterson. I work with PCI every day and I can tell you that poor assessments, either the official QSA ones or the internal ones performed by organizations trying to achieve PCI DSS compliance, are the main reason why PCI does not bring as much security as we expect. It's the land of cognitive dissonance where everybody thinks they are doing a great job just because the assessor said so.

Thursday, October 27, 2011

Old stuff, always good to keep in mind

I'm happy to see how the security community is realizing the importance of detection and monitoring. I've been reading a lot of good stuff recently, but as there's a lot of "re-discovering" happening, it's important to know the results of research done in the past to avoid falling into the same mistakes. That's why it's so important for whoever is thinking about security monitoring to consider the "base-rate fallacy". This paper by Axelsson dates back to 1999, but the basic idea is still valid and must always be considered when we are designing a detection system.

I won't write about it here; you can read it directly in Axelsson's paper. The basic lesson is not to spend too much time trying to find every possible attack; the most important thing is to reduce false positives as much as possible. Otherwise, you'll end up with a huge team looking for needles in mountains of hay.
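Just to keep the arithmetic handy, here's the base-rate calculation as a tiny script; the detection rate, false positive rate and attack prevalence are illustrative numbers only:

```python
def p_intrusion_given_alarm(p_intrusion, detection_rate, false_positive_rate):
    """Bayes' theorem: how much an alarm is actually worth."""
    p_alarm = detection_rate * p_intrusion + false_positive_rate * (1 - p_intrusion)
    return detection_rate * p_intrusion / p_alarm

# 1 in 100,000 events is an attack, the detector catches 99% of attacks,
# and raises a false alarm on only 0.1% of benign events.
print(p_intrusion_given_alarm(1e-5, 0.99, 0.001))  # ~0.0098: ~99% of alarms are false
```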

Automation and security

There is another great post by Brian Krebs on his blog today, about APT. However, the best part of it is a quote from Cisco's Gavin Reid:

“One of the areas where we’ve failed as a security community is that we’ve got an over-reliance on automation,” Reid said. “We’ve sold this idea that we can automate it, in a way that will not only help your security staff identify threats, but that you can cut your staff down because these technologies are going to do the work of a lot of people. That has failed. We’re still stuck with [the reality that] you need smart people who understand computer, applications and networks, and a logging solution becomes a tool they can use to identify some of these things. Hopefully this has been a little bit of a wake-up call, and we can start looking at things a little differently and start putting people back into the equation.”

When you see organizations believing that their simplistic set of IDS or SIEM rules is enough for security, it's a sure sign that there's too much trust in automation.

Wednesday, September 28, 2011

Unrealistic Security Expectations - part 2

Unrealistic expectations are not only related to technology.  In fact, I believe it’s more common to see that in security policies and standards. Based on the unrealistic expectation that anything written in a policy will be blindly followed, we end up writing prescriptive documents describing everything an organization must do for security. Done! By putting words on paper, we solved our security problems!


The problem begins with the extremely unrealistic assumption that someone reads the security policy! Sometimes I try to understand how someone can possibly believe that a hundred-page security policy will be read; most of the time, reading the policy is not only unnecessary for people to do their jobs, it’s also something that will prevent them from working! It’s plain economics: there’s almost no incentive for them to read those documents. An assumption like that is pretty unrealistic, eh? So why do we keep being surprised when people don’t comply with the policy?


Anyway, there are processes we can use to force people to comply with policies and standards. But no process or mandate will help if we keep writing policies that are impossible to comply with. Ok, that sounds obvious, right? Well, it should, but there are lots of security policies out there just like that.


Even if it’s possible to comply, there’s another thing that will make a policy fail: Exemptions. In every organization with a security policy there’s a process to get exemptions. That’s OK, until you realize so many exemptions are being granted that the policy is simply wishful thinking. You shouldn’t expect a policy to act as a control if it’s not being followed. Yet many professionals do. The basic “enforcement rule” applies here: if you can’t enforce a policy, or if it’s easier to get an exemption than to comply, it doesn’t serve its purpose.


Discussions about the effectiveness of policies and standards usually come down to whether the bar is being set too high or too low. That’s not always the case. Sometimes the issue is how prescriptive the policy is. Prescriptive policies can only be applied where the current conditions are aligned with the original expectations of whoever wrote the policy. Do you remember the older version of the antivirus requirement in PCI DSS? The requirement had originally been written with Windows environments in mind. It was funny to see mainframe shops puzzled about how to comply with it. Less prescriptive policies carry far fewer expectations about the environment where they will be applied, reducing the need for exemptions.


However, it’s not as easy as just writing non-prescriptive policies and standards. Write them too open and you won’t be sure they will be interpreted the way they should be. Policies with generic requirements are often based on an unrealistic expectation of how they will be interpreted. Balance is the key here.


In the end, policies and standards are just that: guidelines and rules. They might not be followed. Have you ever thought about how your security will perform if people choose not to comply with your policies? Do it. You should build your defenses based on reality, not on unrealistic expectations.

Monday, September 19, 2011

Unrealistic Security Expectations - part 1

A frequent issue I have with some blog posts, articles and tweets from my security colleagues is how often they rely on unrealistic expectations. From the down-to-earth guy to the curmudgeon, it seems our whole field suffers from a collective illusion that executives will be reasonable when deciding on risk postures, that people will willingly comply with security policies, or that architecture end states will one day be achieved. If we want to really improve security and produce sensible results, it’s time for us to wake up to reality and deal with security without unrealistic expectations.

I won’t write about the human component of these expectations, about risk-related decisions and user behavior. At least on that subject I believe we’ve been seeing some good ideas, and people realizing we cannot expect behaviors to change and people to be conscious about security. For those who still don’t believe it, go Google “candy bar password”, just to mention one of the many studies that show how poor our security decisions are. My main concern is the technology landscape within organizations and the assumptions security professionals make about it. I just can’t help being surprised at how naïve my peers can be about what their networks will look like in the future.

It’s easier to explain what I’m talking about with an example. Back in distant 2004 I was discussing with the Wintel support team of the company I worked for what should be done about Windows NT 4 servers, since security patches wouldn’t be available anymore after the end of that year. At one point in the discussion there was a general perception that the risk of having those servers in our network wouldn’t be that high, as the plan was to eventually migrate everything to Windows 2000. When I left that company, a few years later, those servers were still around. Since then I’ve seen the same thing happen over and over again, in organizations of different sizes, countries and businesses.

IT changes are almost never implemented as “Big Bang” projects. There is always a phased approach. Pareto is always at work: 80% of the bad stuff is removed fairly soon and the rest stays around for a long time. An isolated situation like that wouldn’t be an issue, but in medium and large organizations we can see dozens of cases of older, unsupported, often insecure technology, configurations and processes just refusing to go away. That’s the nature of things and I can’t see it changing soon. The problem is how to build security in that reality. It’s just too common to see great security ideas failing to produce results because they depend on clean, stable environments. That was always the case for Identity Management projects (“oh, those identity repositories will be retired soon, don’t worry about those”), Log Management (“the new version that we’ll implement soon supports syslog-ng”), DLP and others. Security architects keep developing solutions that depend on perfect scenarios, scenarios that will never become reality. That’s how most security technology deployments fail.

Here’s what we need to do to change: design your security solutions to work with REAL environments. Assume that things will fail and will not be as expected. Security solutions should be resilient in those environments, simply because that’s what our networks look like. I don’t like it; I would really love to have perfect CMDBs available, all servers open to aggressive patching, all networks supporting 100% traffic capture for monitoring purposes. But that’s just not the truth.

It’s not just “design for failure”. It’s design around failure. Your network is a mess and it will always be like that, deal with it.

In the next part I’ll expand on the unrealistic expectations for policies and standards. Meanwhile, let me know what unrealistic expectations you see in security and how you think we should deal with them!

Thursday, September 1, 2011

Dilbert - Alice could add a mention to ROSI too

After the kitten, Alice could also say the security project will bring a huge ROI :-)

Wednesday, August 31, 2011

Software security

This tweet from Pete Lindstrom made me think for a while about software security:

@SpireSec: Does anyone really think you can completely eliminate vulns? If not, when is software security "secure enough" #makesmewannascream

No, I don’t think we can eliminate software vulnerabilities; Pete’s question is perfect. If we accept the fact that software will always have vulnerabilities, how can we define when it’s too much and when it’s acceptable?

I like one of his suggestions, some kind of “vulnerability density” metric. But it doesn’t look like the whole picture to me. In fact, I would probably favor software with more vulnerabilities but a better-managed patching process from the vendor over something with just a few vulnerabilities that are never patched, or whose patches are a nightmare to deploy. So, the factors that would be included in this assessment would be:

- Vulnerability density
- Average time from disclosure to patch by the vendor
- Patching process complexity/cost/risk

In short, it’s not only about how big the problem is, but also how easy it is to keep it under control.
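A hedged sketch of how those factors could be folded into a single comparison; the weights and the "worst case" ceilings are arbitrary assumptions, the only point being that density alone doesn't decide it:

```python
def exposure_score(vulns_per_kloc, avg_days_to_patch, patch_effort_days,
                   weights=(0.4, 0.4, 0.2)):
    """Lower is better. Each factor is normalized against an assumed 'bad' ceiling."""
    factors = (
        min(vulns_per_kloc / 5.0, 1.0),      # 5 vulns/KLOC treated as worst case
        min(avg_days_to_patch / 180.0, 1.0), # half a year to patch = worst case
        min(patch_effort_days / 10.0, 1.0),  # 10 person-days per patch cycle = worst case
    )
    return sum(w * f for w, f in zip(weights, factors))

# Many vulns but fast, cheap patching ...
print(exposure_score(3.0, 15, 1))   # ~0.29
# ... can beat few vulns that take forever to fix.
print(exposure_score(0.5, 300, 8))  # ~0.60
```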

Another interesting aspect is that those factors are completely dependent on the software provider. But factors from the client side are also important. If the technology environment you have in place is better prepared to protect Microsoft systems than Linux, a vulnerable Microsoft system is a smaller problem for you than a vulnerable Linux system. Would you prefer software with fewer vulnerabilities but less monitoring capability, or more visibility with more vulnerabilities? It will depend on how your security strategy is assembled.

So, comparing software in terms of security is not trivial. I’m going even further by saying it’s context dependent too.

ShackF00 » Infosec Subjectivity: No Black and White

I have noticed a trend in the infosec community over the past few years. A new idea or concept emerges, a few “thought leaders” espouse or eschew the idea, and many sort of “go along” with the “yes” or “no” mentality. Sure, there’s a bit of debate, but it seems to be largely confined to a similar group of rabble-rousers and trouble makers (of which I am one, unabashedly). Overall, though, here’s the rub: There are almost no security absolutes. Aside from some obvious things (shitty coding techniques, the use of WEP, hiring Ligatt Security to protect you, etc)…everything is in the gray area.

Let me say that again: There is no black, there is no white – only gray. Why? Because each case is different. Every company, every environment, every person and how they operate, etc. Many decry the buzz-laden overhyped acronym technologies like DLP. There are companies that are getting immense value out of DLP today. So no, it’s not just crap. What about compliance? Plenty of organizations see it as a headache, sure, but many are really benefiting from a structured approach and some sort of continual oversight or monitoring. So again, no absolutes. Some other examples, just things I have observed through consulting, being a practitioner in end user orgs, and teaching, as well as just having debates on various topics:

  • Security awareness: Some would argue security awareness programs are beneficial. If even 5 people change their behavior to be more security-conscious, then it’s a win, right? I recently argued that these *traditional* programs are worthless, and speculated that building security in is a better option. A guy I like and respect a lot, Ben Tomhave, argued that I’m totally off base, and connecting people to the consequences of their actions is a better move. Who’s right? Really, there’s a very solid chance we both are. One organization may take a draconian lockdown approach, others may take the “soft side”, but in reality, some of both is probably what’s needed. A great debate, and one that’s likely to continue for some time.
  • Metrics: This is another area where people tend to have wildly polar beliefs. Metrics rule! Metrics suck! Those that have latched onto the Drucker mentality that you cannot manage what you cannot measure largely fill the former camp, those that are just trying to keep their heads above water often say metrics are a waste of time. I’ve actually changed my position on metrics a few times – for me, it’s one of those areas that I just can’t draw a good bead on, and thus it falls squarely into the gray. My friend Alex Hutton is a huge proponent of metrics, and worked hard to overhaul this year’s Metricon conference. Alex believes in metrics, and he’s a smart dude. Many others have argued we’re trying desperately to “fit” security into business, and it’s a round hole / square peg issue. Another tough one – what do we measure? How do we do it? What are the tangible benefits? On the other side, if we DON’T measure things, how do we have a clue what is going on?
  • Pen Testing: Pen tests are awesome. Wait, no, they are a total waste of time. But we need them for compliance?! And yet another gray area emerges. I do a lot of pen tests. I would love to think they have value when I do them. But I’ve seen plenty of cases, and customers, that get them performed just to check a box for compliance. So what’s the answer? Hmmmm.

This list can go on and on. But infosec is such a subjective area, I think we all have to take a step back sometimes and realize that our passion and desire to “get things fixed” usually has the caveat that one size almost never fits all. I am guilty of this. I think many in the “echo chamber” are sometimes. The pendulum will swing one way, then another, but almost always settles somewhere in the middle…the gray area. I’m going to try harder to be more open-minded, and understand other points of view, even on topics I feel passionate about. Sounds like a New Years resolution, only in August…I know. But who puts a damn time frame on these things!? They surely must be wrong.

Great post; it summarizes my approach to security. Everything is gray until you know the context.

Thursday, August 25, 2011

Win Remes petition

OK, it’s not the first time. Back in 2009 I mentioned Seth Hardy’s petition to have his name added to the (ISC)2 Board of Directors election ballot. The process is crazy, requiring an endorsement from the current board or a lot of signatures just to have the right to include the name on the ballot. Now it’s Win Remes’ time, and I really hope it works. I’m not one of those very vocal critics of (ISC)2, as I also work with the organization on developing the ISSAP exam, but I really think some fresh air from the community would benefit the certification’s value. His suggestion to add a paper requirement to the CISSP would really make it more than just a bunch of multiple-choice questions, easy for anyone with good test-taking skills. So, go to his petition page here and help make the CISSP a meaningful certification.

Friday, August 12, 2011

Why is security not taken seriously?

Because of guys like this. Mr. Evans is the symbol of what's wrong in our field.

 

Well... but he sounds funny :-)

 

Thursday, August 11, 2011

Researchers decrypt data on mobile networks | InSecurity Complex - CNET News

Crypto expert Karsten Nohl at DefCon last year. (Credit: Seth Rosenblatt/CNET)

Researcher Karsten Nohl is continuing his crusade to get mobile operators to improve the security of their networks by releasing software that can turn phones into mobile data snoops of GPRS (General Packet Radio Service) traffic.

Using a GPRS interceptor, someone could "read their neighbor's Facebook updates," he told CNET in a brief interview last week. He planned to release the software during a presentation today at the Chaos Communication Camp 2011 in Finowfurt, Germany, near Berlin.

Karsten of Security Research Labs in Berlin and a co-researcher Luca Melette were able to intercept and decrypt data sent over mobile networks using GPRS using a cheap Motorola that they modified and some free applications, according to The New York Times. They were able to read data sent on T-Mobile, O2 Germany, Vodafone, and E-Plus in Germany because of weak encryption used, and they found that Telecom Italia's TIM and Wind did not encrypt data at all, while Vodafone Italia used weak encryption, according to the report.

One reason operators don't use encryption is to be able to monitor traffic, filter viruses, and detect and suppress Skype, he told the newspaper.

Nohl has been pointing out weaknesses in mobile networks for years in the hopes that operators will step up their security efforts. In August 2009, he released the encryption algorithms used by mobile operators on GSM (Global System for Mobile Communications) networks. Last year, he released software that lets people test whether their calls on mobile phones can be eavesdropped on.

If we stop for a moment to remember that lots of people out there consider their mobile data connections almost as secure as a VPN, this is very serious. As we saw from the rumours around Defcon last week, conferences with lots of people connecting back home through those networks provide a feast for whoever decides to sniff that traffic.

Tuesday, August 9, 2011

NetSPI Blog » Echo Mirage: Piercing the Veil of Thick Application Security

In recent years web application security has gotten a lot of attention. The advent of easy to use web proxies has brought a lot of attention to SQL injection and cross-site scripting vulnerabilities, and developers have taken note. Thick application security/development, however, is lagging in that respect. You can pierce the veil yourself and witness the unprotected underbelly of thick application security, because I’m about to teach you how to use a useful tool called Echo Mirage. Echo Mirage is a versatile local proxy tool that can be used to intercept and modify TCP payloads for local Windows applications. It allows users to launch a program behind its proxy or hook into an existing process. It also supports OpenSSL and Windows SSL. Using this tool sheds light on a whole slew of bugs and holes concealed by the thick application security illusion.

Keep in mind that this technique could be interpreted as reverse engineering. Depending on the license of the software you are testing, this could stray towards the grey side of legality. For the purposes of this tutorial, I have created my own C# SQL command handler.

Step 1: Acquire Echo Mirage from here: http://www.bindshell.net/tools/echomirage. The official release version is only 1.2, and the demonstrated version is 2.0, which you can preview here: http://www.bindshell.net/entry/31.html

Step 2: Open up Echo Mirage, and click File-> Execute. Choose the .exe for your file, and click OK. Click on the green Go arrow, and your application should start. Phonebooks, invoicing, and ERP systems are common examples of applications which hook into a database and could be vulnerable to this sort of attack.

Figure 1: Having selected my target executable, the path is listed in black.


Figure 2: After launching the application, the red text demonstrates that Echo Mirage is intercepting traffic from the target process.


Step 3: Initiate a connection to a remote database; while my slapdash SQL interface has a button labeled “connect,” many applications will be less clear about when a connection to a database is created. When I start the connection, Echo Mirage intercepts all the packets that I’m sending to the database. Note that even though the connection string is available, many recent implementations of SQL will encrypt the password before it goes over the wire.

Figure 3: Connection strings! My favorite!


Step 4: Create a query. It will be automatically intercepted by Echo Mirage, and you can relay whatever malicious queries you want. In another application this step could be running a search, updating a record, or generating a report. When sending your request, one limitation of Echo Mirage becomes apparent: it is unable to change the size of the data sent. What this means for a potential attacker is that sending a larger query allows for more space when injecting. There is little worry of sending a query that is too large; if you have extra space at the end of your injection simply comment the rest out. 

Figure 4: This is the query as sent from my interface


Figure 5: Echo Mirage captures the request


Step 5: Now that you have the query captured in Echo Mirage, overwrite some characters to inject. Try not to disrupt the formatting and only overwrite characters that were actually part of the query you sent.

Figure 6: The edited query, prior to sending


Figure 7: The results of the edited query


I hope this demonstration hits home and proves the necessity of input validation and parameterized SQL queries, even in thick client environments. As tools like Echo Mirage mature, this type of attack will only become more common and more dangerous.

This post from the NetSPI blog really helped me give some additional information to developers who didn't understand why we should move our fat client applications to a controlled terminal services environment before even thinking about becoming PCI compliant. Good stuff.
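For reference, here's a minimal sketch of the fix the NetSPI post argues for, using Python and SQLite just for illustration (the table and column names are made up); with a parameterized query, tampering with the value can't silently change the query's structure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, card TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'alice', '4111-xxxx')")

user_input = "alice' OR '1'='1"

# Vulnerable: the input becomes part of the SQL statement itself.
rows = conn.execute(
    "SELECT id, name FROM customers WHERE name = '" + user_input + "'").fetchall()
print("concatenated:", rows)   # the OR clause matches every row

# Parameterized: the input is only ever treated as a value.
rows = conn.execute(
    "SELECT id, name FROM customers WHERE name = ?", (user_input,)).fetchall()
print("parameterized:", rows)  # returns nothing
```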

Thursday, August 4, 2011

Black Hat and Defcon FUD season has just started!

It's the same thing every year. Last year it was the ATM and GSM network hacks. Now it's OSPF time.

The headlines about the new stunts presented in Vegas at this time of year always imply the sky is falling and we should all give up and hand over our data to Anonymous and "state-sponsored attackers". Today was no different:

OSPF flaw allows attacks against router domains, tapping of information flows

Looks pretty bad, eh? Until you find this little piece far below in the article:

"The exploit requires one compromised router on the network so the encryption key used for LSA traffic among the routers on the network can be lifted and used by the phantom router. The exploit also requires that the phantom router is connected to the network, Nakibly says." 

This reminds me of a guy who used to visit banks to show how SSL encryption was broken, doing a "live demo" of his attack; an attack that required, as a first step, the victim running an executable sent by the attacker via e-mail :-)

The vulnerability in OSPF might be pretty bad, but it's definitely not something that makes routers using that protocol "open to attacks".

The security press should start putting a little more emphasis on the attack pre-conditions and assumptions before reporting on new attack research. It would certainly avoid FUD and save us the time spent explaining to desperate executives why the whole network will not be immediately owned because of it.

PCI - Data at rest encryption and 3.4.1

Even if encryption is still the most discussed issue in PCI, I still have concerns about the correct interpretation of requirement 3.4.1. There is even an attempt at clarification in the PCI SSC FAQ, but IMHO it doesn't come close to clarifying the issue. From the FAQ:

"The intent of this requirement is to address the acceptability of disk encryption for rendering cardholder data unreadable. Disk encryption encrypts data stored on a  computer's mass storage and automatically decrypts the information when an authorized user requests it. Disk-encryption systems intercept operating system read and write operations and carry out the appropriate cryptographic transformations without any special action by the user other than supplying a password or pass phrase at the beginning of a session. Based on these characteristics of disk encryption, to be compliant with this requirement, the disk encryption method cannot have:

1) A direct association with the operating system, or

2) Decryption keys that are associated with user accounts."

It seems to me that the intent of the requirement is to protect the data from being directly accessed on the media (hard drives); otherwise, disk encryption wouldn't be enough even if it is completely managed outside the OS.

If the intent is to use encryption as an additional access control and segregation-of-duties mechanism, disk encryption would never be useful even if it's done outside the OS and without linking the keys to user accounts; take, for instance, SAN-based encryption. It's completely independent of the OS and the keys are not linked to user accounts, so it meets the requirement. However, it doesn't accomplish much in terms of risk reduction (besides protecting data on the media), as the control of who can access the data in the clear is still entirely managed by the Operating System (the data is presented in the clear by the underlying SAN system to the OS).

It's funny to see file- or application-level encryption vendors argue that only those approaches can meet the requirement, while storage vendors say exactly the opposite.

There is the general instruction to consider the requirement's intent. Again, I'm not sure there's enough clarity around the intent of requirement 3.4.1 - Does it try to protect against bypassing the Operating System controls logically (by getting administrator/root level access at the box containing the data) or physically (getting physical access to the media/disks containing the data)?

The implications of saying one or the other are quite big. Storage-based encryption won't protect against someone getting root access on the OS, as the data is provided in the clear from the storage system to the OS, so the attacker has open access to it. However, it still protects against someone grabbing (or getting physical access to) a hard drive (or even the whole array, depending on how the encryption is implemented by the storage system).

SAN based encryption is not performed by the OS and the keys are not linked to user accounts. In a crude interpretation, it meets the requirement. However, does it meet the original intent?

The PCI Council usually replies to questions with "work together with your QSA". That's great, but there are some requirements that are being interpreted by QSAs in completely opposite ways, such as this one. In some cases additional guidance has been provided by the Council, such as for IP telephony systems and virtualization. I believe the encryption-at-rest requirement requires (the requirement requires... funny wording :-)) additional clarification too. The latest version of PCI also requires a risk management program, so one could argue that the chosen solution should be aligned with the results of the risk management process. I'm not sure the PCI Council wants to leave such a sensitive issue subject to decisions based on the organization's risk appetite. As we know, the economics of payment card data security usually put the risk appetite of the organization and cardholder data security at opposite corners.

(there’s a very good document produced by the Securosis guys about this; you can find it here.)

SQL Injection is 95% SQL, and the Rest of InfoSec is the Same

I’ve been frustrated for a long time with the ‘teach me to hack’ mentality. Not because I have a problem with beginners (quite the opposite, actually), but because certain people just never get the concept of security testing in the first place.

Yes, “hacking” is a loaded term. I am using it as “being curious and learning about something to the point where you can make it do something other than what it was intended to do…”

Most hear about this skill and rush out to buy all the “hacking” books they can find. How can I hack SQL? How can I hack Linux? How can I hack web applications? There’s a really simple answer. Learn SQL. Learn Linux. Learn to code web applications. What people call “hacking” actually reduces perfectly into two simple things:

  1. Deep understanding of a technology
  2. Making it do something it’s not supposed to do

The beauty is that once you combine a deep understanding with a healthy dose of curiosity, all sorts of ways of abusing said system are presented to you.

This requires talent, skill, and practice — don’t misunderstand. And there are many hardcore developers who understand their technology extremely well but couldn’t hack a vegetable cart. Why? Because they lack curiosity and/or the attacker mindset, so they never get to #2.

Developing on, or mastering a technology, is not only the best method to becoming good at security, it’s actually the only method. Anything less is a 0 in a world where 1 is the standard. If you don’t know SQL then you don’t know SQL Injection. If you don’t know Linux then you can’t break Linux. And if you can’t code a web application then you aren’t really doing WebAppSec.

You can use blunt tools to take chunks out of these subjects (tutorials, automated tools, etc.), but to truly be good at breaking something you must know how it works. Anything less is hamfisting.

Don’t be a hamfister.

Miessler is right about it. I remember when I started trying some SQL injection attacks in my penetration tests. I only managed to make them work properly and get the data I was looking for after I stopped reading the SQL injection white papers and started reading more about SQL and the RDBMS documentation. That's valid for practically all aspects of black-box security testing.
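To make that point concrete, here's a little sketch (SQLite in memory, made-up schema): the injection only becomes useful once you know enough SQL to match column counts in a UNION and to ask the engine's own catalog for the schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (id INTEGER, name TEXT)")
db.execute("CREATE TABLE users (login TEXT, pw_hash TEXT)")
db.execute("INSERT INTO users VALUES ('admin', '5f4dcc3b...')")
db.execute("INSERT INTO products VALUES (1, 'widget')")

def search(term):  # the vulnerable query under test
    return db.execute(
        "SELECT id, name FROM products WHERE name = '" + term + "'").fetchall()

# Knowing the catalog table lets you enumerate the schema first ...
print(search("x' UNION SELECT 1, sql FROM sqlite_master --"))
# ... and knowing UNION needs matching column counts lets you pull the data.
print(search("x' UNION SELECT login, pw_hash FROM users --"))
```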

Wednesday, August 3, 2011

Risk Management again

Interesting tweet from @joshcorman this morning:

@joshcorman: I believe in the concept of Risk Management. What I seldom see is comprehensive/accurate knowledge to inform the outcomes.

I share the same feeling about rarely seeing enough useful and reliable data for that. I think we can go further and apply some basic economics here: risk management is useful and applicable when the cost of obtaining the data/knowledge required as input AND of informing the outcomes is lower than the potential impact of going wrong with other "educated guessing" techniques.

So, if you need to spend as much money/time to get reasonably accurate data to manage risk appropriately as you would eventually lose by going wrong with a guesstimate, maybe it's better to just not bother.
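A toy version of that comparison, just to make the economics explicit; every number here is an assumption:

```python
def worth_doing_risk_analysis(cost_of_data_and_analysis,
                              expected_loss_if_guess_is_wrong,
                              probability_guess_is_wrong):
    """Spend on formal risk management only if it costs less than what
    a wrong 'educated guess' is expected to cost you."""
    expected_cost_of_guessing = expected_loss_if_guess_is_wrong * probability_guess_is_wrong
    return cost_of_data_and_analysis < expected_cost_of_guessing

# Gathering decent data costs 80k; a wrong guess would cost ~200k and
# you'd guess wrong maybe 30% of the time: 80k > 60k, so don't bother.
print(worth_doing_risk_analysis(80_000, 200_000, 0.30))  # False
```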

Monday, August 1, 2011

Explaining hacking episodes


From XKCD, a great one. For all of us who work in security and have to explain news like this to family, friends, etc. :-)

Tuesday, June 14, 2011

Different perspective on SecurID

If there was so much effort from criminals to obtain RSA SecurID seeds, we can conclude that two-factor authentication has been a real barrier to attackers; otherwise, they wouldn't have bothered going after the seeds.

 

Monday, June 13, 2011

Lenny Zeltser on Information Security — 6 Ideas for a Protean Information Security Architecture

6 Ideas for a Protean Information Security Architecture

Proteus, as envisioned by Andrea Alciato. Source: Wikipedia

Proteus, a sea god, could change his shape to confuse adversaries and avoid capture. Thinking along these lines, I wonder how the security architecture of networks and applications might incorporate protean properties, making it harder (more expensive and time-consuming) for attackers to compromise our defenses?

An environment that often changes may be harder to attack, but it is also hard to manage. In fact, many vulnerabilities seem to be associated with our inability to securely and timely implement changes, such as deploying security updates or disabling unnecessary services.

To create a protean security architecture, we’ll need to think asymmetrically: what attributes can complicate attackers’ jobs more than they complicate the jobs of defenders? I am not sure how to do this, but I have a few ideas to get started:

  • Open “fake” ports on your perimeter firewall using a script, so that an external attacker is misinformed about what services are accessible from the Internet. Redirect the connections to low-interaction honeypots.
  • Rather than blocking or dropping traffic on the perimeter firewall, configure the device to send TCP packets that indicate a transmission error, making it hard for the attacker to distinguish between a bad connection and a blocked port.
  • Deploy honeytokens on your web server to mimic the appearance of web applications that aren’t actually installed there. This may stall and misdirect the attacker. Vary the type and location of the tokens periodically.
  • Mimic the appearance of Internet-accessible servers that seem to be accessible via protocols such as SSH by using honeypots (e.g., Kippo). This can slow down and misdirect the attacker.
  • Set up a DNS blackhole to redirect internal infected systems to websites that aren’t actually malicious by using a tool such as DNS Sinkhole. You can use a honeypot such as Dionaea to further learn about malware.
  • Use open cloud services to bring up irrelevant web and other servers that seem to be associated with your organization, but don’t host sensitive data. Periodically decommission them and bring up new ones.

My ideas seem to be gravitating towards using honeypots to implement an element of deception, but there should be other ways of creating an infrastructure that is changing slightly to confuse or misdirect attackers and their tools. Do you have any ideas?

Proteus was eventually captured by Menelaus, who found a way of ambushing Proteus and chaining him down. (Menelaus had an insider’s help, having received a tip from Idothea—Proteus’ daughter.) So a protean approach to defense isn’t foolproof—it is one of the elements we may be able to incorporate into an information security architecture to strengthen our resistance to attacks.

Related:

Lenny Zeltser

My dear little ugly baby is growing. With the type of threats organizations are facing these days, it really makes sense to put some more thought into honeytokens.
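As an example, the first idea on Lenny's list (fake ports handed to low-interaction listeners) can start as small as this sketch; the ports are placeholders, and a real deployment would forward connections to a proper honeypot and log somewhere more useful than stdout:

```python
import socket
import threading
from datetime import datetime

FAKE_PORTS = [2222, 3307, 8081]  # made-up ports; nothing legitimate listens here

def decoy(port):
    """A low-interaction listener: accept, log the source, give nothing back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        print(f"{datetime.now().isoformat()} decoy:{port} touched by {addr[0]}")
        conn.close()  # anyone probing this port deserves a closer look

if __name__ == "__main__":
    for p in FAKE_PORTS:
        threading.Thread(target=decoy, args=(p,), daemon=True).start()
    threading.Event().wait()  # keep the decoys running
```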

Friday, June 10, 2011

Information classification and Threat centric approaches

Always good to follow discussions between smart people in security. I suggest reading this nice pair of posts from Rob Bainbridge and Dominic White (SensePost blog).
 
As Rob said in his comment on Dominic's post, probably both are right. I believe the right approach is a mix of data-centric and threat-centric security. A good takeaway from Rob's post is the suggestion of working on a basic information categorization instead of the old sensitivity-level classification model; it's just more natural to people and avoids the "oh, my data is too important to me so it's probably top secret" effect.
 
On the other side, a good view of why a threat-centric approach is also important is Dominic's comment about pivoting and the consolidation of information containers. A threat-centric approach helps deal with those issues more than just trying to protect stuff according to classification labels.
 
This discussion just reinforces my suggestion of having two separate groups within the organization, each one with different roles (threat and protection) and bringing their findings and suggestions to the CSO (or a security architect) to define prioritization and strategy. That's probably how we could get the best from both approaches.

Wednesday, June 8, 2011

Good analysis of the LM case

Dave Kennedy wrote a very good post on the Verizon Business Security Blog about how the Lockheed Martin "breach" (was it really a breach?) is being handled. He points to the information being disclosed by Lockheed Martin and RSA and how that allows us to understand what had actually happened there.

The interesting aspect of this episode is that the only reasonable conclusion we can reach is that something really bad happened. If nothing had happened, LM would have been quick to provide enough details to show it wasn't a big deal. On the other hand, an organization that only detects it's been breached after finding malware in its internal network wallowing in gigabytes of highly sensitive data will probably try to release only vague statements such as "we detected a significant and tenacious attack on its information systems network".

Anyway, details about the attacker's methods would allow a lot of other organizations to better protect themselves; not only that, if the detection really happened at an early stage it would be quite beneficial to others (not to mention to LM's image) to know where to look for suspicious activity. As Dave says in his post:

At the end of the day, this could represent an opportunity for Lockheed Martin and EMC/RSA to set positive examples for communications among security professionals.  We, the good guys, are all in this together.  Many of us frequently express a longing for better defensive information sharing and bemoan how little timely, actionable information sharing there is.

These guys are some of the best honeypots the security community has out there; we should be doing something to leverage the information about attacks being gathered there. The first step is sharing that data.

UPDATE: Very good analysis from Dan Kaminsky on the subject here.

SecuriTeam Blogs » Simple passwords are the solution

ZDNet has a nice piece on why cheap GPU’s are making strong passwords useless. They are right, of course (though it’s pretty much been that way for 20 years, since the need for /etc/shadow) but they’re missing the obvious solution to the problem.

The solution is not to make passwords more complex. It’s making them less complex (so that users can actually remember them) and making sure brute force is impossible. We know how to do that, we just have to overcome a generation-old axiom about trivial passwords being easy to break (they are not, if you only get very few tries).

It’s not just cheap GPUs. Complex passwords are also the problem. Simple passwords are the solution.

Right on the spot. With the evolution of brute-forcing techniques we shouldn't be trying to fight those attacks with complex passwords; properly salted hashes and thorough protection of the offline password (I mean hash) databases are far more important than that. Online brute forcing can be handled with simple techniques such as timeouts, account lockouts and CAPTCHAs.

Of course, whenever the residual risk after all those measures is still not acceptable, it's better to go the two-factor way instead of adding complexity to passwords. Let's stop trying to improve this control, accept its use cases and limitations, and use different controls where context and risk require it.
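A rough sketch of those two controls working together, using only the Python standard library; the iteration count, lockout threshold and lockout window are illustrative numbers, and a real implementation would use a vetted library and persistent storage:

```python
import hashlib, os, time

def hash_password(password, salt=None, iterations=200_000):
    """Salted, deliberately slow hash so an offline dump is expensive to brute force."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, digest, iterations=200_000):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations) == digest

# Online side: a trivial lockout counter kills brute forcing even for simple passwords.
FAILURES, LOCKOUT_AFTER, LOCKOUT_SECONDS = {}, 5, 900

def login(user, password, salt, digest):
    locked_until = FAILURES.get(user, (0, 0))[1]
    if time.time() < locked_until:
        return "locked out"
    if verify(password, salt, digest):
        FAILURES.pop(user, None)
        return "ok"
    count = FAILURES.get(user, (0, 0))[0] + 1
    FAILURES[user] = (count, time.time() + LOCKOUT_SECONDS if count >= LOCKOUT_AFTER else 0)
    return "invalid"

# salt, digest = hash_password("correct horse battery staple")
# print(login("alice", "wrong guess", salt, digest))
```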

Tuesday, June 7, 2011

Lastpass

I understand getting cold feet when talking about putting your passwords in the cloud, but I must say that Lastpass's approach and posture regarding security issues is really worth mentioning. For those who don't remember or didn't pay attention, they had a security incident a few weeks ago. Check their blog to see what happened, steps taken, the current action plan and the post-mortem procedures:

http://blog.lastpass.com/

Really, that's how other companies should be working on situations like this.

Monday, May 30, 2011

Lenny Zeltser on Information Security — Tracking Known Malicious Websites by ETag Identifiers

Tracking Known Malicious Websites by ETag Identifiers

Anti-malware companies as well as organizations that protect their own networks benefit from keeping track of known malicious systems on the Internet. The goal is often to block inbound access from known malicious hosts and also to restrict outbound connections to them. The undesirable systems are typically identified using IP address, domain names and URLs. Research by CompuCom’s Ramece Cave suggests adding ETags to the list of identifiers of malicious websites.

ETag is an optional HTTP header that was designed to make it easier for web browsers to cache website contents, thus improving the pages’ load time by avoiding downloading content that the user retrieved earlier. ETag acts as a fingerprint of the web server’s content; if the content changes, the server will generate a new ETag, indicating that the browser’s prior copy of the content should no longer be used.

Attackers sometimes use the same instance of the malicious page and web server, but expose it using different domain or server names. Ramece found it effective to use ETag as the unique identifier of a malicious page. This seems more efficient than keeping track of the numerous domain or server names the attacker might use. CompuCom’s research team:

“Identified a single ETag associated with malware which could be used to filter 12 domains as well as identify compromised hosts trying to reach command and control domains.”

Based on this information, the team created an IPS rule to flag web traffic that included the malicious ETag.

While there are several sources of known malicious IPs and domains, I haven’t seen the infosec community discuss the use of ETags to track known malicious websites. Is this a promising approach or does some limitation make it impractical? Perhaps time will tell.

If this interests you, check out the 2-day Combating Malware in the Enterprise class I’ll teach in DC in July; code COINS-LZ gets you a 10% discount. Also, I’ll teach a more in-depth Reverse-Engineering Malware class on-line this summer; get a free iPad 2 if you sign up by June 22.

Lenny Zeltser

This is really interesting, and I'm surprised not to see it being discussed more extensively by the IPS, NGFW, network forensics and threat feed vendors. Shouldn't we be putting up some sort of public database of malicious ETags for those tools to use?
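Until something like that exists, a local version is trivial to sketch; the ETag values below are made up, and a real check would sit in a proxy or IDS rather than a script:

```python
import urllib.request

# A hypothetical local feed of ETags previously seen on malicious pages.
KNOWN_BAD_ETAGS = {'"5d8c72a5edda8d6a:0"', '"0815-malicious-sample"'}

def etag_of(url):
    """Fetch only the headers and return the server's ETag, if any."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.headers.get("ETag")

def looks_malicious(url):
    etag = etag_of(url)
    return etag is not None and etag in KNOWN_BAD_ETAGS

# print(looks_malicious("http://example.com/"))
```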

Friday, May 27, 2011

Vulnerability reporting in the age of social media - F-Secure Weblog : News from the Lab

Vulnerability reporting in the age of social media
Posted by Mikko @ 13:28 GMT

Last night, I was searching for an old email when I spotted this funny header:

[screenshot]

Somebody had a sense of humour, inserting a XSS joke in email headers.

I thought it was funny, so I posted about it to Twitter:

[screenshot]

Few minutes later, I saw Robin Jackson reply with this:

[screenshot]

That can't be real. No Twitter client would execute Javascript just because a Tweet would contain a "script" tag.

[screenshots]

To prove it's real, Robin posted a screenshot.

[screenshot]

The client he was using was Tweetdeck for Chrome. Time to inform the developers. And of course, they are on Twitter as well.

[screenshot]

Randy Janinda from Twitter's security team responded within minutes:

[screenshots]

And just two hours later I got the confirmation from Tom Woolway of the Twitter development team that the fix is out:

[screenshot]

Signing off,
Mikko


The security community working as it should. Collaboration, speed, effectiveness, no fussing around, quick response. Good to see it. Congrats to Mikko, Robin Jackson and the Tweetdeck (Twitter) guys.

Tuesday, May 24, 2011

ShackF00 » Less Talk, More Action

Earlier this month in NYC, my friend Marcus Ranum and I were having dinner and drinks after a day at the IANS forum. Marcus, in a lighthearted mood, posed the following question to me:

A fight breaks out between giant robots, pirates, and ninjas. Who wins?

We had a fun and spirited debate about this, and laughed at the sheer ridiculousness of the question itself – a pointless conversation, but fun, to be sure. The problem is, we’re having a lot of the same kinds of conversations in infosec right now.

Recently, my friend Josh Corman posted an article on CSO Magazine’s site entitled “The rise of the chaotic actor: Understanding Anonymous and ourselves”. As I would expect (coming from Josh), it is interesting, well-written, and insightful. It’s also totally, completely unimportant. Let me say that another way: IT’S A WASTE OF %*&^$ TIME. Now, lest you get the impression that I am bashing Josh, please know that I am not. I count him as a friend, he’s incredibly smart and talented. In fact, his Rugged Software project is one of the best, and likely most important, efforts underway in the infosec industry right now, and needs all the support it can get. But this? Drivel. And no, it’s not the content that chaps me. Not at all. Although, I must say, the use of D&D references crosses even MY boundary of geekiness acceptance.

Nope, not the content. What, then? The thing that pisses me off about this, and lit a fire under my ass yesterday, is that Josh, and CSO Magazine, put this out there with the disclaimer that this was “important”. Folks, it is not. It’s not because this kind of input is the equivalent of my conversation with Marcus – a watercooler discussion point, an anecdote, a thing to have a short chat and discuss casually – NOT something that will really change the fact of what we are dealing with. And what we are dealing with is the same problem we’ve had for a while now, in my opinion – too much blah blah blah, not enough elbow grease security.

I don’t blog a whole lot. I spend my time in a breakdown that consists of about 30% teaching people to fix shit (sometimes by breaking it first), 60% actually fixing shit (or breaking it first), and 10% speaking about these things. That is 10% of my time spent proselytizing or (hopefully) educating in some way, usually on a technical subject. What I see a lot of out there is people wasting their cycles debating shit that DOES NOT HELP ACTUALLY SECURE ANYTHING. This is not a good trend, folks. We need more do-ers, people who can put hands to keyboard and actually get some security done.

Josh and I had a spirited debate about this on Twitter. He reminded me of the Plan-Do-Check-Act cycle, and said we need to Plan before we Do. He’s right, of course. I’m not insinuating that. But this is not planning. This is mental masturbation. And too much planning, with too little doing, leads to “analysis paralysis” and that is a death-knell for your security program. I’d rather see a CISO who’s a former drill sergeant than one who is an endless pontificator of “what could be”. My friends Alex Hutton and Mike Dahn made small points that are valid – Alex reminded me that not all work is purely hands-on technically, as he and his team at Verizon compile metrics and risk data that all of us rely on. Totally valid, and that IS important. Mike nudged me and said that theory and practice must go together like PB and J (great analogy), and certainly there’s some truth to that as well. But if you are ALL theory, or spend too much time there, you don’t get around to the doing. And there’s a lot that needs doing. Check this stat from Alex and team’s latest Verizon Data Breach report:

Wow. If we spent just 10% of the time we waste on mental masturbation like “what do they want? who are they? are they nice people” kinds of crap on ACTUALLY hardening boxes, screening and pruning ACLs and FW rules, tuning IDS, performing sound vulnerability management practices, and actually fixing our code, we’d be in hella better shape. Are these conversations fun? Sure. Do we need to really rethink our focus? Maybe. I personally do not care if Anonymous is a secret league of 1337 grandmothers from Poland, or whether they want to hack me for vengeance, political motivation, or just plain old theft. Nope. Don’t care. I just know I have adversaries, and I need to protect my sensitive data. That’s what I care about, and that’s what you should care about too.

A few months ago I posted a post-RSA note on “Change we can Believe in”. I had grown tired of all the whining in this industry about how we “need change”. Well, here’s a change for ya: Stop wasting your time on crap like this that is not impactful unless you are a state agency. Most of us just need to hunker down and fix some things.


 

That was simply perfect. I completely agree with Dave, not only with his main point but also about the high quality of Corman's article. I wasn't actively following my Twitter feed when they discussed all this, but after reading the comment from Mike Rothman I decided to read both sides. I confess that Corman's piece was so ethereal I only scanned through parts of it.

It's interesting and important to debate the adversary's motives, means and opportunities; that's a crucial part (but not the whole) of the intellectual work of identifying priorities. But as Dave cleverly pointed out using the 96% number from this year's DBIR, we might end up being breached through obvious stuff while we discuss the colour and "chaos level" of the adversary. Again, using my blog's motto, balance is the key.

In terms of practical advice: if your organization is big enough to justify it, having separate teams working on different streams, such as Threat Intelligence and Vulnerability/Exposure Management, is probably the best way to deal with this, with both reporting their results to an executive who is prepared to check whether those two activities are complementing each other.

Friday, May 13, 2011

Reporting breaches to SEC

Just saw this in Yahoo! Finance:

On Thursday May 12, 2011, 8:00 am EDT

 

By Victoria McGrane and Siobhan Gorman, Reporters, The Wall Street Journal

 

A group of U.S. lawmakers wants the Securities and Exchange Commission to push companies to disclose when they have fallen victim to cyberattacks.

 

Three weeks after Sony Corp. was forced to shut down its PlayStation network by hackers who stole users’ information, the group, which includes Senate Commerce Committee Chairman Jay Rockefeller of West Virginia, on Wednesday sent a letter to the SEC asking it to issue guidance stating that companies must report when they have suffered a major network attack and disclose details on intellectual property or trade secrets that hackers may have stolen.

 

The SEC guidance should also clarify that existing corporate-risk disclosure requirements compel companies to disclose if they are vulnerable to cyberattacks, the five lawmakers, all Senate Democrats, said.

 

Read the rest of this post on the original site

This is really interesting and could change the way companies deal with breaches. I can see C-level executives asking the CSO what's being done to ensure they won't have to report anything to the SEC :-)

Wednesday, May 11, 2011

Web applications security - one size does not fit all

I was reading a very good post by George Hulme about an Application Security Program implementation and saw something that is mentioned quite frequently in this field: don't try to boil the ocean when pushing security to your developers. It sounds obvious when you read it, but there is an important question to ask when you assume you'll prioritize the most important apps and the most critical vulnerabilities: what about the rest?
 
It's an extremely valid question, especially when you look at some recent breach stories. Take HBGary as an example; it was breached through an SQL injection in an app that wasn't considered "critical". I've seen dozens of similar cases, so we can certainly say it's not that easy to dismiss non-critical apps or vulnerabilities. So, if we can't leave them behind, does that mean we have to go with the "boil the ocean" approach?
 
Not necessarily. There are multiple options to tackle application security issues. Building a robust SDLC and having developers who understand security is certainly "the best" way to avoid vulnerable applications, but we cannot forget the other "reactive" alternatives, such as IPS, WAF (Web Application Firewall) and other "silver bullet" boxes. So, if you prioritize your critical applications and the most critical vulnerabilities in your SDLC, be sure to add some other control to deal with "the rest". It's all about protecting everything that can be exploited, but with different assurance levels according to the importance of the assets and the cost of the controls.
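
A minimal sketch of that tiering logic, with made-up tier names and control sets (illustrative assumptions, not a standard or a product recommendation):

# Hypothetical mapping of application criticality to a baseline control set,
# so that even "the rest" gets some protection. Tiers and controls are
# examples only.

CONTROLS_BY_TIER = {
    "critical": ["secure SDLC gates", "manual code review", "pentest", "WAF blocking mode"],
    "medium":   ["automated static analysis", "WAF blocking mode"],
    "low":      ["periodic vulnerability scan", "WAF monitoring mode"],
}

def controls_for(app_name, tier):
    # Return the baseline controls expected for an app in a given tier.
    return {"app": app_name, "tier": tier, "controls": CONTROLS_BY_TIER[tier]}

if __name__ == "__main__":
    for app, tier in [("payments", "critical"), ("marketing-site", "low")]:
        print(controls_for(app, tier))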

Friday, April 29, 2011

Post Mortem lessons from Amazon

The AWS outage last week caused a surge in cloud reliability discussions. It turns out that using cloud service providers is much like using any other IT service: you must do your homework on how to deal with failures and follow appropriate vendor management procedures to choose wisely.

 

Having said that, Amazon is still the leader in cloud computing services, and in my opinion their reaction to this incident clearly shows why. They have just published an extremely detailed post mortem analysis, presenting the root causes, what is being done to avoid similar events in the future, and reasonable compensation for affected clients. It's also worth pointing out that they disclosed the root cause even though it was a change mistake; a very honest posture, in my view.

 

If all the service providers behave like that we'll definitely keep seeing an increase in business moving to the cloud. Congratulations to Amazon.  

Thursday, April 28, 2011

Must read for those working with vuln. management

Seriously, if you have a Vulnerability Management process, you MUST read it. You don't need to necessarily apply everything in the presentation, but the idea behind it should really be considered when putting together a strategy to deal with the massive number of vulnerabilities that are published every day.

The key word here is "intelligence": gathering more meaningful information and data that you can base your actions on. Beware of the "best practices" in Vulnerability Management... most of them don't include anything like that and just try to make your patching cycle wheel spin as fast as possible. That's not very effective and greatly increases the chance of breaking stuff.
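
As one illustration of what intelligence-driven prioritization could look like (this is my own toy sketch, not something from the presentation; the weights and fields are arbitrary assumptions):

# Toy prioritization sketch: rank vulnerabilities by combining severity with
# exploit availability and asset exposure, instead of patching everything in
# publication order. Weights are arbitrary examples.

def priority(vuln):
    # Higher score = patch sooner.
    score = vuln["cvss"]
    if vuln["exploit_public"]:
        score *= 1.5              # a working exploit is circulating
    if vuln["internet_facing"]:
        score *= 1.3              # the affected asset is exposed
    return score * vuln["asset_value"]   # 1 (low) to 3 (crown jewels)

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_public": False, "internet_facing": False, "asset_value": 1},
    {"id": "CVE-B", "cvss": 6.5, "exploit_public": True,  "internet_facing": True,  "asset_value": 3},
]

for v in sorted(vulns, key=priority, reverse=True):
    print(v["id"], round(priority(v), 1))
# CVE-B (lower CVSS, but exploited and exposed) outranks CVE-A.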

McAfee VirusScan Enterprise: False Positive Detection Generic.dx!yxk in DAT 6329

 

McAfee Labs have issued an alert that McAfee VirusScan DAT file 6329 is returning a false positive for spsgui.exe. This is impacting SAP telephone connectivity functionality.


McAfee have a work around for the issue documented in KB71739 https://kc.mcafee.com/corporate/index?page=content&id=KB71739

 

Chris Mohan --- Internet Storm Center Handler on Duty

They seem to be improving...at least it's not a core component of the OS this time :-)

Wednesday, April 20, 2011

Will we see the return of low level vulnerabilities?

With the push towards IPv6 (and all the protocols related to it, such as ICMPv6) and DNSSEC, a lot of vendors are rushing to add support for those protocols to their products. Vulnerabilities in protocols at the lower layers of the OSI stack haven't been common lately, but there were plenty of them when the Internet became popular (remember the Ping of Death?). The times when you could bring down a system with a simple "ping" seemed to be over, but now, with a lot of new code handling the basic plumbing being deployed, we'll probably see a new surge of vulnerabilities like those being exploited.

However, the scenario is quite different now. Some factors that may make things different:

  • The Internet now is slightly different from the one in the 90s... I wonder what would happen if someone found a new PoD today.
  • Developers know that their code will be attacked, that things like "buffer overflows" can be exploited. Big vendors have SDLCs in place.
  • The research community is bigger and better prepared. A lot of very good people trying to find bugs.
  • The tools to find bugs have also evolved. A lot of researchers are pointing their new shiny fuzzers at everything that runs code (see the toy example after this list).
  • More powerful and well-funded organizations are searching for "cyberweapons".
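
To give an idea of how low the bar for this kind of testing has become, here is a toy mutation fuzzer sketch; parse_packet is a hypothetical stand-in for whatever protocol handler you want to exercise, and the whole thing is illustrative rather than a real tool:

# Toy mutation fuzzer: flip random bytes in a seed payload and feed each
# variant to the code under test. parse_packet() is a hypothetical stand-in
# for a real protocol parser.

import random

def mutate(seed, flips=3):
    data = bytearray(seed)
    for _ in range(flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def parse_packet(payload):
    # Stand-in for the real parser; pretend a certain header byte crashes it.
    if payload and payload[0] == 0xFF:
        raise ValueError("unexpected header")

if __name__ == "__main__":
    seed = bytes.fromhex("450000541c4640004001")  # first bytes of an IPv4 header
    for i in range(1000):
        case = mutate(seed)
        try:
            parse_packet(case)
        except Exception as exc:
            print("case", i, "triggered:", repr(exc), "input:", case.hex())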

Over the last few years we've seen attackers' targets moving up the OSI layers. With all the new code being deployed, there's no reason to believe they won't revisit the lower levels looking for "lower hanging fruit" (pardon the pun).

Tuesday, April 19, 2011

Quick comments on the Verizon DBIR 2011 report

The Verizon Business guys have just delivered the 2011 DBIR. Again, a very nice job and one of the key sources CISOs around the world use for planning, decision making and prioritization.
 
The most commented-on point of this year's report is the huge drop in the number of records affected, even with more breaches included in the report. I think this is a case of over-analyzing, and some really stupid explanations are flying around. In my opinion it's simply a numbers issue: if you look at the report across its multiple editions, the number of breach cases (let's say instances) is more or less stable, within the same order of magnitude, and reflects the authors' growing effort to get more instances into their database.
 
The number of records, on the other hand, will always vary wildly, and unless it's handled with a lot of additional categorization and normalization it's not a good basis for useful conclusions. The number of records kept by different organizations ranges from hundreds to hundreds of millions, so a breach at a single organization with a huge number of records (a government agency, for example) would completely change the numbers of the entire report. The authors are aware of that, and whenever possible they try to make it clear in the report. Of course, a lot of people will skip all those words and go straight for the juicy charts :-)
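
A quick toy illustration of the effect, with invented record counts: one mega-breach dominates the total and the mean, while the median barely moves.

# Invented figures showing how a single huge breach dominates total and mean
# record counts while the median stays put.

from statistics import mean, median

typical = [1_200, 5_000, 9_800, 15_000, 40_000]
with_mega = typical + [90_000_000]   # add one mega-breach

for label, data in [("typical year", typical), ("with mega-breach", with_mega)]:
    print(label, "-> total:", f"{sum(data):,}",
          "mean:", f"{mean(data):,.0f}", "median:", f"{median(data):,.0f}")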
 
Anyway, I really hope Verizon allows us to play with their database in the future. Being able to produce our own charts, using filters based on different organization demographics, would greatly increase the value of the data for security planning. Maybe a two-way agreement (something like expanding the VERIS program, which by the way is already bringing nice results), where organizations submitting breach information would get access to the database, would not only make the report even better but also more useful for its consumers.

Monday, April 4, 2011

Beware of "low impact" in risk assessments

The details of the RSA breach emerged today and confirmed one thing I expected to see: the privilege escalation path taken by the intruder from a regular user (one of the victims of the spear phishing e-mail) to the target data. That was the strategy we used to choose in pentests 10 years ago, and I don't see why it wouldn't work now. It points to something interesting in the security industry, something that borders on massive cognitive dissonance: the illusion of "low impact" intrusion targets.

Why did the Titanic sink? OK, no failure that big can be attributed to a single root cause, but I'll choose one here to illustrate my point: the compartmentalization failure. The Titanic's hull was built so that a hole in it could not sink the whole ship, as the water would only flood one compartment, which could be isolated from the others. The problem with the iceberg was that it ripped the side of the ship in a way that flooded multiple compartments, sinking the whole ship. The threat assumption behind the hull's compartmentalization design was that the threat would be holes, not a gash. Wrong assumptions sink ships. And breach networks too.

Even though it's considered a best practice, it's not very common to see properly compartmentalized networks out there. When compartmentalization is applied, it's usually done on the server side, with multiple segregated networks and servers grouped according to criteria such as data classification or line of business. That's cool, but it mostly protects against intruders jumping from one group of servers to another. What about the user network?

Let's be fair, it's very hard to deploy appropriate network controls at the distribution layer. There are lots of switches, sometimes with very limited management capabilities, spread across different physical locations. Not to mention wireless networks and remote and mobile users. There are interesting products out there (most in the NAC realm) to help with that, but those networks are usually seen as less important than the server side, and incidents there are considered "low impact". That's bullshit. Most of the big breaches now start on the user's computer, where there will be someone willing to click on links and open files all over the place. It's easier to get a bridgehead on a user workstation than on a well protected and monitored server. From there the intruder will learn how the organization's infrastructure works, start harvesting interesting credentials and look for the target data, all in a part of the network that is not usually monitored. Can you detect, today, a brute-force authentication attack against the local built-in administrator account from one workstation to another? I mean, with no servers involved?
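
As a concrete example of the kind of detection that question implies, here is a minimal sketch over a simplified, made-up log format (in a Windows shop the raw signal would be failed-logon events such as Security event 4625, but the field names and the workstation subnet below are assumptions for illustration):

# Minimal sketch: flag possible workstation-to-workstation brute forcing of
# the built-in Administrator account. The event dicts are a simplified,
# made-up log format, not a real SIEM schema.

from collections import Counter

THRESHOLD = 20  # failed attempts from one source before alerting

def is_workstation(ip):
    # Assume workstations live in 10.20.0.0/16 -- purely an example.
    return ip.startswith("10.20.")

def detect(failed_logons):
    counts = Counter(
        (e["src_ip"], e["dst_host"])
        for e in failed_logons
        if e["account"].lower() == "administrator"
        and is_workstation(e["src_ip"]) and is_workstation(e["dst_ip"])
    )
    return [(src, dst, n) for (src, dst), n in counts.items() if n >= THRESHOLD]

if __name__ == "__main__":
    sample = [{"src_ip": "10.20.1.15", "dst_ip": "10.20.1.44",
               "dst_host": "WKS-044", "account": "Administrator"}] * 25
    for src, dst, n in detect(sample):
        print("ALERT:", n, "failed Administrator logons from", src, "to", dst)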

So, before deciding that some targets, especially user workstations, would only cause "low impact", remember to consider what an intruder could do from them inside your network. Even better, hire a pentest that starts from your user network, with your most critical data as the final target, and check how that test is seen by your security monitoring processes. The lessons from that exercise will most likely change the "low impact" classification of a lot of things in your organization, which will shake up your risk assessments and the prioritization of your security initiatives. And do it fast, before someone you'll end up calling APT does it for you.

Friday, April 1, 2011

World Economic Forum 2011 Risk Report

With all the discussion about risk measurement and how to present risk information, the World Economic Forum's report on global risks is full of great ways to present it. They managed to include things like risk perception and uncertainty in their graphical representations. A couple of good examples can be seen below:



By the way, "cyber risks" sit at the top of the "Risks to Watch" list; in other words, risks with a lot of uncertainty and hard-to-predict trends. It makes sense.

1 Raindrop: "I know" and "I don't know" schools of security architecture

Excerpt from Howard Marks’ July 2003 Memo “The Most Important Thing”:

"One thing each market participant has to decide is whether he (or she) does or does not believe in the ability to see into the future: the “I know” school versus the “I don’t know” school. The ramifications of this decision are enormous.

If you know what lies ahead, you’ll feel free to invest aggressively, to concentrate positions in the assets you think will do best, and to actively time the market, moving in and out of asset classes as your opinion of their prospects waxes and wanes. If you feel the future isn’t knowable, on the other hand, you’ll invest defensively, acting to avoid losses rather than maximize gains, diversifying more thoroughly, and eschewing efforts at adroit timing.

Of course, I feel strongly that the latter course is the right one. I don’t think many people know more than the consensus about the future of economies and markets. I don’t think markets will ever cease to surprise, or thus that they can be timed. And I think avoiding losses is much more important than pursuing major gains if one is to achieve the absolute prerequisite for investment success: survival."

In security architecture terms, I differentiate Identity & Access Services, which are designed to help the enterprise achieve some business goals (and despite what a lot of people say about ROSI, these services have ROI attached to them from day one), but these are implicitly an "I know" kind of service, or at least "I guess."

otoh, there are Defensive services like monitoring and logging, which implicitly say -  "I don't know" how I am going to be attacked, how and where things will fail, but I need to build a margin of safety into the system to be able to react if and when they do.

[figure: Security Triangle]

The Security Triangle shows that depending on whether you hold "I know" assumptions or "I don't know" assumptions, you'll end up with a different looking architecture. Of course, it's not a binary choice, you will have some of both, but there are always priorities and choices. The goal for security architects is to be clear about the choices, because whether you are trying to know or accepting that you don't know, your security services delivery, measurements and processes will vary.

When you are building out monitoring services, your goal is to identify assets and event types in order to increase visibility. This typically results in people, process and technology, like an IRT, that respond to catalysts and vectors that are often not known at the time the system is being built.

When you are building out Identity & Access services, you are assuming much more knowledge of subjects, objects, attributes, data and applications. This mapping typically manifests in architecture like Identity & Access Management systems, publishing and enforcing known known relationships.

In each of these cases the toolsets are different, how you staff for them is different, and the design and operations are totally different, but they all get lumped under the title "security."

Very good post from Gunnar Peterson. I've always thought there's a huge difference between these families of security technologies. Mike Rothman likes to define them as "let the good guys in" and "keep the bad guys out" technologies. I even wonder if anyone has ever tried to model their security teams that way, with something like an "external threat team" and an "internal control team". That would be interesting.
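
Just to make the contrast tangible, here's a tiny sketch of the two mindsets side by side (my own illustration, not from Gunnar's post; the policy table and threshold are made-up assumptions): an "I know" control decides over relationships it already knows, while an "I don't know" control just watches for outliers it couldn't have predicted.

# Illustration of the two families: an "I know" control (explicit access
# decision over known subjects and objects) versus an "I don't know" control
# (monitoring that flags whatever looks unusual). All data is made up.

# "Let the good guys in": decisions over relationships we already know.
ACCESS_POLICY = {("alice", "payroll-db"): "read"}

def authorize(user, resource, action):
    return ACCESS_POLICY.get((user, resource)) == action

# "Keep the bad guys out": we don't know the attack, so watch for outliers.
def flag_unusual(login_counts, factor=2.0):
    avg = sum(login_counts.values()) / len(login_counts)
    return [user for user, n in login_counts.items() if n > factor * avg]

if __name__ == "__main__":
    print(authorize("alice", "payroll-db", "read"))          # True: known relationship
    print(flag_unusual({"alice": 5, "bob": 4, "eve": 60}))   # ['eve']: unexpected behaviour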