Wednesday, August 31, 2011

Software security

This tweet from Pete Lindstrom made me think for a while about software security:

@SpireSec: Does anyone really think you can completely eliminate vulns? If not, when is software security "secure enough" #makesmewannascream

No, I don’t think we can eliminate software vulnerabilities; Pete’s question is perfect. If we accept the fact that software will always have vulnerabilities, how can we define when it’s too much and when it’s acceptable?

I like one of his suggestions, some kind of “vulnerability density” metric. But it doesn’t look like the whole picture to me. In fact, I would probably favor software with more vulnerabilities but a better managed patching process from the vendor over software with just a few vulnerabilities that are never patched, or whose patches are a nightmare to deploy. So, the factors I would include in this assessment are:

- Vulnerability density

- Average time from disclosure to patch by the vendor

- Patching process complexity/cost/risk

In short, it’s not only about how big the problem is, but also how easy it is to keep it under control.
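A back-of-the-envelope way to combine those factors is a weighted score. The sketch below is purely illustrative: the function name, the weights, and the 0-1 normalization are my own assumptions, not an established metric.

```python
# Hypothetical sketch: comparing software on "security manageability".
# Weights and the 0-1 normalization of each factor are illustrative
# assumptions, not anything standardized.

def manageability_score(vuln_density, days_to_patch, patch_cost,
                        weights=(0.4, 0.3, 0.3)):
    """Lower is better. Each factor is normalized to 0-1 by the caller."""
    w_density, w_time, w_cost = weights
    return (w_density * vuln_density
            + w_time * days_to_patch
            + w_cost * patch_cost)

# Vendor A: many vulns, but fast and painless patching
vendor_a = manageability_score(vuln_density=0.8, days_to_patch=0.1, patch_cost=0.1)
# Vendor B: few vulns, but patches are slow and a nightmare to deploy
vendor_b = manageability_score(vuln_density=0.2, days_to_patch=0.9, patch_cost=0.9)

print(vendor_a < vendor_b)  # True: A scores better despite more vulnerabilities
```

With these (made-up) weights, the vendor with more vulnerabilities but a well-run patching process comes out ahead, which is exactly the point above.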

Another interesting aspect is that those factors depend entirely on the software provider. But factors from the client side are also important. If the technology environment you have in place is better prepared to protect Microsoft systems than Linux, a vulnerable Microsoft system is a smaller problem for you than a vulnerable Linux system. Would you prefer software with fewer vulnerabilities but weaker monitoring capabilities, or more visibility with more vulnerabilities? It depends on how your security strategy is assembled.

So, comparing software in terms of security is not trivial. I’m going even further by saying it’s context dependent too.

ShackF00 » Infosec Subjectivity: No Black and White

I have noticed a trend in the infosec community over the past few years. A new idea or concept emerges, a few “thought leaders” espouse or eschew the idea, and many sort of “go along” with the “yes” or “no” mentality. Sure, there’s a bit of debate, but it seems to be largely confined to a similar group of rabble-rousers and trouble makers (of which I am one, unabashedly). Overall, though, here’s the rub: There are almost no security absolutes. Aside from some obvious things (shitty coding techniques, the use of WEP, hiring Ligatt Security to protect you, etc)…everything is in the gray area.

Let me say that again: There is no black, there is no white – only gray. Why? Because each case is different. Every company, every environment, every person and how they operate, etc. Many decry the buzz-laden overhyped acronym technologies like DLP. There are companies that are getting immense value out of DLP today. So no, it’s not just crap. What about compliance? Plenty of organizations see it as a headache, sure, but many are really benefiting from a structured approach and some sort of continual oversight or monitoring. So again, no absolutes. Some other examples, just things I have observed through consulting, being a practitioner in end user orgs, and teaching, as well as just having debates on various topics:

  • Security awareness: Some would argue security awareness programs are beneficial. If even 5 people change their behavior to be more security-conscious, then it’s a win, right? I recently argued that these *traditional* programs are worthless, and speculated that building security in is a better option. A guy I like and respect a lot, Ben Tomhave, argued that I’m totally off base, and connecting people to the consequences of their actions is a better move. Who’s right? Really, there’s a very solid chance we both are. One organization may take a draconian lockdown approach, others may take the “soft side”, but in reality, some of both is probably what’s needed. A great debate, and one that’s likely to continue for some time.
  • Metrics: This is another area where people tend to have wildly polar beliefs. Metrics rule! Metrics suck! Those that have latched onto the Drucker mentality that you cannot manage what you cannot measure largely fill the former camp, those that are just trying to keep their heads above water often say metrics are a waste of time. I’ve actually changed my position on metrics a few times – for me, it’s one of those areas that I just can’t draw a good bead on, and thus it falls squarely into the gray. My friend Alex Hutton is a huge proponent of metrics, and worked hard to overhaul this year’s Metricon conference. Alex believes in metrics, and he’s a smart dude. Many others have argued we’re trying desperately to “fit” security into business, and it’s a round hole / square peg issue. Another tough one – what do we measure? How do we do it? What are the tangible benefits? On the other side, if we DON’T measure things, how do we have a clue what is going on?
  • Pen Testing: Pen tests are awesome. Wait, no, they are a total waste of time. But we need them for compliance?! And yet another gray area emerges. I do a lot of pen tests. I would love to think they have value when I do them. But I’ve seen plenty of cases, and customers, that get them performed just to check a box for compliance. So what’s the answer? Hmmmm.

This list can go on and on. But infosec is such a subjective area, I think we all have to take a step back sometimes and realize that our passion and desire to “get things fixed” usually has the caveat that one size almost never fits all. I am guilty of this. I think many in the “echo chamber” are sometimes. The pendulum will swing one way, then another, but almost always settles somewhere in the middle…the gray area. I’m going to try harder to be more open-minded, and understand other points of view, even on topics I feel passionate about. Sounds like a New Years resolution, only in August…I know. But who puts a damn time frame on these things!? They surely must be wrong.

Great post, it summarizes my approach for security. Everything is gray until you know the context.

Thursday, August 25, 2011

Win Remes petition

OK, it’s not the first time. Back in 2009 I mentioned Seth Hardy’s petition to have his name added to the (ISC)2 Board of Directors election ballot. The process is crazy, requiring an endorsement from the current board or a lot of signatures just to have the right to put a name on the ballot. Now it’s Win Remes’ turn, and I really hope it works. I’m not one of the very vocal critics of (ISC)2, as I also work with the entity on developing the ISSAP exam, but I really think some fresh air from the community would benefit the certification’s value. His suggestion to add a paper requirement to the CISSP would make it more than just a bunch of multiple choice questions, easy for anyone with good test-taking skills. So, go to his petition page here and help make the CISSP a meaningful certification.

Friday, August 12, 2011

Thursday, August 11, 2011

Researchers decrypt data on mobile networks | InSecurity Complex - CNET News

Crypto expert Karsten Nohl at DefCon last year.

(Credit:Seth Rosenblatt/CNET)

Researcher Karsten Nohl is continuing his crusade to get mobile operators to improve the security of their networks by releasing software that can turn phones into mobile data snoops of GPRS (General Packet Radio Service) traffic.

Using a GPRS interceptor, someone could "read their neighbor's Facebook updates," he told CNET in a brief interview last week. He planned to release the software during a presentation today at the Chaos Communication Camp 2011 in Finowfurt, Germany, near Berlin.

Karsten of Security Research Labs in Berlin and a co-researcher Luca Melette were able to intercept and decrypt data sent over mobile networks using GPRS using a cheap Motorola that they modified and some free applications, according to The New York Times. They were able to read data sent on T-Mobile, O2 Germany, Vodafone, and E-Plus in Germany because of weak encryption used, and they found that Telecom Italia's TIM and Wind did not encrypt data at all, while Vodafone Italia used weak encryption, according to the report.

One reason operators don't use encryption is to be able to monitor traffic, filter viruses, and detect and suppress Skype, he told the newspaper.

Nohl has been pointing out weaknesses in mobile networks for years in the hopes that operators will step up their security efforts. In August 2009, he released the encryption algorithms used by mobile operators on GSM (Global System for Mobile Communications) networks. Last year, he released software that lets people test whether their calls on mobile phones can be eavesdropped on.

If we stop for a moment to consider that lots of people out there treat their mobile data connections as almost as secure as a VPN, this is very serious. As we saw from the rumours around Defcon last week, conferences where lots of people connect back home through those networks are a feast for whoever decides to sniff that traffic.

Tuesday, August 9, 2011

NetSPI Blog » Echo Mirage: Piercing the Veil of Thick Application Security

In recent years web application security has gotten a lot of attention. The advent of easy to use web proxies has brought a lot of attention to SQL injection and cross-site scripting vulnerabilities, and developers have taken note. Thick application security/development, however, is lagging in that respect. You can pierce the veil yourself and witness the unprotected underbelly of thick application security, because I’m about to teach you how to use a useful tool called Echo Mirage. Echo Mirage is a versatile local proxy tool that can be used to intercept and modify TCP payloads for local Windows applications. It allows users to launch a program behind its proxy or hook into an existing process. It also supports OpenSSL and Windows SSL. Using this tool sheds light on a whole slew of bugs and holes concealed by the thick application security illusion.
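To make the intercept idea concrete, here is a rough, hypothetical sketch of the core concept: a local relay that forwards TCP traffic in both directions and exposes each payload for inspection. Echo Mirage itself hooks the target process (and handles SSL); this stand-alone Python relay only approximates the concept, and all names in it are made up.

```python
# Hypothetical sketch of the intercept idea behind a tool like Echo Mirage:
# a local relay between client and server that sees every TCP payload in
# both directions. (Echo Mirage hooks the process directly and supports
# SSL; this is only the bare relay concept.)
import socket
import threading

def relay(src, dst, label):
    """Forward bytes from src to dst, printing each chunk for inspection."""
    while True:
        data = src.recv(4096)
        if not data:  # peer closed the connection
            break
        print(f"[{label}] {data!r}")  # an interactive tool would pause here
        dst.sendall(data)            # ...and let you edit before forwarding

def intercept(listen_port, server_host, server_port):
    """Accept one local client connection and splice it to the real server."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", listen_port))
    listener.listen(1)
    client, _ = listener.accept()
    server = socket.create_connection((server_host, server_port))
    threading.Thread(target=relay, args=(client, server, "c->s")).start()
    threading.Thread(target=relay, args=(server, client, "s->c")).start()
```

Pointing a thick client at the relay instead of the real server shows the same unprotected payloads the tutorial below demonstrates.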

Keep in mind that this technique could be interpreted as reverse engineering. Depending on the license of the software you are testing, this could stray towards the grey side of legality. For the purposes of this tutorial, I have created my own C# SQL command handler.

Step 1: Acquire Echo Mirage from here. The official release version is only 1.2; the demonstrated version is 2.0, which you can preview here.

Step 2: Open up Echo Mirage, and click File-> Execute. Choose the .exe for your file, and click OK. Click on the green Go arrow, and your application should start. Phonebooks, invoicing, and ERP systems are common examples of applications which hook into a database and could be vulnerable to this sort of attack.

Figure 1: Having selected my target executable, the path is listed in black.


Figure 2: After launching the application, the red text demonstrates that Echo Mirage is intercepting traffic from the target process.


Step 3: Initiate a connection to a remote database; while my slapdash SQL interface has a button labeled “connect,” many applications will be less clear about when a connection to a database is created. When I start the connection, Echo Mirage intercepts all the packets that I’m sending to the database. Note that even though the connection string is available, many recent implementations of SQL will encrypt the password before it goes over the wire.

Figure 3: Connection strings! My favorite!


Step 4: Create a query. It will be automatically intercepted by Echo Mirage, and you can relay whatever malicious queries you want. In another application this step could be running a search, updating a record, or generating a report. When sending your request, one limitation of Echo Mirage becomes apparent: it is unable to change the size of the data sent. What this means for a potential attacker is that sending a larger query allows for more space when injecting. There is little worry of sending a query that is too large; if you have extra space at the end of your injection simply comment the rest out. 

Figure 4: This is the query as sent from my interface


Figure 5: Echo Mirage captures the request


Step 5: Now that you have the query captured in Echo Mirage, overwrite some characters to inject. Try not to disrupt the formatting and only overwrite characters that were actually part of the query you sent.

Figure 6: The edited query, prior to sending


Figure 7: The results of the edited query


I hope this demonstration hits home and proves the necessity of input validation and parameterized SQL queries, even in thick client environments. As tools like Echo Mirage mature, this type of attack will only become more common and more dangerous.
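The fixed-length constraint from Step 4 can be sketched in a few lines: since the tool cannot change the size of the intercepted payload, an injected query has to be padded out to exactly the original length, with a SQL comment hiding whatever trails it. The function and the query strings below are my own illustration, not part of the original tutorial.

```python
# Sketch of the fixed-length constraint: the injected query must occupy
# exactly as many bytes as the intercepted one, so we append a SQL line
# comment ("--") and pad with filler. Query strings are illustrative.

def pad_injection(original_query, injected_query, filler=" "):
    if len(injected_query) > len(original_query):
        raise ValueError("injection longer than the intercepted query")
    padded = injected_query + " --"  # comment out whatever would trail it
    if len(padded) > len(original_query):
        raise ValueError("no room left for the comment marker")
    return padded + filler * (len(original_query) - len(padded))

original = "SELECT name, phone FROM contacts WHERE id = 1042"
injected = pad_injection(original, "SELECT name FROM sysusers")
assert len(injected) == len(original)  # same size, so the tool can send it
```

This is also why, as the author notes, a larger original query is good news for the attacker: more bytes means more room to inject.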


This post from the NetSPI blog really helped me give some additional information to developers who didn’t understand why we should move our fat client applications to a controlled terminal services environment before even thinking about becoming PCI compliant. Good stuff.

Thursday, August 4, 2011

Black Hat and Defcon FUD season has just started!

It's the same thing every year. Last year it was the ATM and GSM network hacks. Now, it's OSPF's turn.

The headlines about the new stunts presented in Vegas at this time of year always imply the sky is falling and we should all give up and hand over our data to Anonymous and "state sponsored attackers". Today was no different:

OSPF flaw allows attacks against router domains, tapping of information flows

Looks pretty bad, eh? Until you find this little piece far below in the article:

"The exploit requires one compromised router on the network so the encryption key used for LSA traffic among the routers on the network can be lifted and used by the phantom router. The exploit also requires that the phantom router is connected to the network, Nakibly says." 

This reminds me of a guy who would visit banks to show how SSL encryption was broken, doing a "live demo" of his attack. An attack that required, as a first step, the victim to run an executable sent by the attacker by e-mail :-)

The vulnerability in OSPF might be pretty bad, but it's definitely not something that leaves routers using that protocol "open to attacks".

The security press should start putting a little more emphasis on the attack pre-conditions and assumptions before reporting on new attack research. It would certainly avoid FUD and save us some time from explaining to desperate executives why the whole network will not be immediately owned because of that. 

PCI - Data at rest encryption and 3.4.1

Even though encryption is still the most discussed issue in PCI, I still have concerns about the correct interpretation of requirement 3.4.1. There is even an attempt at clarification in the PCI SSC FAQ, but IMHO it doesn't accomplish anything close to "clarifying" the issue. From the FAQ:

"The intent of this requirement is to address the acceptability of disk encryption for rendering cardholder data unreadable. Disk encryption encrypts data stored on a computer's mass storage and automatically decrypts the information when an authorized user requests it. Disk-encryption systems intercept operating system read and write operations and carry out the appropriate cryptographic transformations without any special action by the user other than supplying a password or pass phrase at the beginning of a session. Based on these characteristics of disk encryption, to be compliant with this requirement, the disk encryption method cannot have:

1) A direct association with the operating system, or

2) Decryption keys that are associated with user accounts."

It seems to me that the intent of the requirement is to protect the data from being directly accessed from the media (hard drives); otherwise, disk encryption wouldn't be enough even if it is completely managed out of the OS.

If the intent is to use encryption as an additional access control and segregation of duties mechanism, disk encryption would never be useful even if it's done out of the OS and without linking the keys to user accounts; take, for instance, SAN based encryption. It's completely independent of the OS and the keys are not linked to user accounts; so, it meets the requirement. However, it doesn't accomplish much in terms of risk reduction (besides protecting data in the media), as the control of who can access the data in clear is still entirely managed by the Operating System (the data is presented in clear by the underlying SAN system to the OS).

It's funny to see that file or application level encryption vendors defend that only those approaches can meet the requirement, while storage vendors say exactly the opposite.

There is the general instruction to consider the requirement's intent. Again, I'm not sure there's enough clarity around the intent of requirement 3.4.1 - Does it try to protect against bypassing the Operating System controls logically (by getting administrator/root level access at the box containing the data) or physically (getting physical access to the media/disks containing the data)?

The implications from saying one or the other are quite big. Storage based encryption won't protect against someone getting root access on the OS, as the data is being provided in clear from the Storage system to the OS, so the attacker has open access to it. However, it still protects against someone grabbing (or getting physical access to) a hard drive (or even the whole array, depending on how the encryption is implemented by the storage system).

SAN based encryption is not performed by the OS and the keys are not linked to user accounts. In a crude interpretation, it meets the requirement. However, does it meet the original intent?

The PCI Council usually replies to questions with "work together with your QSA". That's great, but some requirements are being interpreted by QSAs in completely opposite ways, and this is one of them. For some cases additional guidance has been provided by the Council, such as for IP telephony systems and virtualization. I believe the encryption at rest requirement requires (the requirement requires... funny wording :-)) additional clarification too. The latest version of PCI also requires a risk management program, so one could argue that the chosen solution should be aligned with the results of the risk management process. I’m not sure the PCI Council wants to leave such a sensitive issue subject to decisions based on the organization’s risk appetite. As we know, the economics of payment card data security usually put the organization’s risk appetite and cardholder data security at opposite corners.

(there’s a very good document produced by the Securosis guys about this; you can find it here.)

SQL Injection is 95% SQL, and the Rest of InfoSec is the Same

I’ve been frustrated for a long time with the ‘teach me to hack’ mentality. Not because I have a problem with beginners (quite the opposite, actually), but because certain people just never get the concept of security testing in the first place.

Yes, “hacking” is a loaded term. I am using it as “being curious and learning about something to the point where you can make it do something other than what it was intended to do…”

Most hear about this skill and rush out to buy all the “hacking” books they can find. How can I hack SQL? How can I hack Linux? How can I hack web applications? There’s a really simple answer. Learn SQL. Learn Linux. Learn to code web applications. What people call “hacking” actually reduces perfectly into two simple things:

  1. Deep understanding of a technology
  2. Making it do something it’s not supposed to do

The beauty is that once you combine a deep understanding with a healthy dose of curiosity, all sorts of ways of abusing said system are presented to you.

This requires talent, skill, and practice — don’t misunderstand. And there are many hardcore developers who understand their technology extremely well but couldn’t hack a vegetable cart. Why? Because they lack curiosity and/or the attacker mindset, so they never get to #2.

Developing on, or mastering a technology, is not only the best method to becoming good at security, it’s actually the only method. Anything less is a 0 in a world where 1 is the standard. If you don’t know SQL then you don’t know SQL Injection. If you don’t know Linux then you can’t break Linux. And if you can’t code a web application then you aren’t really doing WebAppSec.

You can use blunt tools to take chunks out of these subjects (tutorials, automated tools, etc.), but to truly be good at breaking something you must know how it works. Anything less is hamfisting.

Don’t be a hamfister.

Miessler is right. I remember when I started trying SQL injection attacks in my penetration tests: I only managed to make them work properly, and get the data I was looking for, after I stopped reading SQL injection white papers and started reading the SQL and RDBMS documentation instead. That's valid for practically all aspects of black box security testing.
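The classic illustration of Miessler's point: the attack only makes sense once you can read the query the application is building. The sketch below uses Python's built-in sqlite3; the table and credentials are made up for the demo.

```python
# Why knowing SQL is the prerequisite for SQL injection: you have to see
# the query the application builds. Table and data are invented for the demo.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation: attacker input becomes SQL syntax.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Parameterized query: input stays data, never SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

# The tautology bypasses the password check only in the vulnerable version
print(len(login_vulnerable("alice", "' OR '1'='1")))  # 1 row: logged in
print(len(login_safe("alice", "' OR '1'='1")))        # 0 rows: rejected
```

Understanding why the `' OR '1'='1` payload works (AND binds tighter than OR, so the tautology matches every row) is exactly the "learn SQL first" knowledge the post is talking about.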

Wednesday, August 3, 2011

Risk Management again

Interesting tweet from @joshcorman this morning:

@joshcorman: I believe in the concept of Risk Management. What I seldom see is comprehensive/accurate knowledge to inform the outcomes.

I share the same feeling: we rarely see enough useful and reliable data for that. I think we can go further and apply some basic economics here: Risk Management is useful and applicable when the cost to obtain the necessary data/knowledge, both as input and to inform the outcomes, is lower than the potential impact of going wrong with other "educated guessing" techniques.

So, if getting reasonably accurate data to manage risk properly costs about the same as what you would eventually lose by going wrong with another guesstimation method, maybe it's better to just not bother.
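The break-even argument fits in a few lines. The function and all the numbers below are invented purely to illustrate the comparison:

```python
# Toy sketch of the break-even argument: invest in risk-management data
# only when its cost is below the loss it avoids relative to guessing.
# All numbers are invented for illustration.

def worth_doing(cost_of_rm_data, expected_loss_guessing, expected_loss_with_rm):
    avoided_loss = expected_loss_guessing - expected_loss_with_rm
    return cost_of_rm_data < avoided_loss

# Spending 50k on data that cuts expected loss from 200k to 120k: worth it
print(worth_doing(50_000, 200_000, 120_000))   # True
# Spending 100k to avoid only 80k of loss: just guess
print(worth_doing(100_000, 200_000, 120_000))  # False
```

Of course the hard part in practice is that the expected-loss figures are exactly the data we seldom have, which is Corman's point.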

Monday, August 1, 2011

Explaining hacking episodes

From XKCD, great one. For all those who work on security and have to explain those news to family, friends, etc :-)