Friday, November 26, 2010

How to measure the success of your security strategy?

The problem with metrics and measurements of security efficiency is that measurement is often done from the control perspective, not on the actual results (I like the way Bejtlich sees it). So, there is no way to answer two important questions required to define the success of a security strategy:

  •  Are the necessary controls in place?

  • Are those controls effective?
It is important to note that even when the answer is "no" to both questions, an organization can still present a good incident history. That may happen because the controls in place don't provide the information necessary to identify breaches (the organization doesn't know it has been victimized) or simply because it hasn't happened yet. That's quite common for fire incidents; no fire history does not mean that a building without fire extinguishing systems is secure against fire. In order to properly measure security, the assessment must be done in two steps:

  •  Identify the threat level for the organization

  • Test the security posture in the same way as the identified potential threats would materialize
The threat part is the easiest. Currently available data about breaches, such as the Verizon Business Data Breach Report, can point to the most common breach types, which can be translated into threat models for each organization profile. The ideal for this assessment is to mix generalized information (threats common to every organization at similar levels, such as regular malware) with specific data for the target organization. The main threats for the financial industry are different from those for utility organizations, for example. Having identified the main threats, the tests that need to be performed can then be picked from a standardized list. What's the difference between these tests and the tests currently performed for PCI-DSS, SAS-70, ISO 27001 and other assessments? The difference is that most of those standards are control oriented, in the sense that their tests verify whether a specific set of controls is in place and working properly. However, they are not always effective in identifying whether the controls in place are relevant to the threats facing the organization, and whether the effectiveness of those controls really affects the likelihood of those threats materializing. A good example is antivirus deployment. You may be able to show 99% coverage of AV installed and updated on the organization's workstations, but that doesn't really say much about the organization's ability to prevent impact from malware attacks. I'll use this example to provide a better understanding of my suggested approach: a payment processor company goes through breach reports and identifies that one of the biggest threats to organizations in that field is card data being stolen by malware. There are several ways of testing the organization's ability to defend against that threat, such as:

  •  Remove one of the corporate desktops from the network and try to execute common malware found in the Internet on that machine

  • Execute a Proof of Concept customized malware on a corporate desktop

  • Execute a PoC customized malware on a corporate desktop that tries to send out a file containing sample card numbers to the Internet
You can see from these tests that threat resistance can be tested at different levels. A series of tests against the same threat can be designed with different levels of assurance. The organization can choose which to use according to the importance of that threat to its profile and the impact and cost of the testing procedure itself. An interesting approach for this kind of assessment would be the development of a common database of tests, each of them linked to the threats being replicated and the level of assurance they can provide. With that database in hand, an organization can build a test set according to its needs and verify whether its security strategy (and posture) works properly. Going a step further, security standards could be written to require specific sets of tests or minimum assurance levels for each test type. Organizations wouldn't be required to implement specific controls anymore, but to withstand a series of tests that replicate the most important threats to that kind of organization. No more checklist based security. It would be something similar to vehicle crash tests and those fire resistance ratings for cabling or safes: "resists fire for up to 30 minutes". The vulnerability scanning requirements from PCI-DSS already provide some level of testing similar to what I'm describing. Things like "from different points of the internal network, scan for common services and try to authenticate with default/blank passwords" or "from different points of the internal network, scan for the target data in open shares" would also be tests to be performed during an assessment.
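The common database of tests described here could be sketched as a simple mapping from threats to tests with assurance levels. A minimal, hypothetical structure (the threat IDs, test descriptions and levels are illustrative, not from any real standard):

```python
# Hypothetical sketch of a shared threat/test database.
# Threat IDs, test descriptions and assurance levels are illustrative only.
TEST_DB = [
    {
        "threat": "card-data-stealing-malware",
        "test": "run commodity malware sample on an isolated corporate desktop",
        "assurance": 1,
    },
    {
        "threat": "card-data-stealing-malware",
        "test": "run PoC custom malware that exfiltrates sample card numbers",
        "assurance": 3,
    },
    {
        "threat": "default-credentials",
        "test": "scan internal network for services with default/blank passwords",
        "assurance": 2,
    },
]

def build_test_set(threats, min_assurance=1):
    """Pick the tests matching an organization's threat profile."""
    return [t for t in TEST_DB
            if t["threat"] in threats and t["assurance"] >= min_assurance]
```

An organization would filter this database by its own threat profile and the assurance level it wants, e.g. `build_test_set({"card-data-stealing-malware"}, min_assurance=3)`.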
The assurance level could be changed by running more targeted tests without scanning procedures (providing the list of critical servers to the tester, for example), by making them more frequent (as is usually done with vulnerability scans) or at non-regular intervals, and even by leveraging internal knowledge of the more important systems, common passwords and keywords within the organization, and so on. The tests should be used to validate the effectiveness of monitoring and response processes too. Can you see how different this is from a checklist item saying "are the logs being reviewed?"? This is real security; this is constant testing, results driven, not controls driven. The tests performed should be constantly reviewed to reflect changes in the threat landscape and even what is happening within the organization (for example, more tests targeting internal access control weaknesses during major layoffs). An interesting aspect of evolving the security posture based on the results of those tests is that the control set doesn't need to follow standard frameworks or best practices. In my experience, the ability to apply controls is heavily influenced by the maturity of the organization in other IT aspects. Every security professional knows that building security into the SDLC is the best way to approach application security, but for an organization facing challenges such as low development process maturity and independent development groups, it may be easier to tackle application related threats with application firewalls, for example. Same thing for malware, where adding more "anti-malware" technologies can be replaced by different approaches such as thin clients, less targeted Operating System platforms or a white-list based approach to software execution.
Each of these approaches will appeal to different organizations depending on their approaches and maturity levels for desktop/client computing, software distribution and even IT consumerization. In order to apply this different model for a security strategy I see two major challenges, but I believe they are easier to handle than those we face today with the current controls driven approach. One is related to the security testing itself. The creation of a common set of tests that are results driven and that map to specific and real threats may not be as easy as I'm making it sound. There is a risk we'll end up with watered down tests created by those generally incompetent but C-level influencing big consulting companies, tests that would not be very different from the current checklist based controls testing. The other challenge is the ability of security professionals to identify the appropriate measures to tackle the identified threats. There is a huge reliance on (also watered down) "best practices" disguised as control frameworks today, with a lot of lazy guys thinking that security is achieved just by implementing this or that bunch of controls. They will do only that and nothing more. They put in place controls that are not the best for their specific circumstances, and even controls that are not necessary at all, without thinking even for a minute about anything that is not part of that standard list (of 12 high level requirements, anyone? :-)). Does it make sense to follow this path? Yes, if these two challenges are easier to solve than those we currently face. If not, it's time to find another alternative. After all, I refuse to believe we have already found the most efficient way to do security.

Thursday, November 4, 2010

Crazy ideas to think about: Defense x Security

We love to use analogies to discuss and illustrate information security concepts. We often see people referring to Sun Tzu's Art of War, mentioning Army combat strategies and using military terms. Well, have you ever considered that information security mixes concepts from two different things, defense (like the Army protecting the borders and interests of a country) and internal security (law enforcement entities, such as the police)? Anyone who works for one of those entities knows that they apply different methodologies, techniques, concepts and tools. So, shouldn't we be applying this separation in information security too? Here's the idea to consider: is it worthwhile (valuable? efficient?) to organize your information security strategy into two different components, Defense and Internal Security? Defense focusing on external threats, Internal Security on compliance, policy enforcement and access control? Let me know what you think...

Friday, October 22, 2010

Is it really incompatible?

It was interesting to read Gunnar Peterson's rant this week about firewalls getting the number 1 spot in CSO budgets. For those who haven't seen it, here is the core of it: "I had to check the date to make sure that it wasn't 1995 when I read this

The survey of IT pros and C-level executives from 450 Fortune 1000 companies -- commissioned by FishNet Security -- also found that 45 percent say firewalls are their priority security purchase, followed by antivirus (39 percent), and authentication (31 percent) and anti-malware tools (31 percent).

And what threats are these IT Pros and C-level execs concerned about?

Nearly 70 percent say mobile computing is the biggest threat to security today, closely followed by social networks (68 percent), and cloud computing platforms (35 percent). Around 65 percent rank mobile computing the top threat in the next two years, and 62 percent say cloud computing will be the biggest threat, bumping social networks.

Let's see what do mobile computing, social networking, and cloud computing all have in common? Oh yes, they all bypass the firewall's "controls"! How do you reconcile spending on something (firewalls) that does not address any of your top threats? This dichotomy is infosec's biggest problem. We have plenty of good controls and processes to use, what we don't have is enough talent in infosec to integrate them and put them to use." I will not disagree with Gunnar that there is a chronic problem of incompatibility between the most common security controls being deployed and the major threats/concerns. But I'm also a strong advocate of more careful, data-driven approaches, like the New School guys. And in this case my concern is that Gunnar wants to see a direct cause-effect relation between "purchase priority" and "threats". I believe it's reasonable to expect that, but there are some things to consider that can prevent it from happening. Yes, there should be a connection, but only to the extent of "strategy-related spending". When discussing IT expenses we should remember that budgets are normally split between operations and capital expenses. Depending on how intense the ongoing infrastructure refresh initiatives are, you'll see more dollars being spent on that kind of thing than on things related to the new threats, just because you need to keep things running. If the organization is going through a big physical expansion, for example, it will eventually need to put money into things like networking gear. Would that be wrong just because the current innovation focus (and also the threats) is not on the network infrastructure? I don't think so. Think about this as the "Maslow pyramid" of IT. You'll spend money on the upper layers only when the lower layers are stable.
(I'm purposely ignoring more radical approaches such as the Jericho Forum stuff and cloud-based stuff, as not all organizations can afford to quickly break IT paradigms every time there's a new trend out there - yes, those new things can help organizations move faster and avoid being trapped in the continuous maintenance of the ) The fact that there is a disparity between top threats and top expenses might not necessarily be related to a lack of understanding, skills or security talent. We can blame security professionals for focusing only on infrastructure components, but it only makes sense to do so when they have enough resources AND the option to allocate them as they want. So, if your budget covers only your operating expenses, how can you even try to introduce radical changes to your security model? Yes, it's probably perpetuating the hamster wheel of pain, but changing the status quo will normally require an initial increase in resources and focus (yes, it's not only about money - sometimes you just don't have time!!) that not all organizations concede to their CSOs.

Friday, October 1, 2010

If cyberwar and cyberterrorism are real, this is a target

I was reading this post from CBC News about the "flash crash" that occurred in the Dow Jones last May. The SEC report says it was entirely caused by a mistake from a single firm. Hey, the index fell 1000 points in less than one hour! With all this talk about Stuxnet around, can you imagine the impact of a "stock trading Stuxnet"? If a single firm can cause that, a worm capable of doing the same thing with trading systems would cause huge losses to the market and, with the Dow Jones as an example, to the US as a country. To make things worse, trading systems are also becoming more and more standardized, using open protocols like FIX, which makes it even easier to develop such malware. I can also say that there are a lot of non-IT people developing software for those trading companies, which means that best practices in software development are probably not being followed. So, there is a huge target, the opportunity and certainly people with means. That's the classic triad for "shit happening".

Applied Behaviour Analysis

Very good post from Alex Hutton, one of the best security posts of the past months, for sure. It really seems that ABA has its place in the infosec field. I'm just curious about why Alex is talking about systems and network traffic as behaviour, when ABA theory has a better place for that, the "environment". Even when we start thinking about actions to change behaviour (of the attackers? The "users"?), that's usually done by manipulating the environment. And if we end up finding that those subjects usually have similar behaviours, we'll probably find that the differences are mostly in the diverse environments they interact with, the organizations. The interesting thing about ABA is that it drives us to experimental control for attempts to change behaviour. The implications would certainly force us into finding ways to verify if our controls can really induce behaviour change. That's one of the key issues we have in our field. If the attackers are behaving "accordingly" (i.e. not performing successful attacks), is that due to our attempts to change their behaviour or because of other external stimuli? One of Richard Bejtlich's favorite ideas, continuous testing by a "red team", seems to be a good way to assess if the stimuli we are generating are really successful in causing behaviour change. Certainly a lot of food for thought. What kind of behaviour change do we want to produce, and how can we test if the stimuli we generate are appropriate for that?

Tuesday, September 28, 2010

Tokenization as a service

I mentioned a few months ago in a previous post that there was an opportunity for tokenization to be offered as a service by big players (I mentioned Visa at that time). Well, it turns out that it's finally coming. Akamai is offering it, and it makes complete sense.

Monday, September 27, 2010

BP spill a Black Swan?

It's really old news, in fact the well is finally closed at this time, but it's interesting to follow the discussion about the BP spill and whether it could be considered a "Black Swan". Alex is right to complain about the abuse of the concept. But I like to point to another aspect. People will usually relate the Black Swan concept only to the probability of the event occurring. A very important aspect of those events is the impact, in fact, the higher than expected impact. Like Alex, I also believe that BP was aware of the chances of a spill occurring at Deepwater. But did they expect the results of the spill? The billions in losses? What I think makes the spill a good example of a Black Swan is the fact that the consequences were far higher than expected. And this aspect generates even more interesting considerations for our infosec discussions. Most of the time spent in risk assessments goes to the likelihood of an incident. I don't know why the impact aspect does not get the same amount of thought... maybe a careful consideration of the outcome of an incident would be seen as FUD? "Are you saying that an accident in a single platform can cause billions of losses for the company? C'mon! That's FUD!!" The Black Swan card is often played as an excuse when the likelihood of an event was underestimated. Even if sometimes that's true, we should also see it as an indication of lack of resilience, a single incident causing catastrophic results. As someone said one of these days on Twitter, the "failed miserably" expression is too common now. So, instead of trying to reduce the likelihood of those failures, what about working to make them less miserable?

Not using "Risk Management" doesn't mean "no decision making"

I found an old bookmark in my "to blog" folder related to a New School post from David Mortman, "Decision Making Not Analysis Paralysis". I am one of those with second thoughts about our risk management tools. If you're still confident that you can use risk assessments as the basis of security decisions, I suggest reading "The Drunkard's Walk", by Leonard Mlodinow. I cannot say that I'm 100% sure that risk management is useless. I just don't feel confident that it gives us what we need during the decision making process. So, David says that executives usually have only a small fraction of the information about an issue when deciding. He also says "Personally, I like to have some data based rationale for how those decisions get made". The point here is that we, the risk management skeptics, are not arguing against decision making. We are arguing against the illusion of a "data based rationale". If you are deciding something based on 10% of the overall data, that's not a lot more than a gut feeling decision. There's the even more negative aspect of believing the decision is fact based when it's just slightly more than guessing. So, let's not throw the baby out with the bathwater. Decision making is crucial. What I expect is a method to do it better, without the illusion that it's a fact based rational decision. At this time I don't see risk management as that method.

Thursday, September 23, 2010

Twitter Updates for 2010-09-23

  • Do we really need a firewall on our desktops? / Don't go that way. Every device should be able to defend itself. #

  • unless, of course, you have a totally down-to-port-level-easily-manageable network. Then, maybe. Just maybe. #

Powered by Twitter Tools

Facebook and security economics

Studies about security economics are always interesting, and Ross Anderson is probably the biggest name in that field. He just wrote a small but very nice piece for the New York Times about Facebook. You can read it here. I love this part: "Finally, Facebook might lock in its users even more tightly than Microsoft. People want to use the sites their friends use. As one of my students put it, "All the party invitations in Cambridge come through Facebook. If you don't use Facebook you don't get to any parties, so you'll never meet any girls, you won't have any kids and your genes will die out."

Wednesday, September 22, 2010

Twitter Updates for 2010-09-22

  • Alright, and here we are testing the tweets at the blog sidebar ;-) #

  • Very nice piece from Cory Doctorow "Promoting statistical literacy" - Useful not only for security, but for life #

Powered by Twitter Tools

Saturday, September 11, 2010


Funny how sometimes, due to information overload, we just miss very interesting stuff being released. Today I was reading an article from the Microsoft Security Research & Defense blog about how to mitigate the new Adobe exploit with a tool called EMET. WTF?!?! I was amazed when I read what EMET is and the idea behind it:

Enhanced Mitigation Experience Toolkit

"For those who may be unfamiliar with the tool, EMET provides users with the ability to deploy security mitigation technologies to arbitrary applications. This helps prevent vulnerabilities in those applications (especially line of business and 3rd party apps) from successfully being exploited. By deploying these mitigation technologies on legacy products, the tool can also help customers manage risk while they are in the process of transitioning over to modern, more secure products. In addition, it makes it easy for customers to test mitigations against any software and provide feedback on their experience to the vendor."

Microsoft has developed a series of defenses against the most common code execution methods used in exploits, such as DEP and ASLR. However, some of those defenses require that software be recompiled with new compatible compilers. It seems that some pieces (DLLs) of Adobe Reader still haven't been recompiled to use ASLR, keeping some doors open to the exploit writers. So, EMET can be used to force ASLR onto that software even if it was not prepared for it. Of course it can't be deployed by default on everything, as there's a small chance of breaking stuff, but it is a nice tool for those who want to add some protection while accepting an eventual issue here and there.
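Since the gap here is individual DLLs that never opted into ASLR, one way to spot them is to check the DYNAMIC_BASE bit in the DllCharacteristics field of the PE optional header. A minimal pure-Python sketch (field offsets per the PE/COFF layout; error handling and edge cases omitted):

```python
import struct

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # the ASLR opt-in flag

def has_aslr(pe_bytes):
    """Return True if the PE image was linked with /DYNAMICBASE (ASLR)."""
    # e_lfanew (file offset of the PE header) lives at 0x3C in the DOS header
    pe_offset = struct.unpack_from("<I", pe_bytes, 0x3C)[0]
    # Skip the "PE\0\0" signature (4 bytes) and the COFF header (20 bytes);
    # DllCharacteristics sits at offset 70 of the optional header for both
    # PE32 and PE32+ layouts.
    opt_header = pe_offset + 4 + 20
    dll_chars = struct.unpack_from("<H", pe_bytes, opt_header + 70)[0]
    return bool(dll_chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)
```

Running something like this over the DLLs in an application's install directory would flag the candidates for forcing ASLR via EMET.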

A next step for Microsoft could be an automatic assessment on software installation to verify if EMET is necessary and, if so, keep track of what is using it so users can try to disable it when an error occurs. That would be almost transparent while adding a good amount of security.

Along the same lines, FX announced at Defcon a great tool to add a layer of protection for Flash files, Blitzableiter. Take a look at that one too; it can be integrated with Firefox and NoScript, a pretty nice approach.

Wednesday, September 8, 2010

Does anyone still think about honeytokens?

Honeypot technologies are always relegated to second place or to experimental environments only. However, I was reading about the most common attacks in the Verizon DBIR report: malware stealing data, memory scrapers, etc. All automated stuff searching for "valuable" data! This is exactly the kind of threat that can be easily identified by honeytokens. And it doesn't have to be extremely complicated. A quick and dirty solution that could help a lot:

  1. Create a text file with a bunch of (10? 100?) fake credit card numbers, all of them with, let's say, the same first 10 digits. There are thousands of credit card number generators out there that can do it. Distribute the file using your regular software distribution tool to all your desktops.

  2. Install a custom signature in your perimeter IDSes searching for those initial numbers.

  3. Run a job periodically (monthly? weekly? daily?) with something like "cat file > /dev/null", which will be enough to bring the contents of the file into memory. Something that could keep the contents in memory for a couple of hours would be best.

  4. Monitor for anything triggering that signature. If anything hits it there is a high chance you have malware like those mentioned in the report running on your desktops.
I know it is very targeted to a specific type of malware, but as it looks like this type of malware is responsible for the majority of the incidents and records in the report, it might be worth the (small) effort.
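Step 1 of the list above takes only a few lines to automate. Here is a sketch that generates Luhn-valid fake card numbers sharing a fixed 10-digit prefix (the prefix below is made up; pick one that cannot collide with real cards you process):

```python
import random

PREFIX = "4000001234"  # made-up 10-digit honeytoken prefix, illustrative only

def luhn_check_digit(partial):
    """Compute the Luhn check digit for a partial card number."""
    total = 0
    # Walk the digits from the right; double every second one,
    # subtracting 9 when the doubled value exceeds 9
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def fake_card(prefix=PREFIX):
    """Return a 16-digit, Luhn-valid fake card number with our prefix."""
    partial = prefix + "".join(str(random.randint(0, 9)) for _ in range(5))
    return partial + str(luhn_check_digit(partial))
```

For step 2, the IDS signature then only needs a content match on the shared prefix string in outbound traffic (e.g., a Snort rule matching on "4000001234").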

Exceptions can taint assumptions

This phrase suddenly popped into my head when I heard someone mention as an assumption something that I knew wasn't in place. That's something that happens quite often with security controls. Someone decides something like "no unapproved software will be allowed to run on the corporate desktops", and from that moment it stops being a goal and starts to be an assumption used to remove threat vectors. That's a very good example of threat modelling and risk assessment going wrong: "we don't need to worry about this threat because non-authorized software does not run on our desktops". They often forget about those dozen exceptions filed and approved the week after that rule was instated. So, next time you hear someone mention a control as an assumption to disregard a threat vector, consider the exceptions to that control. How many exceptions are necessary to invalidate that assumption?

Create and Share Labs!

This is a very nice alternative to full Amazon EC2 prices! If you need to bring up some VMs for testing, try LabSlice. It's a very nice way to get VMs up for a good price when you need them only momentarily.

Friday, August 27, 2010

New Role

This blog has been quite silent lately, as I haven't been finding anything interesting to write about. Even the Verizon report, there's certainly interesting stuff there, but so many people have talked about it that I don't feel compelled to do it too. Anyway, there's at least one thing to mention. I've just changed to a new role at my job. This week I started as a "Security Architect". I believe it will be a very interesting job, as I was getting a little tired of having to deal with project implementation details. I really like to work on roadmaps and long term planning for security services, and that's exactly what I'll be doing now. I hope my day job can now bring me new ideas about things to write here. Let's see :-)

Thursday, August 12, 2010

The big FAIL of log analysis

I was trying to find words to add to this post from Anton Chuvakin about the current state of log analysis, prompted by the numbers in the latest Verizon report. I simply can't find anything to add. He's dead right about everything. If you are interested in log analysis / log management, that's something to read and think (AND DO SOMETHING) about.

Thursday, August 5, 2010

Razorback and IF-MAP?

I was reading about the new framework from SourceFire, Razorback, and I realized it has a lot of similarities with TCG's IF-MAP. There are a lot of vendors promising things that go beyond the simple correlation so common in SIEM tools. It is a move from CORRELATION to COOPERATION between security tools. That's awesome. Instead of having several tools waiting to receive data from different places, we need a security metadata bus that can be used by the other tools. That way, a lot of things that make security hard to do will become far easier. Firewall rules won't be "from this IP to that IP using TCP 4567" anymore, but "users from Finance going to the Finance App". We can build blocking and response rules using definitions such as "users infected with malware", "servers containing sensitive information", and far more interesting stuff. What's most important is to have those things following standards, in a way that the infrastructure becomes less important, making it easier to apply security regardless of whether things are running in your data center or in the cloud. But, again, only if initiatives like Razorback start working with standards like IF-MAP...
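To make the correlation-to-cooperation idea concrete, here is a hypothetical sketch of what a rule evaluated against a shared metadata bus could look like, with identity and classification facts published by other tools. All the names and the structure are illustrative; IF-MAP itself is an XML-based publish/subscribe protocol, and this only captures the concept:

```python
# Hypothetical metadata bus: other tools publish facts about hosts,
# and the enforcement point queries them instead of raw IPs/ports.
METADATA = {
    "10.1.1.23": {"user-group": "finance", "infected": False},
    "10.1.1.99": {"user-group": "engineering", "infected": True},
    "10.2.0.5":  {"role": "finance-app", "sensitive-data": True},
}

def allow(src_ip, dst_ip):
    """'Users from Finance going to the Finance App', minus infected hosts."""
    src = METADATA.get(src_ip, {})
    dst = METADATA.get(dst_ip, {})
    if src.get("infected"):
        return False  # response rule: block anything from infected hosts
    return src.get("user-group") == "finance" and dst.get("role") == "finance-app"
```

The point of the sketch is that the rule stays stable while the metadata changes underneath it: when the AV console publishes "infected" for a host, the same rule starts blocking it with no firewall change.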

Tuesday, July 27, 2010

Heading to Las Vegas

Here I am going to Las Vegas for Black Hat and DefCON again! It's funny that this time I have much lower expectations for the event. My feeling from the latest news in the field is that it's too much of a 0-day-of-the-week and buzzword contest (APT/Cloud). Anyway, it's always the place to be when talking about information security, and I hope to be proven wrong. It will also be a great opportunity to meet friends and colleagues. If you are there, please feel free to drop a tweet (@apbarros), I'll be tweeting live there. And let's see that B-Sides stuff. Honestly, from what I've seen of the past editions and the current schedule, it may become the A-Side quite soon.

Thursday, July 22, 2010

SCADA worm!

As everybody in the field had predicted, malware targeting SCADA systems has finally arrived. The lucky thing is that this one is only looking for information to steal, not actually doing anything. I wonder what the outcome could have been if this nasty little thing had been designed to force systems to fail.

SCADA systems are one of the most critical blind spots in organizations today. Few people have access to them and know how they work, so there is a false perception of security about them. Specialized systems, such as SCADA and ATMs, often rely on obscurity as their main security strategy. It's not even something done intentionally, but the result of a never-ending vicious cycle. Internal security resources don't know about security on those systems, and the specialists in that technology don't understand security. You can think about hiring external consultants to check the systems, but the consultants also don't have much contact with that technology. Of course they won't tell you that; they will run their off-the-shelf tools anyway. The results will tell you nothing, which you will interpret as "secure", perpetuating the notion that there are no security issues with that technology. As there are no security concerns there, the security team won't spend time learning that technology, and the specialists will keep saying that this security thing is for those Internet-web-2.0-cloud-stuff guys. Until the next Black Hat briefings or sexy malware.

I wonder when this is going to hit the old mainframe. I must say it will be fun to watch.

Thursday, July 15, 2010

Visa push for truncation and tokenization

It's good to see that Visa is putting additional pressure on merchants for truncation and tokenization of card numbers. However, "PCI DSS solutions" in general cost money that merchants and service providers don't want to spend. They make sense from a technical point of view, but they incur costs that eventually drive those organizations away from them.


Now, just food for thought: what if the card brands (Visa, Mastercard, Amex) started to offer tokenization services in a cloud-based way? The merchant could just use the service to get tokens directly from Visa, who would be responsible for storing the real numbers and providing merchant-specific tokens through a web service. The concerns related to hosting that data with a third party wouldn't be relevant in this scenario, as the brand already has all those numbers anyway. The brands also have their networks already in place, which could also be used for "token request" transactions by organizations that have big pipes and gateways to those networks and don't want to create a dependency between their highly available payment systems and their Internet connection.


Visa could also use it for additional fraud prevention services (although that could raise privacy issues), by correlating the last request for a specific number with fraudulent payment authorizations using that card. It would also remove the operating and technology support costs of the tokenization solution from the end-user organizations, making it more attractive to implement.
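The core of such a brand-side service can be reduced to a vault that maps merchant-specific tokens back to PANs. A deliberately simplified sketch (all names hypothetical; a real service would authenticate the merchant, encrypt the vault, use format-preserving tokens and much more):

```python
# Hypothetical brand-side tokenization vault, reduced to its essence.
import secrets

class TokenVault:
    """Maps (merchant, token) pairs back to PANs, so each merchant
    gets its own tokens and one merchant's tokens are useless to another."""

    def __init__(self):
        self._by_token = {}  # (merchant_id, token) -> pan
        self._by_pan = {}    # (merchant_id, pan) -> token

    def tokenize(self, merchant_id, pan):
        # The same PAN always yields the same token for a given merchant,
        # so the merchant can use tokens as stable customer identifiers
        key = (merchant_id, pan)
        if key not in self._by_pan:
            token = secrets.token_hex(8)  # random, carries no card data
            self._by_pan[key] = token
            self._by_token[(merchant_id, token)] = pan
        return self._by_pan[key]

    def detokenize(self, merchant_id, token):
        # Only the brand side ever performs this lookup
        return self._by_token[(merchant_id, token)]
```

A merchant would call the tokenize operation over a web service at payment time and store only the returned token, never the PAN.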


What do you think of it? Does it fly? 

Friday, July 2, 2010

Cryptography and the wrong problems

I was reading Schneier's blog today as he re-posted an old text he published on Dark Reading back in 2006, about cryptography usage. It's interesting how an article from four years ago is still very relevant. I've been seeing some cases where people consider encryption the most appropriate control to implement, when access control is really the key:

"Much of the Internet's infrastructure happens automatically, without human intervention. This means that any encryption keys need to reside in software on the network, making them vulnerable to attack. In many cases, the databases are queried so often that they are simply left in plaintext, because doing otherwise would cause significant performance degradation. Real security in these contexts comes from traditional computer security techniques, not from cryptography."

Those cases show how frequently controls are implemented in a checklist-based approach, without any attempt to do a threat-based assessment first. As Einstein once said, "things should be made as simple as possible, but not any simpler". Although I am among those who think that PCI DSS is a step in the right direction, there are clear misconceptions that come from the heavy push towards encryption in that standard. Applying the wrong control for a threat is as bad as an inefficient or non-existent control, or even worse, due to the false sense of security, added complexity and cost. I'm sure that checklists can help us with the most basic stuff, but when we start touching things such as database encryption, I don't believe we can apply a checklist-based approach.

Friday, May 14, 2010

Tips for auditors

I almost let this awesome post from the SANS blog pass without saying anything here. It has 10 tips for IT auditors, and in my opinion it nails down the key issues I generally have with auditors. Some of the best pieces:

  • Trying to find everything is often a mistake
  • Auditing is never about catching people doing things wrong
  • The primary role of an auditor is to measure and report on risk to the business and business objectives

I really like the last one. It's perfect for reminding those auditors who work with that checklist mindset and don't understand that a non-ticked box doesn't necessarily translate into risk to the business or its objectives. If they could take only one of these tips with them, this is the most important one. The job of security professionals would be much easier if we could work with auditors who understand that.

Wednesday, April 21, 2010

Brazil and the appetite for private data from Google

Interesting piece from The Register regarding the new tool published by Google that shows how many requests they receive from governments to access and/or remove private data from their systems. According to the tool, the Brazilian government is one of the top requestors.

But there is an issue with The Register's article that I immediately noticed, and that is even confirmed by Google. Orkut, Google's first social network attempt, is extremely popular in Brazil, much more so than Facebook. So the numbers from the Brazilian authorities would more likely be comparable to the combined Google AND Facebook requests in other countries. I bet the difference is not that big, or doesn't exist at all, if the numbers are compared that way. The confirmation, from Google's FAQ:

"For Brazil and India, government requests for content removal are high relative to other countries in part because of the popularity of our social networking website, orkut. The majority of the Brazilian and Indian requests for removal of content from orkut relate to alleged impersonation or defamation."

Monday, April 12, 2010


An interesting discussion has been produced by the blog post from HD Moore on the value of learning assembly for penetration testing. It was intensively discussed on the cisspbr forum, but mostly for other reasons.

As HD said, almost all additional knowledge is useful. I agree with that, but I think we should differentiate between "valuable" and "required". I know people who performed very good penetration tests without executing a single exploit. For them, assembly skills were not necessary at all. But to properly answer that question it is also important to define the real goal of a pentest.

First, it is important to differentiate between risk and vulnerability assessments, pentests and vulnerability research. The assessments are usually performed from a "white box" perspective, with total collaboration of members of the organization to identify vulnerabilities (and risks). If the organization wants a more complete study of its own security issues, that's the way to go. At the other extreme is vulnerability research. In that kind of job you look for vulnerabilities that are not yet known (or not publicly disclosed) by the security community. It can be done with either a white box or a black box approach. Usually the organizations that benefit from vulnerability research are those that create technology products (hardware and software), not those that buy them.

Pentests, in my opinion, are a mix of these two. As a pentest tries to reproduce the situation of real attacks, it normally uses a black box approach. The organization being tested cannot expect a complete assessment of its vulnerabilities, as some of those will be masked by controls from different layers. The pentest will be useful to validate the security approach of the organization and confirm that the combination of controls works to prevent compromises. It may use some vulnerability research on custom applications, but usually it won't benefit from researching vulnerabilities in COTS products such as operating systems and routers.

Probably one of the reasons why the role of each of those activities is so often misunderstood is that pentests are marketed differently by the service providers. Executives always have the impression that someone trying to hack into their networks is the best way to find issues to be fixed. The vendors use that perception to sell their services, and the misunderstanding goes on forever. A good way to break this cycle would be to standardize pentest delivery.

When you buy a physical safe, for example, you can refer to the UL certification classes. A TL-15 class safe will "resist abuse for 15 minutes from tools such as hand tools, picking tools, mechanical or electric tools, grinding points, carbide drills and devices that apply pressure". What if you could hire a pentest and get a similar classification for your test scope (your external perimeter, for example)? Time would not be the only component; the attack techniques applied during the test would be added to it. Those techniques can be lined up in terms of complexity, cost and prerequisites, and in the end you could get a result saying your network was able to "resist attacks up to class X techniques". Vendors could sell their pentests at different prices and according to their competence level ("we offer pentests up to class Y techniques"), so the services (and the results) could be properly compared. It would even give more space to those who want to add vulnerability research to their pentests, as this would probably be one of the highest test levels to be tried against a network.
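The classification idea can be sketched in a few lines. The class names and their ordering below are purely illustrative assumptions, just to show how a UL-style rating could be derived from test results:

```python
# Hypothetical technique classes, ordered by increasing complexity and cost.
TECHNIQUE_CLASSES = [
    ("class-1", "automated scanning and public exploits"),
    ("class-2", "manual exploitation of known vulnerabilities"),
    ("class-3", "custom exploits against in-house applications"),
    ("class-4", "vulnerability research against COTS products"),
]

def resistance_rating(results):
    """results maps a class name to True if the scope resisted that class.

    Returns the highest class the scope withstood, stopping at the first
    class that led to a compromise -- analogous to a TL-15 safe rating.
    """
    rating = None
    for name, _desc in TECHNIQUE_CLASSES:
        if results.get(name):
            rating = name
        else:
            break
    return rating

# A perimeter that stopped scanning and manual exploitation,
# but fell to a custom exploit:
print(resistance_rating({"class-1": True, "class-2": True, "class-3": False}))
# -> class-2
```

The interesting part is that two vendors testing the same scope against the same class list would produce directly comparable results.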

Wednesday, March 31, 2010

Exploiting PDFs

This PoC from Didier Stevens clearly shows how stupid it is to allow PDFs to start new processes. We'll end up creating bloated monsters like the current browsers to deal with these files. Can someone please "strip down" the PDF format to something that makes sense again?

I wonder what happened to "pure data" formats. Most of what people need to do with scripting in PDF files could be done with a slightly smarter reader and more metadata (adding a form field type such as "date_validated" instead of creating a script to validate the date, or "text_uppercase" instead of using scripts to change the content to upper case).
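To illustrate the idea, here is a sketch of how a smarter reader could interpret declarative field metadata instead of executing scripts embedded in the file. The field names and metadata keys are made up for the example; nothing here is part of the actual PDF specification:

```python
import datetime

# Hypothetical declarative field metadata a "pure data" document format
# could carry, replacing embedded validation scripts.
FORM_FIELDS = {
    "invoice_date": {"type": "date", "format": "%Y-%m-%d"},
    "customer_id":  {"type": "text", "transform": "uppercase"},
}

def apply_field(name, raw):
    """The reader, not a script inside the file, interprets the metadata."""
    spec = FORM_FIELDS[name]
    if spec["type"] == "date":
        # Raises ValueError on a bad value: validation with zero
        # executable content in the document itself.
        datetime.datetime.strptime(raw, spec["format"])
        return raw
    if spec.get("transform") == "uppercase":
        return raw.upper()
    return raw

print(apply_field("customer_id", "ab-123"))  # -> AB-123
```

The point is that all behaviour lives in the (audited, sandboxable) reader, while the document stays pure data.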

Wednesday, March 3, 2010

The new school and black swans

I'm currently re-reading "The Black Swan", by Nassim Taleb, at a moment when most information security planning and decision-making techniques look like plain bullshit to me. So my mood for accepting absolute truths in this field is becoming even worse than before.

I was reading a post from the "New School of Information Security" blog, which, by the way, is very good. However, there is something from this "new school of thought" that I really have a problem accepting: the idea of measuring the effectiveness of security controls. The post I was referring to includes an example of new techniques to measure and predict the effectiveness of baseball players.

Take, for instance, an affirmation like "80 percent of the league couldn't have made that catch". Thinking of the nice work from Nassim Taleb, people's (and so outfielders') physical attributes are usually only slightly different. Checking the past performance of league outfielders should not give you enough information to say something like that, especially considering the interval between the games and the constant training of the athletes. It's too much conclusion based on past data that doesn't have a direct causal relation with the event you are trying to predict.

That is also common in security. With the speed of change and complexity of IT systems, and the constant changes in user behaviour driven by those new systems (social networks?), it is extremely hard to produce a decent forecast of future events based on past data. Why would all the data about the exploitation of OS and web server vulnerabilities from the past decade be useful to determine exploitation trends of browser vulnerabilities or XSS on social network websites?

We should be a little more skeptical about our ability to forecast events, especially security incidents. The great "new school" I'm waiting to see rise is how to protect our data without relying on magic numbers and formulas. That would be innovation.

Thursday, February 25, 2010

MitB attacks still haven't reached full potential yet

I'm surprised that most MitB attacks are still just stealing credentials instead of changing transaction contents on the fly. I can see that credentials have an intrinsic value on the "black market", but the attack model of stealing credentials and then using them to log into the victim's account to perform transactions seems too complex to me. Once in the browser, the malware can just change the transaction being performed by the victim, in a way that all the traces (such as IP addresses) would point to his/her computer and not the attacker's. There's also no need to transfer the stolen data from one place to another, which further reduces the places where the attacker leaves tracks. I can see two reasons why they are still not doing that:

  • The malware developers are not closely related to the "money criminals" - They are building software to be used by different "clients", and the best way to implement that portability is to sell credentials only.

  • Stealing credentials just works, can be used multiple times, and people understand the model.
If any of those conditions change, more sophisticated versions of the attack will probably start to be detected too. For now, it is important to note that fighting the "stolen credentials" threat doesn't necessarily mean you are also solving the MitB threat. For that, transaction authentication is necessary.
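A rough sketch of what transaction authentication means here: a MAC computed over the transaction details on something the browser malware can't touch (a dedicated token that displays the payee and amount it signs, for instance). The key handling and message format below are simplified assumptions:

```python
import hmac
import hashlib

def transaction_mac(key: bytes, payee: str, amount: str) -> str:
    """MAC over the transaction details the user actually confirmed,
    computed on a device outside the reach of browser malware."""
    msg = f"{payee}|{amount}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

key = b"shared-secret-provisioned-out-of-band"

# What the user intended and confirmed on the external device:
mac = transaction_mac(key, "ACME Corp", "100.00")

# MitB malware rewrites the payee inside the browser; the bank
# recomputes the MAC over what it received and rejects the mismatch.
tampered = transaction_mac(key, "Mule Account", "100.00")
assert mac != tampered
```

Authenticating only the session (a login OTP) does nothing against this attack; the MAC has to cover the transaction content itself.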

Very nice tool for pentests

I don't hide it from anybody: when doing pentests, my favorite approach was to simply browse information in open shares until I could find some user credentials there (yes, in big organizations, they are always there: scripts, source code, ini files...). With those in hand, I'd see what else I was able to access, and repeat the process until the whole network was owned. No big hack or exploit here, just basic "low hanging fruit detection".

I just noticed a tool that makes that process thousands of times easier: keimpx. The description, from Darknet:

keimpx is an open source tool, released under a modified version of Apache License 1.1. It can be used to quickly check for the usefulness of credentials across a network over SMB. Credentials can be:

  • Combination of user / plain-text password.

  • Combination of user / NTLM hash.

  • Combination of user / NTLM logon session token.
If any valid credentials have been discovered across the network after the attack phase, the user is asked to choose which host to connect to and which valid credentials to use, then he will be prompted with an interactive SMB shell where the user can:

  • Spawn an interactive command prompt.

  • Navigate through the remote SMB shares: list, upload, download files, create, remove files, etc.

  • Deploy and undeploy his own service, for instance, a backdoor listening on a TCP port for incoming connections.

  • List users details, domains and password policy.
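The core loop keimpx automates looks roughly like the sketch below. This is a network-free, hypothetical version with the actual SMB authentication replaced by a pluggable stub; the real tool performs the login over SMB with passwords, NTLM hashes or logon tokens:

```python
from itertools import product

def spray(hosts, credentials, try_login):
    """Try every credential pair against every host; return the valid hits.

    `try_login(host, user, secret)` is a stand-in for a real SMB
    authentication attempt.
    """
    hits = []
    for host, (user, secret) in product(hosts, credentials):
        if try_login(host, user, secret):
            hits.append((host, user, secret))
    return hits

# Stubbed check so the sketch runs without a network:
valid = {("10.0.0.5", "svc_backup", "Winter2010!")}
fake_login = lambda h, u, s: (h, u, s) in valid

print(spray(["10.0.0.5", "10.0.0.9"],
            [("svc_backup", "Winter2010!"), ("admin", "admin")],
            fake_login))
# -> [('10.0.0.5', 'svc_backup', 'Winter2010!')]
```

Which is exactly why a single service account password sitting in an ini file on an open share can turn into a full network compromise.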

Wednesday, February 24, 2010

Sure, it is THAT easy!

Two posts in a day... I'm probably sick or something like that :-)

I was reading an interesting article by Bill Brenner on CSO Online, "Five Security Missteps Made in the Name of Compliance". Although I don't disagree with what is listed as missteps (in fact I think they are quite correct), something in the last paragraph caught my eye:

"The best advice against all these missteps, experts said, is to simply slow down and take careful stock of where the company's greatest risks are. From there, companies need to take careful study of the security tools available to them and figure out before buying them if compatibility with the rest of the network will be an issue."

Sure, it is THAT easy! Honestly, he just listed some of the hardest things to do in security. OK, he is not saying that it's easy, but c'mon! Can you really say that in your business environment you have the option to "simply slow down"? I would love to, but that's not always possible. Just like checking "where the company's greatest risks are". This one is huge. And I must say that my perception of organization-wide risk assessments is ETI: Expensive, Time consuming and Ineffective. So you'll have an idea of where those big risks are coming from, not a "careful stock" of them. There's too much uncertainty out there, and it's better to live knowing that there's a lot you don't know than to die trying to figure it all out.

You can conduct careful studies of the tools available, but the "corporate truth" is that on a lot of occasions you will simply work to deploy something that someone else bought, or will have to deal with things that are not best of breed because they were part of a bigger deal/suite or simply cheaper. Finally, on checking compatibility with your network before buying: you'll only succeed 100% at that if you run a PoC in your entire environment... I mean, almost never. You'll have to deal with surprises during the implementation. Yes, you can avoid buying Unix stuff to run on Windows boxes, but in big organizations the number of combinations of hardware, OS, middleware, applications AND bizarre settings is incredibly high. Be prepared to deal with those surprises.

The point is, Bill is right about the mistakes, but I think he is too optimistic about how to prevent them. Some of them are simply the price we pay for working in this crazy field. Looking back they will look like mistakes, but most of the time we simply cannot do anything better. As I like to say, "it's acceptable to do stupid things, as long as it is not for stupid reasons".

Tuesday, February 23, 2010

Log management implementation details

OK, I'm trying to get out of a long hiatus of producing content by putting together a presentation about log management: the devil is in the details. I have been working on log management projects for some years now, and I've managed to assemble a nice list of the small issues you find in those projects that are normally responsible for 80% of the headaches. As I say in the presentation, these are things the vendors simply don't know how to solve, so they never talk about them :-) Some of the things I'm including:

  • Windows log collection: the options, the issues with them

  • Credentials (user IDs) management when doing file transfers and connection to DBs

  • Systems inventory (who are my log sources?)

  • Privileges needed to collect logs (DBA rights to get logs???)

  • Purging logs from the sources (who's gonna do it?)

  • and some other stuff
So, if you have interesting experience implementing log management systems, please let me know about those "details" you found during the process that caused you problems. It will be interesting to talk about the subject without going into the old "performance / parsing / reporting" discussions; most vendors have figured out how to solve those problems. I want to talk about the small things that hurt and still haven't been solved. I hope to get this ready for a TASK meeting or something like that. If I get enough feedback and input, it may grow into a SecTor or similar submission.
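As an example of the "systems inventory" pain point, even a trivial reconciliation between the asset list and the hosts actually delivering logs catches the two classic failure modes. A hypothetical sketch, not tied to any product:

```python
def silent_sources(inventory, seen_recently):
    """Flag gaps between the asset list and actual log delivery.

    `inventory` is the set of hosts your asset list says should be
    sending logs; `seen_recently` is the set of hosts the log platform
    received events from in the last collection window.
    """
    missing = inventory - seen_recently  # sources that went quiet
    unknown = seen_recently - inventory  # logs from unlisted hosts
    return missing, unknown

cmdb = {"web01", "db01", "dc01"}
logged = {"web01", "dc01", "lab-vm-7"}
print(silent_sources(cmdb, logged))
# -> ({'db01'}, {'lab-vm-7'})
```

The hard part, of course, is keeping `inventory` accurate in the first place, which is exactly why "who are my log sources?" made the list.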

Saturday, January 30, 2010

Theory != reality in Infosec too

I was reading a nice post from Gunnar Peterson about APTs. He's making the point that everybody is excited about this "oh huge threat oh oh" stuff from the Google x China incident, but in fact we should be worried about properly engineering the systems we depend on. I like his analogy of blaming the big bad wolf instead of the house of straw.

But you know what? I think that my current depressed state has changed my way of thinking about security (or changing my way of thinking about security is making me depressed...). I agree with him that the source of the problems is bad security deep in the systems we rely on today, bad (or no) security design in general. But I just think this is a problem we cannot solve. We can see the same issue in several other disciplines: old designs and decisions being perpetuated in a way that causes issues for current stuff. However, revolutionary approaches are not (or are almost never) possible due to the way the economy and society work. Technology evolution is also so fast that it would require too many revolutionary processes to solve the recurring problem of old decisions, based on premises no longer valid, causing problems in the current state. We simply cannot afford to burn everything to the ground and start fresh. All these things are competing for resources, and it would be naive to believe we could just choose to build everything with the perfect design.

Gunnar uses the example of the Chicago reconstruction after the great fire. I think it is a great example, but it doesn't exactly fit his intention. It shows that once something out of your control happens and puts everything to the ground, you have the choice to start fresh with a better design. Now, how many times have you had the opportunity to start something from scratch in IT? Hey, wouldn't it be nice to build an OS with no backward compatibility concerns? Ask Microsoft if they don't dream of that every night! :-)

Gunnar is asking for something right that is just not practical. Maybe I'm being too cynical and conformist, and I believe we need people who push us to take those revolutionary roads, but when someone does, that is the exception and not the norm. Those who are dealing with real-life issues need to be pragmatic. Yes, we need to protect our straw houses.

What I think is more important from Gunnar's post is this line: "The boring stuff is what's important". That's different from trying to re-design everything. There's lots of boring stuff we need to do to protect the straw house :-) My first and main example is access control. IMHO there isn't anything more boring in infosec than access control: access reviews, entitlement reporting, fire IDs, privileged accounts tracking... wow, those things kill me. But I must say that doing those things properly will probably reduce a lot more risk than buying the latest pretty-pizza-box-with-blinking-lights. The problem will be finding smart people who enjoy that enough to do it properly. Today's biggest challenge in information security is finding smart people willing to work on the boring stuff.

That's the last line of my "back to blogging" post. Wow, I've just noticed how much I missed doing this. OK, I'm back :-)

Saturday, January 16, 2010


This is an information security blog, but it's also an opportunity to talk about an important cause. Please take some time to donate (even one dollar) to the victims of the earthquake in Haiti:

RED CROSS: www.redcross.ca
WORLD VISION CANADA: www.worldvision.ca
UNICEF: www.unicef.ca
SALVATION ARMY: www.salvationarmy.ca
MÉDECINS SANS FRONTIÈRES: