Friday, November 20, 2009
Security decision making
Dear security friends,
I've been planning for a long time to work on a paper/presentation about security decision making. I was planning to talk with different security professionals to hear about how their decision making process works and where it can be improved. But I've just realized that Google Wave is the perfect tool for a collaboration job like that. I will, of course, provide the proper credits to anyone who contributes. :-)
Well, some classification and taxonomy first. I think we could try to break decision making down into:
- Scope: it can range from a single application to a whole organization. I'm quite sure that the process changes from one to the other, so it makes sense to consider it.
- Type of decision: what is the goal of the decision? The most common are:
- Trade-offs: the famous control vs. productivity impact
- Cost: should I take the risk or pay to reduce/eliminate it
- Control Prioritization: among all those security controls, which one should I implement first?
- Risk prioritization: among all those risks, which one should I tackle first?
- Security optimization: considering all the resources available, how to deploy them in a way that maximizes security (minimizes risk)
- Risk measurement: going through the vanilla process of measuring exposure, impact, threat level and likelihood, and getting the resulting risk.
- Quantitative: ROSI
- Benchmarking: comparing what others are doing under similar situations
- Regulatory/compliance: doing because it is required
- Metric based: this triggers the whole discussion about security metrics: what should be measured, how, and what the desirable values are.
- There are several issues with the risk assessment methodologies. I don't like the feeling of "educated guess" from the qualitative assessments, and there are a lot of conceptual failures on the ROSI side. Also, the data available is not good enough to generate good impact and likelihood numbers. Some researchers believe we should create new models to avoid these pitfalls.
- Prescriptive standards: apply more prescriptive regulations, such as PCI DSS, to reduce the "interpretation" issues of more flexible frameworks and methodologies.
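To make the "vanilla" risk measurement concrete, here is a minimal sketch of the qualitative calculation described above. The 1-5 scales, the risk = impact x likelihood formula, the traffic-light thresholds and the example risks are all illustrative assumptions, not any particular methodology:

```python
def risk_score(impact, likelihood):
    """Combine 1-5 impact and likelihood ratings into a 1-25 score."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("ratings must be on the 1-5 scale")
    return impact * likelihood

def risk_label(score):
    """Map a score to traffic-light buckets (thresholds are arbitrary)."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# The "risk prioritization" decision type: rank hypothetical risks by score
risks = {
    "unpatched internet-facing server": (4, 4),
    "shared admin passwords": (3, 5),
    "laptop theft": (3, 2),
}
ranked = sorted(risks.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, (i, l) in ranked:
    print(name, risk_score(i, l), risk_label(risk_score(i, l)))
```

The "educated guess" complaint above applies directly here: the whole output hinges on those subjective 1-5 ratings.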
So, I'll add people that I think will bring value to this discussion. Please feel free to expand the wave. Let's see where it takes us.
(Also, I don't know how to invite some people that I know are testing Wave but whom I'm not seeing in my contact list... how do I do it?)
Some interesting references to consider/read about this subject:
Wednesday, August 26, 2009
I want to make the certification exams offered by (ISC)2 more respected on a technical level. While I understand that the exams are not focused on technology -- "Security Transcends Technology", even! -- this is not a valid reason for exams that have outdated, misleading, or incorrect material.
I want greater accountability from (ISC)2 to its members. This is focused on (but not limited to) exam procedure and feedback. If there is a problem, it should be acknowledged and addressed in a reasonably transparent manner.
I want the purpose and scope of the (ISC)2 certifications to be well-defined. The CISSP certification is considered the de facto standard for technical security jobs; if it is not designed to do this, there should be clear guidelines from (ISC)2 on where it is appropriate and inappropriate to be gauging the skill and qualifications of a job applicant depending on whether they have the certification.

You can sign his petition at http://sethforisc2board.org/
Thursday, August 20, 2009
- Payment processing company (Heartland) had a breach, leaking thousands of credit card records
- Heartland's CEO complains that they went through the regular PCI-DSS audit and the QSA had not pointed out the issues related to the breach
- Security industry goes mad about his complaints: "compliance is not security", "compliant at that time doesn't mean always compliant", "PCI-DSS is just a set of minimum requirements", "the QSA report is just information based on the assessed company's own honesty", etc., etc., and finally, "he should know all that".
Friday, August 14, 2009
I'm sure there is a lot of exaggeration about the effects of an incident. Some businesses tend to feel the effects of an incident more than others, for instance. We can tell that the retail business can survive pretty much unharmed by an incident, as we saw with TJX and so many others. But what about payment services companies?
The last two examples are really interesting, CardSystems and Heartland. CardSystem is out of business because of its incident. Heartland is surviving, but take a look at their share price:
Can the impact be zero? Yes, it can, but it depends on a series of factors, like the organization's business, the details of the incident (what type of information leaked, how it happened) and how the organization dealt with it.
Friday, August 7, 2009
"[...] I listened to yet another conference presenter complaining about security awareness training. He was talking about the difficulty of getting employees at his company to actually follow his security policies: encrypting data on memory sticks, not sharing passwords, not logging in from untrusted wireless networks. "We have to make people understand the risks," he said.
"Fire someone who breaks security procedure, quickly and publicly," I suggested to the presenter. "That'll increase security awareness faster than any of your posters or lectures or newsletters." If the risks are real, people will get it."He is totally right about it. Employees perceive very fast the organization posture on its own rules. Everyday decisions are usually based on personal risks, and not on organization related risks. The employee is thinking mostly about the risk to his performance and to his job, not to the company itself. If people starts to be punished for security policy violations, this "personal risk" starts to be considered on decisions like forwarding internal mail to external accounts and sharing passwords.I had the opportunity to witness the change in people's behaviour because of changes in management posture before. In one of these cases a group of developers used to share passwords among their group to "keep things running while they are away" and were encouraged by their manager to do so. They immediately changed this behaviour as soon as that manager was publicly reprimanded by his director due to promoting bad security practices and warned that it would be formally punished if identified again.The other case, at the same organization, was related to prohibited content being accessed on the Internet. We didn't have content filtering at that time, but by using some simple Perl scripts and Proxy logs I was able to trigger the process of warning managers of abuse from the biggest offenders. The actions taken by those managers (strongly encouraged by higher management) over those warnings triggered a huge change in behaviour from all users, that could be clearly noted in the next month's logs. People just realized that there was a real risk related to that behaviour, so they changed it. An interest fact about this case was that some users went the other way and started using stuff like proxy websites to avoid the controls. 
The same reporting mechanism that triggered the first change in behaviour was also used to reduce this one. Users doing that were punished, and the message that Internet access was being monitored and that attempts to abuse it would be punished was clearly received. So, if you want to know the best investment in security awareness: real punishment of violations. Change the employee's personal risk/reward equation.
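The original scripts were simple Perl over whatever proxy log format was in place; here is a rough Python reconstruction of the idea: parse a proxy access log, count requests per user to flagged categories, and report the biggest offenders. The Squid-style field positions and the keyword list are assumptions for illustration:

```python
from collections import Counter

FLAGGED = ("gambling", "adult", "proxyweb")  # illustrative keywords only

def top_offenders(log_lines, limit=10):
    """Count flagged-URL requests per user from Squid-style access log lines."""
    hits = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) < 8:
            continue  # skip malformed lines
        url, user = fields[6], fields[7]  # Squid default field positions
        if any(word in url.lower() for word in FLAGGED):
            hits[user] += 1
    return hits.most_common(limit)

# Tiny simulated log: alice hits two flagged sites, bob only the intranet
sample = [
    "1234 10 1.2.3.4 TCP_MISS/200 512 GET http://gambling.example/ alice -",
    "1235 12 1.2.3.5 TCP_MISS/200 512 GET http://intranet.example/ bob -",
    "1236 11 1.2.3.4 TCP_MISS/200 512 GET http://adult.example/ alice -",
]
print(top_offenders(sample))
```

The output of something like this is exactly what fed the manager warnings in the story: a ranked list of users to escalate, not an automated block.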
Friday, July 24, 2009
- More prescriptive directions (like PCI-DSS)
- Quick and dirty, facts based threat assessment
- Action prioritization based on immediate outcome, reach (threats and assets related) and increasing value over time
- Outcome based metrics
Wednesday, May 20, 2009
- Compare the risk from different organizations (benchmarking!)
- Compare the risk for the same organization in different points of time
- Identify a comfortable level of risk that will be pursued by the implementation of security measures
- Identify the results of applying security measures (answering the basic question, "was it helpful/worth doing?")
- Compare the risk from two or more different business processes, components or approaches
- Protect against "black swans" (this one is extremely hard)
- Include "blind spots" from the organization into the risk calculation
- Consider the interdependency of different business and technology processes and components (how much risk are your production systems inheriting from your development systems?)
- Be resilient to the fact that almost all medium/big organizations have very high levels of uncertainty in the different variables usually necessary for a meaningful risk calculation
Tuesday, May 19, 2009
For anyone who has worked in a “front line” customer-facing telephone support role, the answer is almost always an emphatic “YES”. I tend to agree with my colleagues for one simple reason - embitterment helps you succeed.
Why do I think IT folks need to have a sprinkle of bitterness in this field? The fact is that IT, like roadkill removal, is truly a thankless job. Sure, guidance counselors, parents, and the media will all tell you that “Computers are the way to go” for a good salary, benefits, and career advancement. The problem with that mentality is that it’s not the mid-1980’s anymore. More and more jobs are being moved to parts of the world where wages are lower and, to be perfectly frank, people are willing to do the crappy jobs that North Americans think are beneath them.
To be clear, I’m not saying that working in IT is the hardest, or worst, job around. IT workers are taken for granted, much like the aforementioned roadkill removal worker. Most people enjoy driving to work on a road free from dead animals. When an animal gets run over and left for dead, the roadkill removal person is dispatched to “dispose” of the remains. When was the last time you sent a “thank you” card to your roadkill removal person? To that end, when was the last time you sent a “thank you” card to a member of your IT department? Show of hands?
Now let’s jump back to my original topic with a metaphor: an IT career is like a human body and, in order for your career to live a long and healthy life, you need a nice thick layer of skin to protect you from infection. The “infection” in this metaphor refers to the emotional challenges that every IT professional experiences during their career. In order for IT personnel to adequately cope with the critical thinking required to overcome most IT-related challenges, a “thick skin” is a requirement — one that I believe should show up on most job postings.
Working on the front lines of an IT organization lets you experience what it’s like to sympathize, and empathize, with those who are having the problems. It lets you develop valuable customer service and communication skills while you work towards making the customer happy. Along the way you’ll have numerous bad experiences which will serve as lessons that you can use to make yourself a better person.
No matter what role you hold within an organization, you have customers to answer to. This is something that working the front lines forces you to remember. Good or bad, working in the trenches teaches you valuable life lessons that will only help you grow as an IT professional.
The help desk is the best place to see how those incredibly nice projects fail, cause problems or are twisted to be used for different purposes (and bringing different risks). Working there for some time will help to create that "wait a minute, this will cause issues" mindset that is so valuable for the security professional.
Monday, May 11, 2009
Q: What about the organization that says "but we use authorize.net, PayPal, Google Checkout (or whoever) to process our card payments for items we sell on the web. We don't ever handle the card data ourselves, so we don't need to worry about PCI...do we?"
A: Indeed, outsourcing credit card data processing is a very good way of reducing the scope of your PCI compliant environment. However, it is not the same as “outsourcing PCI DSS”, since it does not completely shield you from PCI DSS requirements. “Scope reduction” is NOT “PCI elimination.” There are still areas where you must make an effort to comply. However, a PCI Qualified Security Assessor (QSA) is the authorized source of this information.

Q: Is a QSA the only entity authorized to run a scan, or can I as the owner of our business run the scan myself?
A: This is a pure misconception; 100% false. As per PCI DSS requirement 11.2, an approved scanning vendor (PCI ASV) must be used for external (i.e. Internet-visible) scanning. Internal scanning can be performed by yourself or anybody else skilled in using a vulnerability scanner.

Q: Do we need to ensure that our third party fulfillment company is PCI DSS compliant as well (especially if they are taking credit card numbers for our customers)?
A: It is hard to say how the contracts are written in such a case, but often the answer is indeed “yes.” Moreover, if they take credit cards they need to be compliant and protect the data regardless of their relationship with you. A PCI QSA is the authorized source of this information.

Q: Is a fax with credit card information that arrives at the organization’s fax server considered to be a digital copy of this data?
A: A digital fax containing a credit card number is likely in scope for PCI DSS. There is some debate about the “pre-authorization data”, but protecting credit card information applies to all types of information: print, files, databases, fax, email, logs, etc.
Q: For a small merchant that only processes a handful of transactions a month, are there alternatives to some of the expensive technology requirements (e.g. application firewalls, independent web/db servers, etc)?
A: Outsourcing credit card transactions is likely the right answer in such circumstances.
Thursday, April 30, 2009
- People and process
Hey guys, time to get your eyes out of the debugger. I mean, there's a lot of great content being produced on the validation/verification side, people confirming those very small chances of exploiting a specific product or technology. In other words, all those guys "making the theoretical possible". Don't get me wrong, this kind of research is critical to our field, but it seems that everybody now wants to do it. We need more people that can look into the problems from a different perspective, bringing concepts and ideas from other fields, like psychology (Schneier is doing it), biology (Dan Geer) and economics (Ross Anderson). All these fields have evolved a lot and we can get a lot of new ideas from them to apply to security. We can use them not only to improve technology but mostly to improve our processes, our risk management and assessment methodologies and the way that we think about risk and security. How can we still be discussing "compliance vs. security"? We had Malcolm Gladwell as a keynote last year at RSA presenting the ideas from "Blink" (his book at that time) and I still haven't seen anything created in security using that valuable information about how people think. Just think for a minute about how those instinctive decisions mentioned in Blink affect things like security awareness and incident response. You'll be amazed at how much we can use from that in our job.
There is also an old discussion about the profile of the security professional. This is one of the favourite topics of my friend Andre Fucs. Although I think it's a very important discussion, I'm not really interested in it right now. But as I'm listing things that I believe we should work on improving, and I included "People" as a component, it is important to mention it.
I'm seeing a lot of people these days bashing Bruce Schneier because he said that there's nothing new in Cloud Computing. Even if I partially agree with the criticisms, I think there is some truth in that affirmation too. Yes, there is a lot more flexibility and mobility in the cloud model, but there's nothing new in terms of technology. Almost everything we need to do our jobs has been invented already. We just need to look into our huge toolbox and identify what we need to use under these new conditions.
I find the relation between the cloud and virtualization curious. Virtualization is being pointed to as the way to implement the platform independence and resource democratization that characterize the cloud, but I believe we are just wasting resources by going in that direction. A few years ago Java (or, being more generic, "bytecode" stuff) seemed to be the way to achieve that platform independence. So, why put layers upon layers of OSes if we can do what is needed across different OSes? Remember "Write once, run everywhere"? Maybe this is not the best time to talk about Java, anyway.
We are also pushing a lot of things to the endpoint. See what is being done with AJAX, all those mashups. And how are we trying to secure the endpoint nightmare? Sandboxes! How will sandboxes work with a technology that requires you to integrate all those things from different sources and trust levels exactly AT the endpoint? I really can't see a successful sandbox implementation under the Web 2.0 reality.
Why am I talking about virtualization and sandboxing? Because both, when we talk about security, are solutions to a problem that we may know how to solve with better approaches. We are doing that because we are using crappy Operating Systems. I don't want to sound like Ranum and say that we need to write everything from scratch again, but let's assume, for instance, that we have decent Operating Systems; why would I bother to create virtual OS instances when I can put all my applications running on top of a single (more effective and secure) one? Why should we worry about VMotion when we can just move applications? The mainframe guys have been running different applications in the same OS instance for years, able to secure them against each other and effectively manage resources and performance. Let's learn from those guys before all of them retire to sip Margaritas in Florida.
Ok, even if we solve the issue inside the same organization, there's still the issue of dealing with multiple entities in the cloud model. Again, the problem is Trust. As I said before, transitive trust is an illusion, and if we try to rely on it we will see a whole new generation of security issues arise. I honestly don't know how we will solve it, but one of my bets would be on reputation systems.
In fact, the business model of the cloud is not different from lots of things we do in the "real" world. We trust people and companies without knowing all their employees or all the other parties involved in their business processes. We do that based on reputation. A nice thing about it is that we can leverage some of the cloud's characteristics to implement huge reputation services. Reputation databases can share, correlate and distribute information just like we do with names in DNS, with small and distributed queries. Let's imagine a new world of possibilities for a moment:
Your dynamic IT provisioning system constantly gets information about processing costs from cloud service providers. It finds the best prices and acceptable SLAs, triggering the process to transparently move your applications to the best providers, keeping you always at the lowest available "IT utility" cost. Eventually, someone may try to insert themselves into the "providers pool" to receive your data into their premises and abuse it. However, your systems will not only check for prices and SLAs; they check the reputation of each provider, allowing the data to be transferred only to those that match your risk decisions. Just think about a database with reputation data on several different providers, like Amazon, Google, GoGrid and McColo.v.2 (!). The database will be constantly fed with information about breaches, infected/compromised systems at each of those providers, vulnerability scanning results and abuse complaints, everything mixed by mathematical models that will tell you which one you should trust your data to. That's for the cloud. Reputation can even be used to help end user systems decide the trust level for each application they run (Panda and other AV companies are going in this direction). The future looks promising.
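A toy version of that provisioning logic could look like this: pick the cheapest provider whose SLA and reputation score clear your risk thresholds. The provider names, scores and the 0-1 reputation scale are all hypothetical; no reputation service with these semantics exists today, this just sketches the decision rule:

```python
def choose_provider(providers, max_price, min_sla, min_reputation):
    """Cheapest provider meeting price, SLA and reputation thresholds, or None."""
    eligible = [p for p in providers
                if p["price"] <= max_price
                and p["sla"] >= min_sla
                and p["reputation"] >= min_reputation]
    return min(eligible, key=lambda p: p["price"]) if eligible else None

# Hypothetical providers: price per unit, SLA uptime, 0-1 reputation score
providers = [
    {"name": "BigCloud",  "price": 0.12, "sla": 0.999, "reputation": 0.9},
    {"name": "CheapGrid", "price": 0.05, "sla": 0.999, "reputation": 0.2},
    {"name": "SolidHost", "price": 0.09, "sla": 0.995, "reputation": 0.8},
]
best = choose_provider(providers, max_price=0.10, min_sla=0.99,
                       min_reputation=0.5)
print(best["name"])  # CheapGrid fails reputation, BigCloud fails price
```

The interesting part, of course, is not this filter but where the reputation number comes from: the breach reports, scan results and abuse complaints feeding the models described above.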
A good call from one of the RSA keynotes came from Cisco CEO John Chambers. He talked about collaboration and integration. I was really expecting to see that on the Expo floor, but there wasn't anything really special. I was expecting to see more about IF-MAP, but didn't see anything, not even from Juniper. TippingPoint CTO Brian Smith presented their view of how the integration of different products can improve or, in fact, transform the way that we build firewall rules. Getting tags from different systems (reputation based systems?) and building the rules based on tags, that was awesome. One of the few high points of RSA for me.

I was planning to do a review of RSA and ended up writing something like "my view of the current and future state of information security". It's probably poorly organized and not well founded, but I intentionally decided to keep it this way. I want it to be "food for thought". As usual, comments are welcome. Have fun.
Monday, April 6, 2009
With T.Rob Wyatt of IBM
The Heartland Payments breach is another case where hackers were able to compromise the "soft center" inside the corporate network. One of the major security holes that remains unplugged in many organizations is middleware, especially middleware used for application-to-application and application-to-DB communication.

This webinar will feature the expertise of T.Rob Wyatt, an IBM security consultant focusing on IBM WebSphere MQ, which has been implemented by over 15,000 enterprises around the world. T.Rob will talk about some of the security problems he has found working with merchants, payment processors and other enterprises, most of which have been missed by PCI assessments, often because PCI QSAs are not familiar enough with MQ Series and other middleware to evaluate the security of the configuration.

This webinar will be very valuable for merchants, banks, PCI assessors and anyone else who is not sure what middleware vulnerabilities they have and how to make the changes to eliminate them.

SPEAKER: T.Rob Wyatt - Senior Managing Consultant, IBM

Topics to be discussed include:
** What are the major middleware vulnerabilities?
** What organizations still have these vulnerabilities?
** What is required to eliminate these vulnerabilities?
** What should organizations do near term to solve this problem?
- The Information Security profession: I talked about it for some minutes with my friend Fucs. He posted something about it on his blog and started a discussion on LinkedIn. I have my own thoughts about it and I'll write about them here too.
- How to improve security as a whole, or how to improve security decision making. I sent a proposal for an RSA presentation on it, which was not accepted. Our current risk assessment and management models don't seem right to me, and I have a perception that most security decisions, roadmaps and strategies are simply fairy tales. I was glad to see the last rants from Marcus Ranum, where he pointed out a lot of those things. I'm not as pessimistic as he is, as I think we can find alternative ways to think about security and to make better decisions about it. A lot of the issues he mentioned are old facts about society and corporate culture; they haunted the Quality and Safety disciplines long before they became a problem in information security. I believe we should look to our past for things like that and try to find out how we have managed to reach a balanced state for them. Maybe we haven't, and we just need to figure out how to deal with that too.
- The last one, again something from my conversations with Fucs. This time, some new ideas about botnet Command and Control systems, improving on things we presented in 2007 at Black Hat Europe. Conficker has come along implementing some of those concepts, and we are seeing how well (or how badly) they worked and what could be done to improve them. I must say we have some great ideas, but I would really like to find something more on the detection and defence side before doing a presentation about it again. Let's see where our chats take us in the near future.
Friday, March 20, 2009
- Move the virtualized hosts from one server to the others
- Patch the "idle" server
- Check that it comes back properly
- Gradually put the load back on that server and check if there is any impact from the patch
- If everything is ok, go back to step #1 for the next server; repeat until all servers are patched