Wednesday, January 30, 2008

Blind spots and JJ's blog

I was reading Shimel's blog today and followed his pointer to another security blog, "JJ's Security Uncorked". It was a very nice surprise to find this post about three things that are often forgotten in network inventories, assessments and other processes: Cameras, Controllers and Card Readers. It was particularly interesting for me because those devices are listed in an article I'm writing about "security blind spots": things that are present in our IT environment but are overlooked by security initiatives and control deployments.

Besides the devices, it's also interesting to look into the processes that deal with them. It's funny that I've found several companies with very advanced infosec programs that overlooked the physical access control world, keeping things like access reviews, segregation of duties and least privilege far away from it. Sometimes the logical access control processes are thoroughly documented and all the responsibilities are properly defined, but nobody knows exactly who is in charge of controlling badges, tapes from CCTV systems and so on.

So, follow JJ's advice and don't forget those three C's. In a few days I'll talk more about other security blind spots too.

Tuesday, January 29, 2008

Axur Blog

Axur is a Brazilian company with deep knowledge of ISO 27001/2. Their product, ISMS, is a great solution for those looking for a platform to build their ISMS on. They are blogging in English now, and their blog is a very good source of information about the standards.

Thursday, January 24, 2008

Automated malware analysis

I may be a little late on this, but only today I was introduced to Norman SandBox, an automated sandbox that analyzes malware you can submit to it online (update: credits to Sp0oKeR, who pointed me to the site). The system has very nice features. It can identify what the malware does when executed, like registry and file changes, binding to other processes and outbound network access. It can really simplify the job of analyzing malware.

I tried some code that I'm using on an ethical hacking test and it was perfectly identified by the system. Here is a sample report (from a piece of malware from one of the thousand phishing scams I usually receive in my Gmail account):

Torpedovivo.exe : INFECTED with W32/Downloader (Signature: NO_VIRUS)

[ DetectionInfo ]
* Sandbox name: W32/Downloader
* Signature name: NO_VIRUS
* Compressed: YES
* TLS hooks: YES
* Executable type: Application
* Executable file structure: OK

[ General information ]
* File might be compressed.
* Decompressing PKLite.
* Creating several executable files on hard-drive.
* File length: 57344 bytes.
* MD5 hash: 0615fc502feef76ac4efe3936de2b2b8.

[ Changes to filesystem ]
* Creates file C:WINDOWSiexplorerconfigwin.exe.

[ Network services ]
* Downloads file from as C:WINDOWSiexplorerconfigwin.exe.
* Connects to "" on port 80.
* Opens URL:
* Downloads file from as C:WINDOWSiexplorerconfigwin32.exe.
* Connects to "" on port 80.
* Opens URL: /.

[ Security issues ]
* Starting downloaded file - potential security problem.

[ Process/window information ]
* Attemps to NULL .
* Attemps to NULL C:WINDOWSiexplorerconfigwin.exe .
* Creates process "C:WINDOWSiexplorerconfigwin.exe".
* Attemps to NULL C:WINDOWSiexplorerconfigwin32.exe .
* Creates process "C:WINDOWSiexplorerconfigwin32.exe".

[ Signature Scanning ]
* C:WINDOWSiexplorerconfigwin.exe (4096 bytes) : no signature detection.

(C) 2004-2006 Norman ASA. All Rights Reserved. The material presented is distributed by Norman ASA as an information source only.
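Reports like the one above identify samples by MD5 hash. A minimal sketch (in Python; the file path would be whatever sample you are about to submit) of computing the same hash locally, so you can match a sample against a sandbox report before uploading it:

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 8192) -> str:
    """Compute the MD5 hex digest of a file, reading it in chunks
    so large samples don't have to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

If the digest matches the one in the report, you already know what the sample does and can skip resubmitting it.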

Still believe that insider threat is not that big?

Then read this. The French bank Société Générale lost more than $7 billion (yes, billion!) to an internal fraud committed by a single trader. That's an interesting insider threat case! I found this piece particularly interesting:

"Axel Pierron, senior analyst at Celent, an international financial research and consulting firm, was stunned that a trader could be involved in such a massive fraud 13 years after the Barings Bank collapse.

“The situation reveals that banks, despite the implementation of sophisticated risk management solutions, are still under the threat that an employee with a good understanding of the risk management processes can get round them to hide his losses,” he said."

I can bet that this case also involves access control and segregation of duties issues. It clearly shows that companies are not properly monitoring their internal environments, not only from a network perspective but also in their business applications.

Friday, January 18, 2008

Peterson's method to incite security

I was reading this post from Gunnar Peterson about how to improve application security levels in an organization. He mentions a curious strategy to induce competition between different development teams. In a certain way his method works with a motivation that is very curious for us: the right to "remain insecure", but in a nice way. He proposes that a certain number of applications are evaluated and the most insecure one must have all its vulnerabilities fixed. Of course that will cause some headaches for the business that depends on that app and, mostly, for the manager responsible for that software. So, during the development phase each team will try to avoid vulnerabilities so they won't end up in last place in the "competition". As every team does this, the average level of security improves. That's really a nice approach. One caveat, however, is that it needs strong support from upper management. The first exception granted will throw it all away.

It's worth noting that this kind of competition can also be applied to other teams that need to follow any kind of security behavior. We can make the business team with the most inappropriate Internet access cases attend mandatory security training, or the IT support team with the most vulnerabilities on their workstations and/or servers fix all of them. With this approach the organization can stimulate security being built into several disciplines without having to deal with all problems at once.
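As a toy illustration (the team names and vulnerability counts are hypothetical, not from Peterson's post), the "last place fixes everything" rule is just a max-by-count selection over the scan results:

```python
def worst_team(scan_results: dict[str, int]) -> str:
    """Return the team whose application has the most open vulnerabilities.

    That team "loses" the competition and must fix every finding.
    """
    return max(scan_results, key=scan_results.get)

# Hypothetical scan results: application team -> open vulnerability count.
results = {"payments": 4, "crm": 11, "portal": 7}
```

The incentive comes entirely from the selection rule being applied without exceptions; the code itself is trivial on purpose.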

Thursday, January 17, 2008

French methodology for Information Security Risk Management

I've just received a link pointing to a risk management methodology used by the French government called EBIOS: Expression of Needs and Identification of Security Objectives. There isn't anything revolutionary in it; it's a good job of putting together things like ISO 27002 and the Common Criteria / ISO 15408. However, the site also has an open-source application developed to help those using the methodology in their risk management initiatives. The tool is basically designed to aid in a risk assessment process. It uses the structure of the methodology to indicate the information that needs to be gathered about the system and/or organization being assessed. Very interesting and, most important, it's free.

Wednesday, January 16, 2008

Patching Oracle?

OK, so Oracle DBAs are not patching their databases. Why does that happen? I can see a number of factors here:

- Bad security professionals who believe that "Oracle is very secure" and just worry about patching Microsoft, the source of all evil things on Earth.
- Terrorist DBAs who are always saying that "patching the DB shouldn't be done; it's working now and it will stop for sure if we try it".
- Bad press, making a lot of noise about patches for one product but putting Oracle's mass patch releases in very small headlines.

Put those together and you'll see what happens in most organizations. DBMSs are usually what I'm calling "security blind spots": a risk source that is usually not taken into account by risk assessments and management processes, or even by auditors. It's a problem, but nobody knows it is there. Why should someone fix a problem that nobody is worried about?

That's exactly what used to happen with SQL Server. Can you remember when that changed? Slammer is the answer. When a "Slammer for Oracle" is released we will see everybody quickly including Oracle in their vulnerability management processes and checklists. Oracle will probably need to release its patches more frequently, and provide a better way to install them too, just as Microsoft had to do (and did).

And there is one thing that almost nobody noticed: Microsoft SQL Server 2005 has only one registered vulnerability in CVE. Is security still a reason to choose Oracle over MSSQL?
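Oracle ships its Critical Patch Updates on a quarterly cycle, so one very basic check is whether any database has gone much longer than a quarter without a patch. A minimal sketch (hypothetical host names and dates, and an arbitrary 120-day threshold) of flagging such blind spots from a patch inventory:

```python
from datetime import date

def stale_databases(last_patched: dict[str, date], today: date,
                    max_days: int = 120) -> list[str]:
    """Return hosts whose last patch is older than max_days
    (roughly one quarterly cycle plus some slack)."""
    return sorted(h for h, d in last_patched.items()
                  if (today - d).days > max_days)

# Hypothetical inventory: DB host -> date its last patch set was applied.
inventory = {
    "ora-prod-01": date(2007, 4, 17),   # skipped several cycles
    "ora-prod-02": date(2008, 1, 15),   # current
}
```

A report like this, fed to the risk management process, is often all it takes to turn a blind spot into a tracked issue.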

Tuesday, January 15, 2008

Good discussion on OTP/2FA for online banking

I'm having a good conversation about OTP/2FA for online banking on the cisspforum mailing list. Tim Bass and Martin Wehlou are incredibly good professionals and are adding valuable points to the subject. Martin posted (01/2007) on his blog a very good explanation of the problem that banks are trying to solve with OTP solutions. He also suggests the use of signed e-mail messages to confirm transactions. I really appreciate the idea, especially because e-mail clients like Outlook make digital certificate usage quite simple for the regular user. I'm adding their blogs to my blogroll.

Monday, January 14, 2008


I've just read on the Symantec Security Response Weblog that they detected a trojan that behaves exactly like what I predicted a few years ago: it dynamically changes the content of wire-transfer transactions, defeating two-factor authentication mechanisms. It was also part of my Black Hat presentation last year. What will happen to the two-factor authentication fever if this attack starts to spread? I believe we will start to see challenge-response solutions appearing that include data from the transaction. It's one of the best defenses against this attack vector.
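To make the idea concrete, here is a minimal sketch (a Python illustration with hypothetical names and a hypothetical shared secret inside the user's token; real schemes use dedicated hardware) of a challenge-response that binds the one-time code to the transaction data, so a trojan that rewrites the destination account invalidates the code:

```python
import hashlib
import hmac

def transaction_code(secret: bytes, account: str, amount_cents: int) -> str:
    """Derive a short confirmation code from the transaction details.

    The token computes this over the data the *user* verified on its display;
    the bank recomputes it over the data it actually received.
    """
    msg = f"{account}|{amount_cents}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:8]

secret = b"token-shared-secret"  # hypothetical shared secret

# User confirms a transfer of $100.00 to account 1234-5 on the token display.
code = transaction_code(secret, "1234-5", 10000)

# A trojan silently rewrites the destination account in the browser:
tampered = transaction_code(secret, "6666-0", 10000)
assert code != tampered  # the bank's recomputed code no longer matches
```

The key design point is that the code is a function of the transaction itself, not just of time or a counter, so tampering anywhere between the user and the bank breaks verification.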

Wednesday, January 9, 2008

SQL Injection worm/bot?

I was reading the SANS ISC diary about mass compromises via SQL injection. It seems to be something automated, maybe a botnet or even a worm. What kind of automated threat it is isn't really what matters here. The most important fact is that we are now seeing SQL injection attacks being used by malware. This is really interesting news. It shows that those old vulnerabilities in operating systems and/or services are no longer the only way to do it. We were already seeing some cases of malware targeting user-related technologies, like those using XSS vulnerabilities. Most of them require user interaction, like the user browsing to an infected website. But SQL injection attacks don't require that. It's a clear situation that shows that attacks are "climbing the layers", as I said here.

Some of the current cases mix the exploitation of SQL injection vulnerabilities with local vulnerabilities, but there is something important to note that I haven't seen anyone mentioning. Today, almost all new applications use some kind of SOA/Web 2.0 technology, and it's quite common to find web and application servers that can go out to the Internet through HTTP/HTTPS (after all, they need to access other web services out there). Wise firewall administrators will set their rules to allow access only to specified web servers, but we always knew that's not what usually happens. So, rules like "My Servers -> Internet, port 80, accept" are starting to appear in rulebases around the world. Put this together with the rise of application-based worms and we will start to see pretty serious incidents.

So, take some time to review your firewall rulebase. Can your web and application servers be used by malware to spread a mass infection? Remember, good rules are "least privilege" rules. And don't forget to monitor your outgoing traffic and check it for attacks too.
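A quick way to spot the "My Servers -> Internet, port 80, accept" pattern during a rulebase review is to scan an exported rule list for egress rules from server networks to an unrestricted destination. A minimal sketch (the rule format and network names are hypothetical; real firewall exports are richer):

```python
# Each rule: (source, destination, port, action) -- a simplified export format.
Rule = tuple[str, str, int, str]

def broad_egress(rules: list[Rule], server_nets: set[str]) -> list[Rule]:
    """Flag accept rules that let server networks reach *any* destination
    on common web ports -- the kind of rule a web-borne worm can ride out on."""
    web_ports = {80, 443}
    return [r for r in rules
            if r[3] == "accept" and r[0] in server_nets
            and r[1] == "any" and r[2] in web_ports]

rules = [
    ("dmz-servers", "any", 80, "accept"),                  # too broad: flagged
    ("dmz-servers", "partner-ws.example", 443, "accept"),  # scoped: fine
    ("user-lan", "any", 80, "accept"),                     # not a server net
]
```

The least-privilege fix for a flagged rule is to replace "any" with the specific web services the servers actually need to reach.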

Tuesday, January 8, 2008

Security Policies organization

Today I read in a forum someone asking about the best way to write an organization's security policy: should it be a long and complete document or a simpler one with just a couple of pages? I was answering the question when I realized I could post here some of the approaches I have been taking to this problem.

I believe that a security policy should describe how security works for the organization. So, it must contain information about the basic security processes, like risk management and exemption approval. My rule of thumb is to identify all the security processes of the organization and describe them in separate documents (Policies). Above them there would be a one-page document with the main directives, i.e., the main security rules of the organization. This document (Directives / General Policy) would be something very similar to those Mission/Vision/Values statements that lots of companies like to hang on their walls.

Besides Policies and Directives, I also like to work with two more focused sets of documents, one for the technical public and another for the general public. The technical documents (Guidelines, Standards, Architecture) fill the need for technical requirements that must be addressed by IT teams, like IT support and developers. The "general public" document is usually called the "Information Security Manual", and it contains all the rules that every person working for the organization (employees, consultants... "associates"?) must know. It needs to be written in a friendlier language, usually produced as a nice booklet. It should include a page with some sort of commitment that the person signs after reading, generating evidence that everybody is aware of their responsibilities regarding information security.
OK, but what about the content for all those documents? ISO 27002 can help you a lot with the directives and policies. The technical part will depend a lot on the technology your organization uses, but good resources are SANS, Microsoft TechNet and thousands of others. The Information Security Manual should be produced by the "internal marketing" guys, with your support. It's very good to see the results of this work. Those marketing guys can really make security stuff seem nice to normal people :-)

Thursday, January 3, 2008

Always getting back to basics

I always like to see people looking again at the more basic issues in security. This approach allows us to find more elegant solutions and is the way to the revolutionary ideas, those that we look at later and think "oh, but it was so obvious!". I'm always discussing with my clients why they need to deploy new controls instead of improving the processes and tools they are already using. Amrit Williams has just written about how well companies are capable of managing their IT environments. Yes, this is a very basic thing to think about, and that's why I like it.

Amrit says that "it is quite common for an organization to be blind to 15-30% of their computing devices at any given point in time". He is completely right, even more so when he mentions not-so-common systems like mainframes, PDAs and Unix flavors. People tend to feel comfortable when they get a grip on their Windows environments, but that's just the beginning. I'm tired of finding networks completely vulnerable on those less common platforms while relatively secure on their Windows servers and workstations; people who automatically control software deployment on Windows servers but share the root password for their Linux boxes. In his post he asks the reader to try to answer these questions: "how many devices are actively deployed in my environment right now and how many of those do I actively manage?"

Yes, we'll be dealing with several new problems during 2008. However, don't forget to look at the basics before going for the next solution-in-a-box trend. You can be sure that improving some basic processes will increase your security level more than adding another protection layer (and more complexity to the environment).
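Amrit's two questions reduce to a set comparison between what you can see on the network and what you actually manage. A minimal sketch (hypothetical host names) of measuring that blind spot:

```python
def blind_spot(discovered: set[str], managed: set[str]) -> tuple[set[str], float]:
    """Return the unmanaged devices and the blind-spot ratio (0.0-1.0),
    comparing a discovery scan against the asset management database."""
    unmanaged = discovered - managed
    return unmanaged, len(unmanaged) / len(discovered)

# Hypothetical data: discovery scan results vs. actively managed assets.
discovered = {"win-srv-01", "win-srv-02", "linux-db-01", "mainframe-01", "pda-ceo"}
managed = {"win-srv-01", "win-srv-02", "linux-db-01"}
```

Run regularly, a comparison like this gives you the basic metric before any new solution-in-a-box is even considered.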

The threat from user applications

Since the WMF vulnerability in January 2006, client applications seem to have become the next target for malware and malicious attackers. I wrote about the evolution of threats and related vulnerabilities at that time. So, it's not very surprising to see here and here that people are worried about vulnerabilities in software other than the OSes.

Microsoft has established a good updating system. A lot of people have complaints about it, but it works. There haven't been any new worms exploiting unpatched Microsoft vulnerabilities for a long time. What concerns me most, however, is that software that is as ubiquitous as Windows is not being updated as it should be. Adobe software, like Acrobat and Flash Player, is the most critical; there is also the Java Virtual Machine. Almost all of them already have an automatic updating system. The problem is that we have software from several vendors installed, which multiplies the agents that automatically check for and install updates. Take a look at your Windows start-up points and you'll notice several agents constantly checking for updates to your installed software. Do you have a way to quickly verify that all those things are up to date?

If this is a problem for the home user, imagine it for big corporations. When I talk with CSOs about their patch management strategies they are always proud to show how quickly they can update their main servers. Some of them can update Microsoft software on their workstations very fast too. But it's interesting to see that almost nobody has a plan or process in place to update things like Adobe Acrobat or the JVM. Even more interesting is the fact that they don't seem to understand how important that is. We still haven't seen a big incident based on vulnerabilities in those products, but with all the content-sharing opportunities created by social networks, blogs and feeds, it isn't hard to see how that could happen.
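The question "do you have a way to quickly verify that all those things are up to date?" is, at bottom, a comparison between installed versions and the vendor's current ones. A minimal sketch (the product names and version numbers are hypothetical, for illustration only) of flagging outdated third-party software from an inventory:

```python
def outdated(installed: dict[str, tuple[int, ...]],
             current: dict[str, tuple[int, ...]]) -> list[str]:
    """Return products whose installed version is behind the vendor's
    current release, using tuple comparison on version components."""
    return sorted(p for p, v in installed.items()
                  if p in current and v < current[p])

# Hypothetical inventory for one workstation.
installed = {"acrobat-reader": (8, 1, 0), "flash-player": (9, 0, 115), "jvm": (1, 5, 0)}
current = {"acrobat-reader": (8, 1, 1), "flash-player": (9, 0, 115), "jvm": (1, 6, 0)}
```

The hard part in practice is not this comparison but collecting the two dictionaries reliably across thousands of machines and dozens of vendors, which is exactly the process gap described above.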
It will be funny to watch desperate CSOs trying to explain how lots of computers became infected even after those massive investments in WSUS, BigFix, you-name-it patch products and antivirus. But after the usual crisis there will be a huge window of opportunity for new products that try to solve the problem. Once again we will see people buying little boxes as pills to ease their pain. And once again they will spend more than necessary and will remember that they should have thought about another thing first: the process.

And it won't be only the security guys who realize they weren't doing things right. Auditors will rush to update their checklists and add more questions about patch management. Hey, weren't they already asking about it? OK, I'm still waiting to meet an auditor who asks how a company deals with patching applications other than operating systems. I'm not sure most of them even know there is a risk related to that.

And once again we watch a "new" threat appearing on the horizon with that "déjà vu" feeling. If everything happens the way it happened before, everybody will be OK. The problem is that the world is also changing. This new threat is appearing in a world dealing with increasingly stronger cybercriminals and targeted attacks. Mixing those factors can bring us problems more serious than those we faced in the past.