Tuesday, February 22, 2011

Oddjob, new?

A lot of noise out there about "Oddjob", new malware that hijacks the victim's open session on online banking websites:

http://www.net-security.org/malware_news.php?id=1636

http://www.theregister.co.uk/2011/02/22/oddjob_banking_trojan/

Interesting, despite the fact that it's SOOOO 2005, 2007... :-)

What can all those flames tell us about the Infosec industry?

The RSA Conference has ended, once again bringing only a small handful of interesting sessions and a lot of "anti-new-buzzword" products, just like previous editions. One thing that did seem different this year was the irascible state that a lot of professionals in the field are in. I saw plenty of heated arguments during panels, informal discussions and even Twitter exchanges. Is it because we are getting more passionate about our work, or is it just a symptom of the frustration of fighting an uphill battle?
 
I think these heated discussions show the push for change in deeply ingrained concepts in our field. The famous "High/Medium/Low" risk management fallacy, for example, is one of those. The blind trust in the perimeter and in the anti-buzzword tools is also giving way to something else. What are those things changing into? There are proposals and new ideas everywhere, and it's still not clear what will win in each case, but there are good hints out there. We have heard a lot about "situational awareness" and "meaningful data/metrics", among other things. Those are probably some of the concepts that will be taught as core components of information security to future professionals.
 
Now, about that: how can we ensure that those things will be assimilated by the major drivers of security education, such as the big certifications (e.g. CISSP)? After all, it doesn't make sense to talk so much about next-generation risk management and decision-making methodologies if people will still be studying ALE to pass an exam. We need to break that cycle, ASAP.
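
For reference, ALE is the textbook Annualized Loss Expectancy calculation that the exams still drill. A quick sketch of the standard formulas (the asset value and frequencies below are made-up illustration numbers):

    % Single Loss Expectancy from Asset Value and Exposure Factor
    \[ SLE = AV \times EF \]
    % Annualized Loss Expectancy from SLE and the Annualized Rate of Occurrence
    \[ ALE = SLE \times ARO \]
    % Example: a \$100,000 asset, 25\% exposure, one incident expected every two years
    \[ SLE = \$100{,}000 \times 0.25 = \$25{,}000 \qquad ALE = \$25{,}000 \times 0.5 = \$12{,}500 \]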

Thursday, February 10, 2011

Lenny Zeltser on Information Security — 5 Addictions of Information Security Professionals


Like most other disciplines, information security has its share of practices that are performed out of habit and might be detrimental to the organization. Here are a few such “addictions” that I have come to witness in the world of information security:

  • Long security policies: Process and detail-oriented individuals that we are, we cannot help but create wordy documents that might satisfy some auditors, but are too long to be read by others. Keep security policies and procedures short.
  • Strict security mandates: Security documentation often codifies the desirable state of the security program without accounting for the practical limitations of the business and of the humans who work there. The policies are often unrealistic or overly strict, making it impractical for people to follow them. Make security documents realistic.
  • Information security gadgets: Since infosec professionals often have engineering or technical backgrounds, we love technology. As a result, we are easily excited to hear about new security gizmos that promise to take care of the security issue du jour. We forget that people and process are the other critical elements of a security program. Exercise restraint when deploying new security tools.
  • Best practices: We love making references to “best practices” without considering the extent to which they are applicable to the occasion or have been shown to actually reduce risk. Along these lines, we attempt to implement high-level frameworks such as ISO 27001/27002 without customizing them for the situation. Be judicious when picking which security controls you adapt.
  • Prevention of security incidents: We often think in terms of preventing security incidents, setting ourselves up for failure. A more practical approach might be to focus on making it more costly to bypass your defenses and investing effort into breach detection and incident response. Reexamine the success factors of your security program.

For more thoughts along these lines, take a look at my earlier posts.

Lenny Zeltser

What a great post from Lenny Zeltser. I rarely see people mentioning the issues he points out in "Strict security mandates" and "Best practices"; in my opinion they are among the biggest problems in security groups today. Both can be seen as part of the "by-the-book security syndrome" that affects a lot of security managers.

Wednesday, February 9, 2011

Security Management guidance

Earlier today I was thinking about how to provide guidance on information security decision making to new managers in the field. I realized that decision making is not the only area of information security where there isn't enough guidance available for new security managers. There's almost no content on security team organization, security operations, human resources (apart from the old discussions around certifications and "hiring hackers"), relationships with other groups within the organization, or the development of a security strategy.

I know there are a lot of books, courses and other resources about what an organization should do about information security. However, the guidance for the manager who will lead that group, focusing on the "how" instead of the "what", is just not there.

Now, imagine yourself as a brand new security manager, with a mission to assemble a security team for a big company. You know everything about PCI, ISO 2700x, firewalls, IDSes, cryptography. But how should you start? How many people do you need? What roles will they have? Can you define a job description and describe the kind of professional you want in each of those roles? What about the relationship with the auditors: how should you handle that? And by the way, what will you do next?

Those answers, I feel, are not easily available to those starting in these positions. It seems like content good enough for a book. I wonder: is it writable? Or is it the kind of content that depends so much on context that it would be useless? If you are planning to become a security manager, or have just started in the position, do you think this kind of guidance would be valuable to you?

I hope to see some good content like that at RSA next week. If you are attending and have an opinion about this, please drop me a line; we can talk about it there.


Monday, February 7, 2011

The never-ending discussion of security models

It has been very interesting to follow the discussions about information security models between Alex Hutton, Josh Corman, Gunnar Peterson and other big names in the field. I'm excited about the opportunity to watch some of the debates around the subject that will take place at RSA this year. I hope I can spend some time talking with them about it too.
 
I've always been interested in security strategy and decision making. Although I worked with the committee representing Brazil in the ISO/IEC groups responsible for the 27000 standards family, I've always been sceptical about risk-assessment-based models. I've worked on several risk assessment projects, initiatives and processes, and they always end up being just a disguise for guesstimates, personal opinion, or backwards justification of numbers ("The risk is high; now let's find the variable values that justify it"). On the other side, the proposed alternatives have always looked like a good fit only for specific situations or contexts, never able to completely replace the risk-assessment-based methodologies.
 
One of these days I was reading "The Grand Design", by Stephen Hawking and Leonard Mlodinow. Hawking goes through the recent advances in theoretical physics and the quest for a grand unified theory of everything, a theory that would be able to bring together Einstein's relativity and quantum mechanics. He ends up saying that we may never find such a theory, but that the current set of theories we have would work just like one, each being used in its specific context. Isn't that cheating, you might ask?
 
No, not if we remember that those theories are nothing more than models trying to represent reality. Like the flat maps we use every day, even knowing the Earth is round, models can be useful even if they are not 100% accurate. They only need to be as accurate as our needs require. We don't even know if it's possible to build the ultimate, perfect model (we could be in a position in the universe that doesn't allow us to identify the missing pieces, like additional dimensions); we may only ever have approximations. That's a very compelling argument to me, and I immediately thought it has its parallels in information security too.
 
So, does that mean we won't find the perfect model for information security? Quite possibly. I believe we will never be able to build a complete (and useful) model of information security, but we can build a set of models to be used in specific contexts to drive our efforts in a more effective way. Risk assessments might not be useful in some contexts, but they are certainly more useful where good data is available for the variables in the risk equation. Threat modeling, compliance/baseline-based security: everything can be used together as a set of models that helps us improve security, with empirical data and metrics to measure verifiable results.
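
By "risk equation" I mean the familiar textbook form, sketched below in one common variant (FAIR, NIST and other methodologies each define the terms differently):

    % One common textbook variant of the risk equation
    \[ \text{Risk} = \text{Likelihood} \times \text{Impact} \]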
 
And what's the perfect mix of models? Where should we use each of those tools? I don't know, but accepting that we might have to use all (or many) of them to get what we want seems to be an important step in the right direction.
 
And let the RSA discussions begin!

Friday, February 4, 2011

Interesting Citrix/Windows information disclosure vulnerability

It seems to be just a small vulnerability, and maybe fixable by simple configuration, but I found it neat. I was playing around with an application session in a Citrix environment that had been properly "locked down" so that I could not see the contents of the local drives. Well, I should not have been able to see them, but in fact I found an interesting way (probably even automatable by client-side scripts) to bypass those controls. I started by opening a common file browsing dialog (such as the one you get when you choose "Save As") and using it to create a new shortcut (right-click, "New", "Shortcut") somewhere I could write to. By doing that I got the "Create Shortcut Wizard":

[Screenshot: the Create Shortcut Wizard]

It seems that Windows provides some sort of "auto-complete" feature in the text box where the full path for the shortcut should be entered. So, by typing just "C:" without pressing Enter, I got a drop-down list of everything, folders and files, under "C:". I could potentially see everything on that volume just by typing a path and waiting for Windows to show me the options in that drop-down (NTFS permissions still apply; specifically, the "X" bit still keeps me from listing the contents of the Administrator profile folder). You can see how it looks in the pictures below:

[Screenshots: folders and files on C: listed in the auto-complete drop-down]

As the browsing restriction applies, AFAIK, only to the explorer.exe process, it turns out that I could also go ahead and create shortcuts to those files and use them to access the files through the published Citrix applications. It's a small issue, but knowing that administrators tend to leave important scripts, dump files and other stuff on servers, often "hidden" in published web folders, it could end up being used by a regular Citrix user to access privileged information.
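
To illustrate the client-side script angle, here is a minimal sketch of creating such a shortcut programmatically. I'm using Python with the pywin32 COM bindings purely for illustration (a WScript client-side script could do the same); the paths are hypothetical examples:

    # Sketch: create a shortcut pointing at a file on the "hidden" system drive.
    # Assumes pywin32 is available in the session; paths are hypothetical.
    import win32com.client

    shell = win32com.client.Dispatch("WScript.Shell")

    # A location the restricted user can write to (e.g., a mapped home drive)
    link = shell.CreateShortcut(r"H:\peek.lnk")

    # Target a file on the locked-down C: drive found via the auto-complete trick
    link.TargetPath = r"C:\Scripts\backup_job.cmd"
    link.Save()

    # Opening H:\peek.lnk from a published application now reaches the target file.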
Is anyone aware of any settings that could be used to disable this "path name suggestion" feature on Windows?
UPDATE: The Group Policy item used to hide drives is called "Hide these specified drives in My Computer", and it's described here. However, I couldn't find anything so far that prevents the behaviour I described above.
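
One candidate I still want to test is the "AutoSuggest" value under Explorer's AutoComplete registry key, which is documented as controlling auto-completion in address bars and common dialogs; I haven't confirmed whether it also suppresses the drop-down in the Create Shortcut Wizard. A sketch, using Python's standard winreg module:

    # Sketch: turn off Explorer's auto-suggest for the current user.
    # Assumption: this documented per-user toggle covers address bars and
    # common dialogs; untested against the Create Shortcut Wizard drop-down.
    import winreg

    key = winreg.CreateKey(
        winreg.HKEY_CURRENT_USER,
        r"Software\Microsoft\Windows\CurrentVersion\Explorer\AutoComplete",
    )
    winreg.SetValueEx(key, "AutoSuggest", 0, winreg.REG_SZ, "no")
    winreg.CloseKey(key)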

Infosec’s Flu « The New School of Information Security

Interesting parallel drawn by Adam Shostack on the New School blog:

In “Close Look at a Flu Outbreak Upends Some Common Wisdom,” Nicholas Bakalar writes:

If you or your child came down with influenza during the H1N1, or swine flu, outbreak in 2009, it may not have happened the way you thought it did.

A new study of a 2009 epidemic at a school in Pennsylvania has found that children most likely did not catch it by sitting near an infected classmate, and that adults who got sick were probably not infected by their own children.

Closing the school after the epidemic was under way did little to slow the rate of transmission, the study found, and the most common way the disease spread was through a child’s network of friends.

The work he discusses is “Role of social networks in shaping disease transmission during a community outbreak of 2009 H1N1 pandemic influenza” by Simon Cauchemez, Achuyt Bhattarai, Tiffany L. Marchbanks, Ryan P. Fagan, Stephen Ostroff, Neil M. Ferguson, David Swerdlow, and the Pennsylvania H1N1 working group.

The first thing that comes to mind is that closing schools is a best practice. It’s something that makes so much sense that it’s hard to argue against, even if it does no good. The next thing is to look at what happens when they have data available to them. They can study their prescriptions and test to see if they did any good. But note how detailed the data is: social graphs, seating charts. This isn’t something we would obviously get from more detailed breach notices. It’s going to require in-depth investigations, and investigators who talk about their methods. VERIS is a step in this direction, and I’m looking forward to seeing critiques or even competitors that can help us move forward and learn.


But the data we have is the data we have, and while we work to get more, there’s a good deal that we can probably learn from what’s out there. We just have to be willing to ask if our practices really work.


He is right to point out that the study has far more data than we usually have available in security today. Unfortunately, if we carry the comparison further, we will see that information about service providers (hosting, cloud, vulnerability management), development shops, supporting platforms and products, and even training investments would be necessary to reach conclusions like those of the flu study. I'm not saying it's impossible or worthless, but we would need far more information sharing to achieve that level.

Thursday, February 3, 2011

Great piece by Anton Aylward on IT Architecture

It has been on Anton's blog since October 2010, but for some reason it only showed up in my Google Reader now. As I'm currently in an architecture role, it makes for interesting reading, and I couldn't agree more with him, especially after going through TOGAF training:

A friend and colleague, who is also a security guru and much better qualified than me, and who admits that he is not a huge fan of enterprise architecture frameworks, doesn’t think that “enterprise architecture” is on a completely solid footing; he points out that it’s a major business for Gartner, following their takeover of Meta Group.

He asks “Anton, you’re a systems engineer and hence familiar with large-scale modelling and design: what’s your take on the widespread use of ‘architecture’? Is it over-egging the pudding?”

Probably.
It certainly is a heavily abused term, one that has been hijacked by marketing and owes more to articles in glossy magazines than to engineering substance.

One thing we are all aware of is that IT as a whole is subject to CHANGE. In fact Change Management is a very necessary skill in all aspects of the profession even if the practitioners do not admit to it.

Disciplines like ITIL bring change management to the forefront, and even in InfoSec we are well aware that unauthorized or unmanaged change can be precipitous.

The point here is that most of what engineers get taught about architecture is, on the one hand, about change, in that they are building something new; but they don’t seem to be taught to “build for change”, that is, for changes to come. In fact it seems bl00dy difficult to get engineers to build for maintenance and repair. It’s taken heavy financial incentive to get engineers to design for ease of assembly on some - not all - production lines. And that may conflict with ease of maintenance!

Sidebar: All those Salvation Army Special PCs I’ve worked on, for myself and as a social contribution, and the machines I’ve worked on for clients, have struck me as things that skin my knuckles when I try to change hard drives or fuses, or twist ribbon cables into a 21st-century artist’s rendering of a Gordian knot. Ease of maintenance they are not.

I can understand how civil engineers or shipbuilders fail to consider ‘ease of change’; the effort to reposition a highway or a city because of a change in requirements is high. However, Jane Jacobs showed that city planners also don’t seem very inclined to learn from past mistakes, and work to political rather than social agendas.

The irony is that software and much of IT is in such heavy flux, and so much of it is as amenable to change as silly-putty. Yet designers still have a mindset that is rooted in a more physical and more fixed world. Change, to many “software architects”, means changing your button-bar or your colour scheme. Big deal.

Well, OK, many are tied in by vendor ideas: can you change your mail user agent and still access your mail repository? Notionally I can, because I keep all my mail on an IMAP server, but in reality the vendor-specific configuration means that if I changed between Thunderbird, KMail, Gmail, Outlook, Pegasus, Elm, Balsa, Eudora or Evolution, I would lose the better part of a day setting up all my accounts and rebuilding the local indexes. Not quite “vendor lock-in”, but a high enough threshold to make change a chore.

Now let’s take a look at something else.

The Open Group Architecture Framework (TOGAF) is a framework - a detailed method and a set of supporting tools - for developing an enterprise architecture.

There’s a lot of material at
http://www.opengroup.org/architecture/togaf8-doc/arch/toc.html
That TOC has a nice diagram showing “change management” last of all and not discussing it, and not addressing “designing for change”.

But to address the question my colleague raised I focused on this:

http://www.opengroup.org/architecture/togaf8-doc/arch/chap30.html

“IT Architecture” and “IT Architect” are widely used but poorly defined terms in the IT industry today. They are used to denote a variety of practices and skills applied in a wide variety of IT domains. There is a need for better classification to enable more implicit understanding of what type of architect/architecture is being described.

This lack of uniformity leads to difficulties for organizations seeking to recruit or assign/promote staff to fill positions in the architecture field. Because of the different usages of terms, there is often misunderstanding and miscommunication between those seeking to recruit for, and those seeking to fill, the various roles of the architect.

I can’t agree more!

In fact this is an excellent page. But as I said to begin with, the conceptual model omits the fundamental assumption that is implicit in IT, that of change.

And Information Security is even more so!