From Mr. Geer's infinite wisdom
I managed, once again, to finish reading a Dan Geer speech without my brain exploding. This one is his "Trade-Offs in Cyber Security" talk (9 October 2013, UNCC). As usual it's incredibly well written and dense, very dense. There's a specific point I liked, because it touches on something I was planning to write about:
"But only rarely do we ask our Legislatures to make mitigation effective. Instead, over and over again we ask our Legislatures to make failure impossible. When you embark on making failure impossible, and that includes delivering on statements like "Never again," you are forced into cost-benefit analyses where at least one of the variables is infinite. It is not heartless to say that if every human life is actually priceless, then it follows that there will never be enough money. One is not anti-government to say that doing a good job at preventing terrorism is better than doing a perfect job."
Demanding that failure be impossible is how we usually react to breaches, while simultaneously preaching that there's no 100% security or "zero risk". This just doesn't make sense. Since we're always talking about lessons-learned exercises after incidents, how often have you seen one of those end with the conclusion: "breach aligned to previously assessed residual risk, no further actions required"?
If that's not happening, we are either not assessing risk correctly or we haven't done the work to ensure risk acceptance is well understood by decision makers. We are turning "small risk accepted" situations into "never again". This is how we end up with scenarios like the derivatives crisis (I was about to say Black S***, but I want to ensure no kittens are harmed during the writing of this post). There's a big difference between almost impossible and impossible, even if they initially look the same. As Dan Geer also said in this same speech, "Proving a negative requires omniscience". Do you know absolutely everything that goes on in your network?