Light Blue Touchpaper now supports TLS, so as to protect passwords and authentication cookies from eavesdropping. TLS support is provided by the Pound load-balancer, because Varnish (our reverse-proxy cache) does not support TLS.
The configuration is intended as a reasonable trade-off between security and usability, and earns an A– on the Qualys SSL Labs report. The cipher-suite list follows the very helpful Qualys SSL Labs recommendations, and Apache header rewriting sets the HttpOnly and Secure cookie flags to resist cookie hijacking.
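For the curious, the header rewriting amounts to something like the following (an illustrative sketch using mod_headers, not our exact rules):

```apacheconf
# Illustrative sketch only, not our exact configuration. Requires mod_headers.
# Appends the HttpOnly and Secure flags to every Set-Cookie header emitted by
# the backend. (A naive pattern like this will duplicate the flags if the
# backend already sets them; production rules should match more carefully.)
Header edit Set-Cookie ^(.*)$ "$1; HttpOnly; Secure"
```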
As always, we may have missed something, so if you notice problems such as incompatibilities with certain browsers, please let us know at <firstname.lastname@example.org> or in the comments.
I’m at the IEEE Symposium on Security and Privacy, known in the trade as “Oakland” even though it’s now moved to San Jose. It’s probably the leading event every year in information security. I will try to liveblog it in followups to this post.
The main reason passwords appear in headlines is large password breaches. What if we could fearlessly publish scrambled passwords exactly as they are stored on servers, and still keep them hidden even if they were “123456”, “password”, or “iloveyou”?
I will be trying to liveblog Financial Cryptography 2014. I just gave a keynote talk entitled “EMV – Why Payment Systems Fail”, summarising our last decade’s research on what goes wrong with Chip and PIN. There will be a paper on this out in a few months; meanwhile here are the slides and here’s our page of papers on bank security.
The sessions of refereed papers will be blogged in comments to this post.
Passwords have not really changed since they were first used. Let’s take a trip down memory lane and then analyse how password systems work and how they could be improved. You may say: forget passwords, one-time passwords (OTPs) are the way forward. My next question would then be: if OTPs are so good, why do we use them in combination with passwords?
I’m at SOUPS 2013, including the Workshop on Risk Perception in IT Security and Privacy, and will be liveblogging them in followups to this post.
Yesterday I received the NSA award for the Best Scientific Cybersecurity Paper of 2012 for my IEEE Oakland paper “The science of guessing.” I’m honored to have been recognised by the distinguished academic panel assembled by the NSA. I’d like to again thank Henry Watts, Elizabeth Zwicky, and everybody else at Yahoo! who helped me with this research while I interned there, as well as Richard Clayton and Ross Anderson for their support and supervision throughout.
On a personal note, I’d be remiss not to mention my conflicted feelings about winning the award given what we know about the NSA’s widespread collection of private communications and what remains unknown about oversight over the agency’s operations. Like many in the community of cryptographers and security engineers, I’m sad that we haven’t better informed the public about the inherent dangers and questionable utility of mass surveillance. And like many American citizens I’m ashamed we’ve let our politicians sneak the country down this path.
In accepting the award I don’t condone the NSA’s surveillance. Simply put, I don’t think a free society is compatible with an organisation like the NSA in its current form. Yet I’m glad I got the rare opportunity to visit with the NSA and I’m grateful for my hosts’ genuine hospitality. A large group of engineers turned up to hear my presentation, asked sharp questions, and understood and cared about the privacy implications of studying password data. It affirmed my feeling that America’s core problems are in Washington and not in Fort Meade. Our focus must remain on winning the public debate around surveillance and developing privacy-enhancing technology. But I hope that this award program, established to increase engagement with academic researchers, can be a small but positive step.
Last weekend, my wife and I were in Milton Keynes where we bought a cradle as a present for our new granddaughter. They had only the demo model in the shop, but sold us one to pick up from their store in Cambridge. So yesterday I went into John Lewis with the receipt, to be told by the official that as I couldn’t show the card with which the purchase was made, they needed photo-id. I told him that along with over a million others I’d resisted the previous government’s ID card proposals, the last government had lost the election, and I didn’t carry ID on principle. The response was the usual nonsense: that I should have read the terms and conditions (but when I studied the receipt later it said nothing about ID) and that he was just doing his job (but John Lewis prides itself on being employee-owned, so in theory at least he is a partner in the firm). I won’t be shopping there again anytime soon.
We get harassed more and more by security theatre, by snooping and by bullying. What’s the best way to push back? Why can businesses be so pointlessly annoying?
Perhaps John Lewis are consciously pro-Labour given their history as a co-op; but it’s not prudent to advertise that in a three-way marginal like Cambridge, let alone in the leafy southern suburbs where they make most of their money. Or perhaps it’s just incompetence. When my wife phoned later to complain, the customer services people apologised and said we should have been told when we bought the thing that we’d need to show ID. She offered to post the cradle to our daughter, but then rang back later to say they’d lost the order and would need our paperwork. So that’s another 30-mile round-trip to their depot. But if they’re incompetent, why should I trust them enough to buy their food?
I invite the chairman, Charlie Mayfield, to explain by means of a follow-up to this post whether this was policy or cockup. Will he continue to demand photo-id even from customers who have a principled objection? Will he tell us who in the firm imposed this policy, and show us the training material that was prepared to ensure that counter staff would explain it properly to customers?
Research in the Security Group has uncovered various flaws in systems, despite their being certified as secure. Sometimes the certification criteria have been inadequate, and sometimes the certification process has been subverted. Not only do these failures affect the owners of the system, but when evidence of certification comes up in court the impact can be much wider.
There is a variety of approaches to certification, ranging from the extremely generic (such as Common Criteria) to the highly specific (such as EMV), but all are (at least partially) descendants of a report by Willis H. Ware – “Security Controls for Computer Systems”. Much can be learned from this report, particularly the rationale for why certification systems are set up the way they are. The differences between how Ware envisaged certification and how certification is now performed are also informative, whether those differences are for good or for ill.
Along with Mike Bond and Ross Anderson, I have written an article for the “Lost Treasures” edition of IEEE Security & Privacy, in which we discuss what the Ware report can teach us about how today’s certification systems work and how they should work. In particular, we explore how the failure to follow the recommendations of the Ware report can explain why flaws in certified banking systems were not detected earlier. Our article, “How Certification Systems Fail: Lessons from the Ware Report”, is available open-access in the version submitted to the IEEE. The edited version, as it appears in the print edition (IEEE Security & Privacy, volume 10, issue 6, pages 40–44, Nov–Dec 2012, DOI:10.1109/MSP.2012.89), is only available to IEEE subscribers.
Today the UK Information Commissioner’s Office levied a record £250k fine against Sony over their 2011 PlayStation Network breach, in which 77 million passwords were stolen. Sony stated that they hashed the passwords, but provided no details. I was hoping that investigators would reveal what hash algorithm Sony used, and in particular whether they salted and iterated the hash. Unfortunately, the ICO’s report failed to provide any such details:
The Commissioner is aware that the data controller made some efforts to protect account passwords, however the data controller failed to ensure that the Network Platform service provider kept up with technical developments. Therefore the means used would not, at the time of the attack, be deemed appropriate, given the technical resources available to the data controller.
Given how often I see password implementations use a single iteration of MD5 with no salt, I’d consider that the most likely interpretation. It’s inexcusable, though, for a 12-page report written at public expense to omit such basic technical details. As I said at the time of the Sony breach, it’s important to update breach notification laws to require that password hashing details be disclosed in full. It makes a difference for users affected by the breach, and it might help motivate companies to get these basic security mechanics right.
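To make the contrast concrete, here is a minimal sketch of the difference between the weak scheme and a salted, iterated one. PBKDF2 from the Python standard library stands in for “salted and iterated”; Sony’s actual scheme is unknown, so everything below is illustrative:

```python
import hashlib
import hmac
import os

def weak_hash(password):
    # The scheme often seen in breaches: one iteration of MD5, no salt.
    # Identical passwords produce identical hashes, so one precomputed
    # table cracks every "123456" in the database at once.
    return hashlib.md5(password.encode()).hexdigest()

def strong_hash(password, salt=None, iterations=100_000):
    # Salted, iterated scheme: PBKDF2-HMAC-SHA256 from the standard library.
    # A per-user random salt defeats precomputed tables, and the iteration
    # count makes each attacker guess ~100,000 times more expensive.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest, iterations

def verify(password, salt, digest, iterations):
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

With the weak scheme, two users who both chose “123456” get identical hashes; with the salted scheme, their stored values differ, and each must be attacked separately.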