I’m at the launch in London of the new campaign for medical privacy, MedConfidential.org. Sam Smith and I will be liveblogging the day’s events in comments below. For background, see here, here, here and here. Most of today’s audience are from groups for whom medical privacy is particularly important, such as charities dealing with rape victims, substance abuse, sexual health and child welfare.
Last weekend, my wife and I were in Milton Keynes where we bought a cradle as a present for our new granddaughter. They had only the demo model in the shop, but sold us one to pick up from their store in Cambridge. So yesterday I went into John Lewis with the receipt, to be told by the official that as I couldn’t show the card with which the purchase was made, they needed photo-id. I told him that along with over a million others I’d resisted the previous government’s ID card proposals, the last government had lost the election, and I didn’t carry ID on principle. The response was the usual nonsense: that I should have read the terms and conditions (but when I studied the receipt later it said nothing about ID) and that he was just doing his job (but John Lewis prides itself on being employee-owned, so in theory at least he is a partner in the firm). I won’t be shopping there again anytime soon.
We get harassed more and more by security theatre, by snooping and by bullying. What’s the best way to push back? Why can businesses be so pointlessly annoying?
Perhaps John Lewis are consciously pro-Labour given their history as a co-op; but it’s not prudent to advertise that in a three-way marginal like Cambridge, let alone in the leafy southern suburbs where they make most of their money. Or perhaps it’s just incompetence. When my wife phoned later to complain, the customer services people apologised and said we should have been told when we bought the thing that we’d need to show ID. She offered to post the cradle to our daughter, but then rang back later to say they’d lost the order and would need our paperwork. So that’s another 30-mile round-trip to their depot. But if they’re incompetent, why should I trust them enough to buy their food?
I invite the chairman, Charlie Mayfield, to explain by means of a follow-up to this post whether this was policy or cockup. Will he continue to demand photo-id even from customers who have a principled objection? Will he tell us who in the firm imposed this policy, and show us the training material that was prepared to ensure that counter staff would explain it properly to customers?
Today, the UK Cards Association (UKCA) published their summary of bank fraud for 2012. This provides an important insight into banking fraud, and the level of detail which the UK banks provide is very welcome. The UKCA figures go back to 2007, but I’ve collected the figures from previous releases going back to 2004. This data reveals some interesting trends, especially related to the deployment of new security technologies.
The overall fraud losses in 2012 are £475.3m, up 11% from the 2011 level, but for the purposes of comparison it is helpful to exclude the losses from phone banking, as these figures have only been available since 2009 (and are only 2.7% of the total). If we look at the resulting trend in total fraud (£462.7m) we can see that while there was an increase in 2012, it is from a starting position of a 10-year low in 2011, so isn’t a reason to panic. We are still far from the peak in 2008 of £704.3m.
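The arithmetic behind those figures can be checked in a few lines (all values in £m, taken from the text above):

```python
# Quick sanity check of the UKCA figures quoted above (all in £m).
total_2012 = 475.3
ex_phone_2012 = 462.7

# The excluded phone-banking losses and their share of the total.
phone_banking_2012 = total_2012 - ex_phone_2012   # about £12.6m
share = phone_banking_2012 / total_2012           # about 2.7% of the total

# The 11% year-on-year rise implies a 2011 figure of roughly £428m.
implied_2011 = total_2012 / 1.11
```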
[You may have noticed the miniaturised graph in line with the text above, which is an example of a sparkline; I'll be using these throughout this post to show trends in the data more clearly. Each graph shows the change in a single value over the 2004–2012 period, and is followed by the figure for 2012 in red.]
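As a toy illustration of the idea (not the graphs used in this post, which are proper images), a sparkline can even be rendered in plain text by mapping a series onto Unicode block characters:

```python
# Toy text sparkline: scale a series of numbers onto eight
# Unicode block characters of increasing height.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1   # avoid division by zero for flat series
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))]
                   for v in values)
```

For example, `sparkline([704.3, 567.5, 440.0, 462.7])` draws the shape of a declining-then-rising series in four characters.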
However, there is a large omission in the UKCA data – it records losses of the banks and merchants but not customers. If a customer is a victim of fraud, but the bank refuses to refund them (because the bank claims the customer was negligent), we won’t see it in these figures – as confirmed by a UKCA representative in an interview on BBC Radio Merseyside on 2007-02-19. We don’t know how much is missing from the fraud statistics as a result, but from the Financial Services Authority statistics we can see that there were 483,666 complaints in the first half of 2012 against firms regarding disputed charges, so the sums in question could be substantial. But despite this limitation, the statistics from the UKCA are valuable, especially as they give a breakdown of fraud by type.
Last week I spoke at a conference on digital health at the Scottish parliament. The talks are now online; my talk is here, and my slides here. At present, medical records in Scotland are organised differently under its fourteen different health boards, with wide variations in privacy, safety and functionality. Needless to say, officials in Edinburgh see this as an opportunity for centralisation; they want to follow the sad story in England. The political dynamic north of the border is much the same: officials want to grab all the data, GPs are not keen, but the public’s not paying attention.
If you’re interested in these issues, save April 24th in your diary; there will be a big medical privacy event in London organised by a number of NGOs.
Yesterday the European Commission launched its new draft directive on cybersecurity, on a webpage which omits a negative Opinion of the Impact Assessment Board. This directive had already been widely leaked, and I wrote about it in an EDRi Enditorial. There are at least two serious problems with it.
The first is that it will oblige Member States to set up single “competent authorities” for technical expertise, international liaison, security breach reporting and CERT functions. In the UK, these functions are distributed across GCHQ, MI5/CPNI, the new NCA, the ICO and various private-sector bodies. And the UK is relatively centralised; in Germany, for example, there’s a constitutional separation between police and intelligence functions. Centralisation will not just damage the separation of powers essential in any democracy, but will also harm operational effectiveness. Most of our critical infrastructure is in the hands of foreign companies, from O2 through EDF to Google; moving cybersecurity cooperation from the current loose association of private-public partnerships to a centralised, classified system will make it harder for most of them to play.
Second, whereas security-breach notification laws in the USA require firms to report breaches to affected citizens, articles 14 and 15 instead require breach notification to the “competent authority”. Notification requirements can be changed later by order (14.5-7) and the “competent authorities” only have to tell us if they determine it’s in the “public interest” (14.4). So instead of empowering us, it will empower the spooks. But that’s not all. Member States must “ensure that the competent authorities have the power to require market operators and public administrations to: (a) provide information needed to assess the security of their networks and information systems, including documented security policies; and (b) undergo a security audit carried out by a qualified independent body or national authority and make the results thereof available to the competent authority” (15.2). States must also “ensure that competent authorities have the power to issue binding instructions to market operators and public administrations” (15.3). Now, as Parliament has just criticised the Home Office’s attempt to take powers to order firms like Google and Facebook to disclose user data by means of the Communications Data Bill, I hope everyone will think long and hard about the implications of passing this Directive as it stands. It’s yet another unfortunate step towards the militarisation of cyberspace.
I’m delighted to announce that my book Security Engineering – A Guide to Building Dependable Distributed Systems is now available free online in its entirety. You may download any or all of the chapters from the book’s web page.
I’ve long been an advocate of open science and open publishing; all my scientific papers go online and I no longer even referee for publications that sit behind a paywall. But some people think books are different. I don’t agree.
The first edition of my book was also put online four years after publication by agreement with the publishers. That took some argument but we found that sales actually increased; for serious books, free online copies and paid-for paper copies can be complements, not substitutes. We are all grateful to authors like David MacKay for pioneering this. So when I wrote the second edition I agreed with Wiley that we’d treat it the same way, and here it is. Enjoy!
Computers are getting exponentially faster, yet the human brain is constant! Surely password crackers will eventually beat human memory…
I’ve heard this fallacy repeated enough times, usually soon after the latest advance in hardware for password cracking hits the news, that I’d like to definitively debunk it. Password cracking is certainly getting faster. In my thesis I charted 20 years of password cracking improvements and found an increase of about 1,000 in the number of guesses per second per unit cost that could be achieved, almost exactly a Moore’s Law-style doubling every two years. The good news though is that password hash functions can (and should) co-evolve to get proportionately costlier to evaluate over time. This is a classic arms race, and keeping pace simply requires regularly increasing the number of iterations in a password hash. We can even improve against password cracking over time using memory-bound functions, because memory speeds aren’t increasing nearly as quickly and are harder to parallelise. The scrypt() key derivation function is a good implementation of a memory-bound password hash and every high security application should be using it or something similar.
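As a minimal sketch of what this looks like in practice, Python’s standard library exposes a scrypt binding (`hashlib.scrypt`, available when Python is built against OpenSSL 1.1 or later). The cost parameters below are illustrative, not a recommendation; a real deployment should tune `n` upwards as hardware improves:

```python
import hashlib
import hmac
import os

# Illustrative scrypt parameters: n is the memory/CPU cost factor
# (must be a power of two), r the block size, p the parallelism.
# These values cost roughly 16 MiB of memory per hash.
N, R, P, DKLEN = 2**14, 8, 1, 32

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; a fresh random salt per user."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=N, r=R, p=P, dklen=DKLEN)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=N, r=R, p=P, dklen=DKLEN)
    return hmac.compare_digest(candidate, digest)
```

The point of the memory-bound design is that each guess forces the attacker to commit both computation and a non-trivial amount of RAM, which blunts the advantage of cheap parallel hardware.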
The downside of this arms race is that password hashing will never get any cheaper to deploy (even in inflation-adjusted terms). Hashing a password must remain as slow and costly in real terms 20 years from now, or security will be lower. Moore’s Law will never reduce the expense of running an authentication system, because security depends on this expense. It also needs to be a non-negligible expense: achieving any real security requires that password verification take on the order of hundreds of milliseconds, or even whole seconds. Unfortunately this hasn’t been the experience of the past 20 years. MD5 was launched over 20 years ago and is still the most common implementation I see in the wild, though it’s gone from being relatively expensive to evaluate to extremely cheap. Moore’s Law has indeed broken MD5 as a password hash, and no serious application should still use it. Human memory isn’t more of a problem today than it used to be, though. The problem is that we’ve chosen to let password verification become too cheap.
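The back-of-envelope consequence of the doubling-every-two-years figure can be written down directly; the base iteration count here is a hypothetical placeholder, not a recommendation:

```python
# If cracking throughput per unit cost doubles every two years, an
# iteration count chosen in a base year must double on the same
# schedule to hold the attacker's real cost per guess constant.
# The base figure of 10,000 is purely illustrative.
def iterations(year, base=10_000, base_year=2012):
    return base * 2 ** ((year - base_year) // 2)
```

So a system tuned in 2012 should be running roughly a thousand times as many iterations twenty years later, which is exactly why the cost of verification, in real terms, never goes down.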
Yesterday, banking security vendor Thales sent this DMCA takedown request to John Young who runs the excellent Cryptome archive. Thales want him to remove an equipment manual that has been online since 2003 and which was valuable raw material in research we did on API security.
Banks use hardware security modules (HSMs) to manage the cryptographic keys and PINs used to authenticate bank card transactions. These used to be thought to be secure. But their application programming interfaces (APIs) had become unmanageably complex, and in the early 2000s Mike Bond, Jolyon Clulow and I found that by sending sequences of commands to the machine that its designers hadn’t anticipated, it was often possible to break the device spectacularly. This became a thriving field of security research.
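The flavour of these attacks can be shown with a deliberately simplified toy, which is not any real HSM API: keys stored outside the device are wrapped under a master key, but the wrapping does not bind the key’s type, so a command intended for harmless data keys will happily operate on a PIN key:

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

KM = os.urandom(8)  # the toy HSM's master key; never leaves the device

# HSM command: encrypt a message under a wrapped *data* key. The flaw
# is that nothing distinguishes a wrapped data key from a wrapped PIN
# key, so the command trusts whatever wrapped key it is handed.
def encrypt_with_data_key(wrapped_key, message):
    key = xor(wrapped_key, KM)   # unwrap (toy wrapping is XOR with KM)
    return xor(message, key)     # toy "encryption" by XOR

# Attack: feed a wrapped PIN key into the data-encryption command with
# an all-zero message; the "ciphertext" that comes back is the clear
# PIN key, which should never leave the device unwrapped.
pin_key = os.urandom(8)
wrapped_pin_key = xor(pin_key, KM)   # as stored in the bank's database

recovered = encrypt_with_data_key(wrapped_pin_key, bytes(8))
```

Real HSM APIs use proper block ciphers and dozens of commands, but the essence is the same: each command is individually reasonable, and it is an unanticipated composition of commands that leaks the key.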
But while API security has been a goldmine for security researchers, it’s been an embarrassment for the industry, in which Thales is one of two dominant players. Hence the attempt to close down our mine. As you’d expect, the smaller firms in the industry, such as Utimaco, would prefer HSM APIs to be open (indeed, Utimaco sent two senior people to a Dagstuhl workshop on APIs that we held a couple of months ago). Even more ironically, Thales’s HSM business used to be the Cambridge startup nCipher, which helped our research by giving us samples of their competitors’ products to break.
If this case ever comes to court, the judge might perhaps consider the Lexmark case. Lexmark sued Static Control Components (SCC) for DMCA infringement in order to curtail competition. The court found this abusive and threw out the case. I am not a lawyer, and John Young must clearly take advice. However this particular case of internet censorship serves no public interest (as with previous attempts by the banking industry to censor security research).
Last week, I gave a talk at the Center for Information Technology Policy at Princeton. My goal was to expand my usual research talk on passwords with broader predictions about where authentication is going. From the reaction and discussion afterwards one point I made stood out: authenticating humans is becoming a machine learning problem.
Problems with passwords are well-documented. They’re easy to guess, they can be sniffed in transit, stolen by malware, phished or leaked. This has led to loads of academic research seeking to replace passwords with something, anything, that fixes these “obvious” problems. There’s also a smaller sub-field of papers attempting to explain why passwords have survived. We’ve made the point well that network economics heavily favor passwords as the incumbent, but underestimated how effectively the risks of passwords can be managed in practice by good machine learning.
From my brief time at Google, my internship at Yahoo!, and conversations with other companies doing web authentication at scale, I’ve observed that as authentication systems develop they gradually merge with other abuse-fighting systems dealing with various forms of spam (email, account creation, link, etc.) and phishing. Authentication eventually loses its binary nature and becomes a fuzzy classification problem.
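A minimal sketch of what “fuzzy classification” means here, with feature names, weights and thresholds all invented for illustration: instead of a binary password check, each login attempt is scored from several signals, and the score drives a graded response:

```python
import math

# Hypothetical login signals and hand-set weights; a real system
# would learn these from labelled abuse data.
WEIGHTS = {"new_device": 2.0, "geo_mismatch": 1.5,
           "failed_attempts": 0.8, "password_correct": -3.0}
BIAS = -1.0

def risk(signals):
    """Logistic score in [0, 1]: probability the attempt is abusive."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in signals.items())
    return 1 / (1 + math.exp(-z))

def decide(signals):
    """Graded outcome instead of a binary accept/reject."""
    p = risk(signals)
    if p < 0.3:
        return "allow"
    if p < 0.7:
        return "challenge"   # e.g. demand a second factor
    return "block"
```

Note that a correct password is just one (strongly weighted) feature: a right password from a new device in an unusual location can still trigger a challenge, and a wrong one from a known device need not lock the account.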