Category Archives: Security economics

Social-science angles of security

Moore's Law won't kill passwords

Computers are getting exponentially faster, yet the human brain is constant! Surely password crackers will eventually beat human memory…

I’ve heard this fallacy repeated enough times, usually soon after the latest advance in password-cracking hardware hits the news, that I’d like to debunk it definitively. Password cracking is certainly getting faster. In my thesis I charted 20 years of improvements in password cracking and found roughly a 1,000× increase in the number of guesses per second per unit cost that could be achieved, almost exactly a Moore’s Law-style doubling every two years. The good news, though, is that password hash functions can (and should) co-evolve to get proportionately costlier to evaluate over time. This is a classic arms race, and keeping pace simply requires regularly increasing the number of iterations in a password hash. We can even improve against password cracking over time by using memory-bound functions, because memory speeds aren’t increasing nearly as quickly and are harder to parallelise. The scrypt() key derivation function is a good implementation of a memory-bound password hash, and every high-security application should be using it or something similar.
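As a concrete illustration, here is a minimal sketch of memory-bound password hashing with scrypt, using Python’s standard-library binding. The cost parameters are illustrative assumptions and should be tuned (and periodically raised) to match your own hardware budget.

```python
# Minimal sketch: memory-bound password hashing with scrypt.
# Parameter values are illustrative; tune them for your hardware and
# increase them over time to keep pace with attackers.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(
        password.encode(),
        salt=salt,
        n=2**14,   # CPU/memory cost; raise over time to track hardware
        r=8,       # block size; memory use is roughly 128 * r * n bytes
        p=1,       # parallelisation factor
        dklen=32,
    )
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
```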

The downside of this arms race is that password hashing will never get any cheaper to deploy (even in inflation-adjusted terms). Hashing a password must remain as slow and costly in real terms 20 years from now as it is today, or security will be lower. Moore’s Law will never reduce the expense of running an authentication system, because security depends on this expense. It also needs to be a non-negligible expense: achieving any real security requires that password verification take on the order of hundreds of milliseconds or even whole seconds. Unfortunately this hasn’t been the experience of the past 20 years. MD5 was introduced over 20 years ago and is still the most common implementation I see in the wild, yet it has gone from being relatively expensive to evaluate to extremely cheap. Moore’s Law has indeed broken MD5 as a password hash, and no serious application should still use it. Human memory isn’t any more of a problem today than it used to be, though. The problem is that we’ve chosen to let password verification become too cheap.
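One way to keep verification a non-negligible expense is to calibrate the work factor against a wall-clock budget rather than hard-coding it, and to re-run the calibration as hardware improves. A rough sketch (the 250 ms target and the choice of PBKDF2 here are assumptions for illustration only):

```python
# Sketch: pick an iteration count empirically so that verification costs
# roughly a fixed amount of real time on the deployment hardware.
import hashlib
import os
import time

def calibrate_pbkdf2(target_seconds: float = 0.25) -> int:
    salt = os.urandom(16)
    iterations = 10_000
    while True:
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"benchmark", salt, iterations)
        elapsed = time.perf_counter() - start
        if elapsed >= target_seconds:
            return iterations
        iterations *= 2   # double until we hit the time budget

print("iterations for ~250 ms on this machine:", calibrate_pbkdf2())
```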

Yet more banking industry censorship

Yesterday, banking security vendor Thales sent this DMCA takedown request to John Young who runs the excellent Cryptome archive. Thales want him to remove an equipment manual that has been online since 2003 and which was valuable raw material in research we did on API security.

Banks use hardware security modules (HSMs) to manage the cryptographic keys and PINs used to authenticate bank card transactions. These were long thought to be secure. But their application programming interfaces (APIs) had become unmanageably complex, and in the early 2000s Mike Bond, Jolyon Clulow and I found that by sending sequences of commands to the machine that its designers hadn’t anticipated, it was often possible to break the device spectacularly. This became a thriving field of security research.
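To give a flavour of what “sequences of commands the designers hadn’t anticipated” means, here is a toy model loosely inspired by the published decimalisation-table attacks on PIN verification. It is not a real HSM interface; every name and the simplified PIN scheme are hypothetical, but it shows how letting the caller choose the decimalisation table leaks which digits a PIN contains in about ten API calls.

```python
# Toy model only: a PIN-verification command that (wrongly) lets the caller
# supply the decimalisation table, in the spirit of published API attacks.
calls = 0

def verify_pin_api(true_pin: str, trial_pin: str, dectab: list[str]) -> bool:
    """Hypothetical stand-in for an HSM PIN-verification command."""
    global calls
    calls += 1
    mapped = "".join(dectab[int(d)] for d in true_pin)
    return mapped == trial_pin

def digits_present(true_pin: str) -> list[int]:
    found = []
    for d in range(10):
        # Table mapping digit d to '1' and every other digit to '0'.
        dectab = ["1" if i == d else "0" for i in range(10)]
        # If '0000' no longer verifies, digit d must occur in the PIN.
        if not verify_pin_api(true_pin, "0000", dectab):
            found.append(d)
    return found

print(digits_present("4921"), "learned in", calls, "API calls")  # [1, 2, 4, 9] in 10 calls
```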

But while API security has been a goldmine for security researchers, it’s been an embarrassment for the industry, in which Thales is one of two dominant players. Hence the attempt to close down our mine. As you’d expect, the smaller firms in the industry, such as Utimaco, would prefer HSM APIs to be open (indeed, Utimaco sent two senior people to a Dagstuhl workshop on APIs that we held a couple of months ago). Even more ironically, Thales’s HSM business used to be the Cambridge startup nCipher, which helped our research by giving us samples of their competitors’ products to break.

If this case ever comes to court, the judge might perhaps consider the Lexmark case. Lexmark sued Static Control Components (SCC) for DMCA infringement in order to curtail competition. The court found this abusive and threw out the case. I am not a lawyer, and John Young must clearly take advice. However this particular case of internet censorship serves no public interest (as with previous attempts by the banking industry to censor security research).

Authentication is machine learning

Last week, I gave a talk at the Center for Information Technology Policy at Princeton. My goal was to expand my usual research talk on passwords with broader predictions about where authentication is going. From the reaction and discussion afterwards one point I made stood out: authenticating humans is becoming a machine learning problem.

Problems with passwords are well-documented. They’re easy to guess, they can be sniffed in transit, stolen by malware, phished or leaked. This has led to loads of academic research seeking to replace passwords with something, anything, that fixes these “obvious” problems. There’s also a smaller sub-field of papers attempting to explain why passwords have survived. We’ve made the point well that network economics heavily favor passwords as the incumbent, but underestimated how effectively the risks of passwords can be managed in practice by good machine learning.

From my brief time at Google, my internship at Yahoo!, and conversations with other companies doing web authentication at scale, I’ve observed that as authentication systems develop they gradually merge with other abuse-fighting systems dealing with various forms of spam (email, account creation, link, etc.) and phishing. Authentication eventually loses its binary nature and becomes a fuzzy classification problem. Continue reading Authentication is machine learning
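As a sketch of what that fuzzy classification might look like in practice, here is a toy login risk scorer. The signals, weights and thresholds are purely illustrative assumptions, not any real company’s system; a production classifier would be trained on labelled abuse data rather than hand-tuned.

```python
# Toy sketch: authentication as risk scoring rather than a binary check.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    password_correct: bool
    known_device: bool       # device/browser previously seen on this account
    ip_reputation: float     # 0.0 (clean) .. 1.0 (known abusive)
    geo_velocity_kmh: float  # implied travel speed since the last login

def risk_score(a: LoginAttempt) -> float:
    """Higher means more likely to be an account takeover."""
    score = 0.0
    if not a.password_correct:
        score += 0.6
    if not a.known_device:
        score += 0.2
    score += 0.3 * a.ip_reputation
    if a.geo_velocity_kmh > 900:   # faster than a commercial flight
        score += 0.3
    return min(score, 1.0)

def decide(a: LoginAttempt) -> str:
    s = risk_score(a)
    if s < 0.3:
        return "allow"
    if s < 0.6:
        return "challenge"   # e.g. ask for a second factor
    return "block"

print(decide(LoginAttempt(True, False, 0.8, 1200.0)))  # -> block
```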

Who will screen the screeners?

Last time I flew through Luton airport it was a Sunday morning, and I went up to screening with a copy of the Sunday Times in my hand; it’s non-metallic after all. The guard by the portal asked me to put it in the tray with my bag and jacket, and I did so. But when the tray came out, the newspaper wasn’t there. I approached the guard and complained. He tried to dismiss me, but I was politely insistent. He spoke to the lady sitting at the screen; she picked up something with a guilty sideways look at me, and a few seconds later my paper came down the rollers. As I left the screening area, there were two women police constables, and I wondered whether I should report the attempted theft of a newspaper. As my flight was leaving in less than an hour, I walked on by. But who will screen the screeners?

This morning I once more flew through Luton, and I started to suspect it wouldn’t be the airport’s management. This time the guard took exception to the size of the clear plastic bag holding my toothpaste, mouthwash and deodorant, showing me with glee that it was half a centimetre wider than the official outline on a card he had right to hand. I should mention that I was using a Sainsbury’s freezer bag, a standard item in our kitchen which we’ve used for travel for years. No matter; the guard gleefully ordered me to buy an approved one for a pound from a slot machine placed conveniently beside the belt. (And we thought Ryanair’s threat to charge us a pound to use the loo was just a marketing gimmick.) But what sort of signal do you give to low-wage security staff if the airport merely sees security as an excuse to shake down the public? And after I got through to the lounge and tried to go online, I found that the old Openzone service (which charged by the minute) is no longer on offer; instead Luton Airport now demands five pounds for an hour’s access. So I’m writing this blog post from Amsterdam, and next time I’ll probably fly from Stansted.

Perhaps one of these days I’ll write a paper on “Why Security Usability is Hard”. Meanwhile, if anyone reading this is near Amsterdam on Monday, may I recommend the Amsterdam Privacy Conference? Many interesting people will be talking about the ways in which governments bother us. (I’m talking about how the UK government is trying to nobble the Data Protection Regulation in order to undermine health privacy.)

The Perils of Smart Metering

Alex Henney and I have decided to publish a paper on smart metering that we prepared in February for the Cabinet Office and for ministers. DECC is running a smart metering project that is supposed to save energy by replacing all Britain’s gas and electricity meters with computerised ones by 2019, and to cost only £11bn. Yet the meters will be controlled by the utilities, whose interest is to maximise sales volumes, so there is no realistic prospect that the meters will save energy. What’s more, smart metering already exhibits all the classic symptoms of a failed public-sector IT project.

The paper we release today describes how, when Ed Miliband was Secretary of State, DECC cooked the books to make the project appear economically worthwhile. It then avoided the control procedures that are mandatory for large IT procurements by pretending it was not an IT project but an engineering project. We have already written on the security economics of smart meters, their technical security, the privacy aspects and why the project is failing.

We managed to secure a Cabinet Office review of the project which came up with a red traffic light – a recommendation that the project be abandoned. However DECC dug its heels in and the project appears to be going ahead. Hey, we did our best. The failure should be evident in time for the next election; just remember, you read it here first.

Password cracking, part II: when does password cracking matter?

Yesterday, I took a critical look at the difficulty of interpreting progress in password cracking. Today I’ll make a broader argument that even if we had good data to evaluate cracking efficiency, recent progress isn’t a major threat to the vast majority of web passwords. Efficient and powerful cracking tools are useful in some targeted attack scenarios, but they just don’t change the economics of industrial-scale attacks against web accounts. The basic mechanics of web passwords mean highly-efficient cracking doesn’t offer much benefit in untargeted attacks. Continue reading Password cracking, part II: when does password cracking matter?

Password cracking, part I: how much has cracking improved?

Password cracking has returned to the news, with a thorough Ars Technica article on the increasing potency of cracking tools and the third Crack Me If You Can contest at this year’s DEFCON. Taking a critical view, I’ll argue that it’s not clear exactly how much password cracking is improving and that the cracking community could do a much better job of measuring progress.

Password cracking can be evaluated on two nearly independent axes: power (the ability to check a large number of guesses quickly and cheaply using optimized software, GPUs, FPGAs, and so on) and efficiency (the ability to generate large lists of candidate passwords accurately ranked by real-world likelihood using sophisticated models). It’s relatively simple to measure cracking power in units of hashes evaluated per second or hashes per second per unit cost. There are details to account for, like the complexity of the hash being evaluated, but this problem is generally similar to cryptographic brute force against unknown (random) keys and power is generally increasing exponentially in tune with Moore’s law. The move to hardware-based cracking has enabled well-documented orders-of-magnitude speedups.

Cracking efficiency, by contrast, is rarely measured well. Useful data points, some of which I curated in my PhD thesis, consist of the number of guesses made against a given set of password hashes and the proportion of hashes which were cracked as a result. Ideally many such points should be reported, allowing us to plot a curve showing the marginal returns as additional guessing effort is expended. Unfortunately results are often stated in terms of the total number of hashes cracked (here are some examples). Sometimes the runtime of a cracking tool is reported, which is an improvement but conflates efficiency with power. Continue reading Password cracking, part I: how much has cracking improved?
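To illustrate the kind of reporting argued for above, the sketch below turns a handful of (guesses, fraction cracked) data points into a crude marginal-returns summary. The numbers are invented purely to show the shape of such a curve, not real cracking results.

```python
# Sketch: report cumulative success versus guessing effort, and the marginal
# return per extra order of magnitude of guesses. Data points are made up.
import math

# (guesses per account, cumulative fraction of accounts cracked by then)
curve = [
    (1e3, 0.08),
    (1e6, 0.25),
    (1e9, 0.45),
    (1e12, 0.60),
]

prev_guesses, prev_cracked = 1.0, 0.0
for guesses, cracked in curve:
    step = math.log10(guesses) - math.log10(prev_guesses)
    marginal = (cracked - prev_cracked) / step
    print(f"up to 10^{math.log10(guesses):.0f} guesses: {cracked:.0%} cracked, "
          f"{marginal:.1%} per order of magnitude in this interval")
    prev_guesses, prev_cracked = guesses, cracked
```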

The rush to 'anonymised' data

The Guardian has published an op-ed I wrote on the risks of anonymised medical records along with a news article on CPRD, a system that will make our medical records available for researchers from next month, albeit with the names and addresses removed.

The government has been pushing for this since last year, having appointed medical datamining enthusiast Tim Kelsey as its “transparency tsar”. There have been two consultations on how records should be anonymised, and how effective anonymisation can be; you can read our responses here and here (see also FIPR blog here). Anonymisation has long been known to be harder than it looks (and the Royal Society recently issued an authoritative report which said so). But getting civil servants to listen to X when the Prime Minister has declared for Not-X is harder still!

Despite promises that the anonymity mechanisms would be open for public scrutiny, CPRD refused a Freedom of Information request to disclose them, apparently fearing that disclosure would damage security. Yet research papers written using CPRD data will surely have to disclose how the data were manipulated. So the security mechanisms will become known, and yet researchers will become careless. I fear we can expect a lot more incidents like this one.

Call for Papers: eCrime Researchers Summit

I have the privilege of serving as co-chair of the program committee for the Anti-Phishing Working Group’s eCrime Researchers Summit, to be held October 23-24 in Las Croabas, Puerto Rico. This has long been one of my favorite conferences to participate in, because it is held in conjunction with the APWG general meeting. This ensures that participation in the conference is evenly split between academia and industry, which leads to in-depth discussions of the latest trends in online crime. It also provides a unique audience for academic researchers to discuss their work, which can foster future collaboration.

Some of my joint work with Richard Clayton appearing at this conference has been discussed on this blog, from measuring the effectiveness of website take-down in fighting phishing to uncovering the frequent lack of cooperation between security firms. As you will see from the call for papers, the conference seeks submissions on all aspects of online crime, not just phishing. Paper submissions are due August 3, so get to work so we can meet up in Puerto Rico this October!
Continue reading Call for Papers: eCrime Researchers Summit