Plaintext Password Reminders

There was a public outcry, followed by the ICO “making enquiries”, when Troy Hunt published a post about Tesco’s plaintext password reminders exactly a month ago.

I wanted to use the reference in a text I was writing last week, when someone asked me about Companies House online accounts. At that moment I said to myself: wait a second, Companies House sends plaintext reminders as well. How strange. I sent a link to a short post of mine to ComputerWorld. They in turn managed to get a statement from Companies House that includes:

“… although it is [Companies House] certified to the ISO 27001 standard and adheres to the government’s Security Policy Framework, it will carry out a review of its systems in order to establish whether there is a threat to companies’ confidential information.”

The Perils of Smart Metering

Alex Henney and I have decided to publish a paper on smart metering that we prepared in February for the Cabinet Office and for ministers. DECC is running a smart metering project that is supposed to save energy by replacing all Britain’s gas and electricity meters with computerised ones by 2019, and to cost only £11bn. Yet the meters will be controlled by the utilities, whose interest is to maximise sales volumes, so there is no realistic prospect that the meters will save energy. What’s more, smart metering already exhibits all the classic symptoms of a failed public-sector IT project.

The paper we release today describes how, when Ed Miliband was Secretary of State, DECC cooked the books to make the project appear economically worthwhile. It then avoided the control procedures that are mandatory for large IT procurements by pretending it was not an IT project but an engineering project. We have already written on the security economics of smart meters, their technical security, the privacy aspects and why the project is failing.

We managed to secure a Cabinet Office review of the project, which came up with a red traffic light – a recommendation that the project be abandoned. However, DECC dug in its heels and the project appears to be going ahead. Hey, we did our best. The failure should be evident in time for the next election; just remember, you read it here first.

Chip and Skim: cloning EMV cards with the pre-play attack

Last November, on the Eurostar back from Paris, something struck me as I looked at the logs of ATM withdrawals disputed by Alex Gambin, a customer of HSBC in Malta. Comparing four grainy log pages on a tiny phone screen, I had to scroll away from the transaction data to see the page numbers, so I couldn’t take in the big picture in one go. Instead I distinguished the pages using the EMV Unpredictable Number field – a 32-bit field that’s supposed to be unique to each transaction. I soon got muddled up… it turned out that the unpredictable numbers… well… weren’t. All four shared 17 bits in common, and the remaining 15 looked at first glance like a counter. The numbers are tabulated as follows:

F1246E04
F1241354
F1244328
F1247348
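The shared structure is easy to check mechanically. A quick sketch (Python, with the four logged UNs hard-coded) that counts the leading bit positions on which all four values agree:

```python
# the four EMV Unpredictable Numbers from the disputed-transaction logs
uns = [0xF1246E04, 0xF1241354, 0xF1244328, 0xF1247348]

def common_prefix_bits(values, width=32):
    # count leading bit positions on which every value agrees
    count = 0
    for i in range(width - 1, -1, -1):
        if len({(v >> i) & 1 for v in values}) > 1:
            break
        count += 1
    return count

print(common_prefix_bits(uns))    # 17 shared leading bits
print([u & 0x7FFF for u in uns])  # the low 15 bits, which vary
```

Running this confirms the top 17 bits are identical across all four transactions, leaving only 15 bits that actually change.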

And with that the ball started rolling on an exciting direction of research that’s kept us busy the last nine months. You see, an EMV payment card authenticates itself with a MAC of transaction data, for which the freshly generated component is the unpredictable number (UN). If you can predict it, you can record everything you need from momentary access to a chip card to play it back and impersonate the card at a future date and location. You can as good as clone the chip. It’s called a “pre-play” attack. Just like most vulnerabilities we find these days, some in industry already knew about it but covered it up; we have indications the crooks know about this too, and we believe it explains a good portion of the unsolved phantom withdrawal cases reported to us for which we had until recently no explanation.
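A minimal sketch of the pre-play idea (all names hypothetical, with an HMAC standing in for the card’s real MAC, which EMV computes with 3DES session keys): during momentary access to the card, harvest a cryptogram for each UN a weak terminal is predicted to generate, then replay the matching one later:

```python
import hashlib
import hmac

def toy_card_mac(key, un, txn_data):
    # stand-in for the card's transaction cryptogram over UN + transaction data
    return hmac.new(key, un.to_bytes(4, "big") + txn_data, hashlib.sha256).digest()

def harvest(key, predicted_uns, txn_data):
    # with brief access to the card, record one cryptogram per predicted UN
    return {un: toy_card_mac(key, un, txn_data) for un in predicted_uns}

# attacker predicts the terminal's weak UN sequence: fixed prefix plus counter
predicted = [0xF1240000 | n for n in range(0, 0x8000, 0x1000)]
recorded = harvest(b"card-secret", predicted, b"GBP 200 ATM withdrawal")

# later, at the terminal: if its UN was predicted, the replayed MAC verifies
terminal_un = 0xF1243000
assert recorded[terminal_un] == toy_card_mac(
    b"card-secret", terminal_un, b"GBP 200 ATM withdrawal")
```

The attack needs no knowledge of the card’s key; it only needs the terminal’s “unpredictable” number to land in the predicted set.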

Mike Bond, Omar Choudary, Steven J. Murdoch, Sergei Skorobogatov, and Ross Anderson wrote a paper on the research, and Steven is presenting our work as keynote speaker at Cryptographic Hardware and Embedded Systems (CHES) 2012, in Leuven, Belgium. We discovered that the significance of these numbers went far beyond this one case.


Password cracking, part II: when does password cracking matter?

Yesterday, I took a critical look at the difficulty of interpreting progress in password cracking. Today I’ll make a broader argument that, even if we had good data for evaluating cracking efficiency, recent progress isn’t a major threat to the vast majority of web passwords. Efficient and powerful cracking tools are useful in some targeted attack scenarios, but they just don’t change the economics of industrial-scale attacks against web accounts. The basic mechanics of web passwords mean highly efficient cracking doesn’t offer much benefit in untargeted attacks.

Password cracking, part I: how much has cracking improved?

Password cracking has returned to the news, with a thorough Ars Technica article on the increasing potency of cracking tools and the third Crack Me If You Can contest at this year’s DEFCON. Taking a critical view, I’ll argue that it’s not clear exactly how much password cracking is improving and that the cracking community could do a much better job of measuring progress.

Password cracking can be evaluated on two nearly independent axes: power (the ability to check a large number of guesses quickly and cheaply using optimized software, GPUs, FPGAs, and so on) and efficiency (the ability to generate large lists of candidate passwords accurately ranked by real-world likelihood using sophisticated models). It’s relatively simple to measure cracking power in units of hashes evaluated per second or hashes per second per unit cost. There are details to account for, like the complexity of the hash being evaluated, but this problem is generally similar to cryptographic brute force against unknown (random) keys and power is generally increasing exponentially in tune with Moore’s law. The move to hardware-based cracking has enabled well-documented orders-of-magnitude speedups.
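Measuring power in these units is straightforward in principle. A rough sketch, with Python’s hashlib standing in for an optimised cracker (so the absolute numbers will be orders of magnitude below a GPU rig’s):

```python
import hashlib
import time

def hashes_per_second(n=100_000):
    # time n MD5 evaluations of distinct candidate passwords
    start = time.perf_counter()
    for i in range(n):
        hashlib.md5(b"candidate%d" % i).digest()
    return n / (time.perf_counter() - start)

rate = hashes_per_second()
print(f"{rate:,.0f} MD5 hashes/second")
```

Dividing the measured rate by hardware cost gives the hashes-per-second-per-unit-cost figure; swapping in a slower hash like bcrypt shows how hash complexity scales the attacker’s bill.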

Cracking efficiency, by contrast, is rarely measured well. Useful data points, some of which I curated in my PhD thesis, consist of the number of guesses made against a given set of password hashes and the proportion of hashes which were cracked as a result. Ideally many such points should be reported, allowing us to plot a curve showing the marginal returns as additional guessing effort is expended. Unfortunately results are often stated in terms of the total number of hashes cracked (here are some examples). Sometimes the runtime of a cracking tool is reported, which is an improvement but conflates efficiency with power.
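As an illustration of the kind of reporting I’m advocating (the data points below are invented), a guesses-versus-cracked curve and its marginal returns might be tabulated like this:

```python
import math

# hypothetical evaluation: (cumulative guesses per account, fraction cracked)
curve = [(10**3, 0.05), (10**6, 0.20), (10**9, 0.45), (10**12, 0.60)]

for (g0, c0), (g1, c1) in zip(curve, curve[1:]):
    decades = math.log10(g1) - math.log10(g0)
    marginal = (c1 - c0) / decades
    print(f"{g0:.0e} -> {g1:.0e} guesses: +{c1 - c0:.0%} cracked "
          f"({marginal:.1%} per order of magnitude)")
```

A full curve like this separates the model’s guess-ranking quality from the raw speed of the rig that evaluated the guesses.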

The rush to 'anonymised' data

The Guardian has published an op-ed I wrote on the risks of anonymised medical records, along with a news article on CPRD, a system that will make our medical records available to researchers from next month, albeit with the names and addresses removed.

The government has been pushing for this since last year, having appointed medical datamining enthusiast Tim Kelsey as its “transparency tsar”. There have been two consultations on how records should be anonymised, and how effective this could be; you can read our responses here and here (see also FIPR blog here). Anonymisation has long been known to be harder than it looks (and the Royal Society recently issued an authoritative report which said so). But getting civil servants to listen to X when the Prime Minister has declared for Not-X is harder still!

Despite promises that the anonymity mechanisms would be open to public scrutiny, CPRD refused a Freedom of Information request to disclose them, apparently fearing that disclosure would damage security. Yet research papers written using CPRD data will surely have to disclose how the data were manipulated. So the security mechanisms will become known anyway, and sooner or later researchers will be careless. I fear we can expect a lot more incidents like this one.

Analysis of FileVault 2 (Apple's full disk encryption)

With the launch of Mac OS X 10.7 (Lion), Apple has introduced a volume encryption mechanism known as FileVault 2.

During the past year Joachim Metz, Felix Grobert and I have been analysing this encryption mechanism. We have identified most of the components in FileVault 2’s architecture and we have also built an open source tool that can read volumes encrypted with FileVault 2. This tool can be useful to forensic investigators (who know the encryption password or recovery token) who need to recover some files from an encrypted volume but cannot trust or load the Mac OS that was used to encrypt the data. We have also made an analysis of the security of FileVault 2.

A few weeks ago we made this paper public on eprint, describing our work. The tool to recover data from encrypted volumes is available here.

Source Ports in ARF Reports

Long-time readers may recall my posts from Jan 2010 about the need for security logging to include source port numbers — because of the growth of ‘Carrier Grade NAT’ (CGN) systems that share one IPv4 address between hundreds, possibly thousands, of users. These systems are widely used by the mobile companies, and the ‘exhaustion’ of IPv4 address space will lead to many other ISPs deploying them.

A key impact of CGNs is that if you want to trace back “who did that”, you may need to have recorded not only an IP address and an accurate timestamp, but also the source port of the connection. Failure to record the source port will mean that an ISP using CGN cannot do any tracing, because it will be unable to distinguish between hundreds of possible perpetrators. In June 2011 the IETF published RFC 6302, which gives chapter and verse on this issue and sets out best practice for security logging systems.
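To see why, consider a toy CGN translation table (addresses, ports and subscriber names all illustrative): without the source port, a public IP at a given instant maps back to every subscriber sharing it:

```python
# toy CGN state at one instant: one public IPv4 address, many subscribers,
# distinguishable only by the translated source port
cgn_table = {
    ("203.0.113.7", 40001): "subscriber-A",
    ("203.0.113.7", 40002): "subscriber-B",
    ("203.0.113.7", 40003): "subscriber-C",
}

def trace(public_ip, source_port=None):
    if source_port is None:
        # no port was logged: every subscriber behind the address is a suspect
        return sorted({who for (ip, _), who in cgn_table.items() if ip == public_ip})
    return cgn_table.get((public_ip, source_port))

print(trace("203.0.113.7"))          # three candidate subscribers
print(trace("203.0.113.7", 40002))   # exactly one
```

In a real deployment the table would be keyed by timestamp as well, since port mappings are reused within seconds.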

Earlier this year, at the M3AAWG meeting in San Francisco, I talked with the people who have developed the Abuse Reporting Format (ARF). The idea of ARF is that abuse reports will be in standard format — allowing the use of automation at both sender and receiver. Unfortunately ARF didn’t include a field for the source port….

… but it does now, because RFC 6692 has recently been published. My name is on it, but in reality all of the work on it that mattered was done by Murray Kucherawy who wrote the initial draft, who has tweaked the text to address working group concerns and who has guided it through the complexities of the IETF process. Thanks to Murray, the mechanisms for dealing with abuse have now become just a little bit better.

Online traceability: Who did that?

Consumer Focus have recently published my expert report on the issues that arise when attempting to track down people who are using peer to peer (P2P) systems to share copyright material without appropriate permissions. They have submitted this report to Ofcom who have been consulting on how they should regulate this sort of tracking down when the Digital Economy Act 2010 (DEA) mechanisms that are intended to prevent unlawful file sharing finally start to be implemented, probably sometime in 2014.

The basic idea behind the DEA provisions is that the rights holders (or more usually specialist companies) will join the P2P systems and download files that are being shared unlawfully. Because the current generation of P2P systems fails to provide any real anonymity, the rights holders will learn the IP addresses of the wrongdoers. They will then consult public records at RIPE (and the other Regional Internet Registries) to learn which ISPs were allocated the IP addresses. Those ISPs will then be approached and will be obliged, by the DEA, to consult their records and tell the appropriate account holder that someone using their Internet connection has been misbehaving. There are further provisions for telling the rights holders about repeat offenders, and perhaps even for “technical measures” to disrupt file sharing traffic.
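The chain of lookups above can be sketched as follows (all addresses, allocations and account identifiers invented):

```python
import ipaddress

# RIR allocation records: prefix -> ISP (public data)
rir_allocations = {"198.51.100.0/24": "ExampleNet"}

# the ISP's own logs: (ip, timestamp) -> account holder (private data)
isp_logs = {("198.51.100.23", "2012-09-10T21:14:00Z"): "account-12345"}

def isp_for(ip):
    # which ISP was allocated this address, per the RIR records
    addr = ipaddress.ip_address(ip)
    for prefix, isp in rir_allocations.items():
        if addr in ipaddress.ip_network(prefix):
            return isp
    return None

# rights holder observes an IP on the P2P network at a given time...
observed = ("198.51.100.23", "2012-09-10T21:14:00Z")
print(isp_for(observed[0]))     # ...identifies which ISP to approach...
print(isp_logs.get(observed))   # ...and the ISP answers, if its logs are right
```

Every arrow in this chain is a place where a stale allocation record, a clock error or a mistyped address can point the finger at the wrong household.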

From a technical point of view, the traceability part of the DEA process can (in principle) be made to work in a robust manner. However, there’s a lot of detail to get right in practice, both in recording the data generated by the P2P activity and within the ISPs’ systems — and history shows that mistakes are often made. I have some first-hand experience of this: my report refers to how I helped the police track down a series of traceability mistakes that were made in a 2006 murder case! Hence I spend many pages in my report explaining what can go wrong, and I set out in considerable detail the sort of procedures that I believe Ofcom should insist upon to ensure that mistakes are rare and rapidly detected.

My report also explains the difficulties (in many cases the insuperable difficulties) that the account holder will have in determining the individual who was responsible for the P2P activity. Consumer Focus takes the view that “this makes the proposed appeals process flawed and potentially unfair and we ask Government to rethink this process”. Sadly, there’s been no sign so far that this sort of criticism will derail the DEA juggernaut, although some commentators are starting to wonder if the rights holders will see the process as passing a cost/benefit test.

Call for Papers: Internet Censorship and Control

I am co-editing a special edition of IEEE Internet Computing on Internet Censorship and Control. We are looking for short (up to 5,000 words) articles on the technical, social, and political mechanisms and impacts of Internet censorship and control. We’re soliciting both technical and social science articles, and especially encourage those that combine the two. Appropriate topics include:

  • explorations of how the Internet’s technical, social, and political structures impact its censorship and control;
  • evaluations of how existing technologies and policies affect Internet censorship and control;
  • proposals for new technologies and policies;
  • discussions on how proposed technical, legal, or governance changes to the Internet will impact censorship and control;
  • analysis of techniques, methodologies, and results of monitoring Internet censorship and control; and
  • examinations of trade-offs between control and freedom, and how these sides can be balanced.

Please email the guest editors a brief description of the article you plan to submit by 15 August 2012. For further details, see the full CFP. Please distribute this CFP, and use this printable flyer if you wish.