All posts by Ross Anderson

Phone hacking, technology and policy

Britain’s phone hacking scandal touches many issues of interest to security engineers. Murdoch’s gumshoes listened to celebs’ voicemail messages using default PINs. They used false-pretext phone calls – blagging – to get banking and medical records.

We’ve known for years that private eyes blag vast amounts of information (2001 book, from page 167; 2006 ICO Report). Centralisation and the ‘Cloud’ are making things worse. Twenty years ago, your bank records were available only in your branch; now any teller at any branch can look them up. The dozen people who work at your doctor’s surgery used to be able to keep a secret, but can the 840,000 staff with a logon to our national health databases?
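The scaling argument can be made concrete with a toy calculation. The per-person leak probability below is an assumption for illustration only; the point is simply how fast the risk compounds as the number of logons grows from a dozen to hundreds of thousands:

```python
# Toy model: if each person with access independently has a small
# probability p of leaking a record, the chance that at least one
# does grows rapidly with the number of people holding a logon.
def p_any_leak(staff: int, p_individual: float = 1e-4) -> float:
    """Probability that at least one of `staff` insiders leaks."""
    return 1 - (1 - p_individual) ** staff

for n in (12, 840_000):
    print(n, round(p_any_leak(n), 4))
# With these (assumed) numbers: a dozen staff give roughly a 0.1%
# chance of a leak; 840,000 staff make a leak a near-certainty.
```

The exact figure for p doesn’t matter; any plausible value gives the same qualitative answer once the headcount reaches six figures.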

Attempts to fix the problem using the criminal justice system have failed. When blagging was made illegal in 1995, the street price of medical records actually fell from £200 to £150! Parliament increased the penalty from fines to jail in 2006 but media pressure scared ministers off implementing this law.

Our Database State report argued that the wholesale centralisation of medical and other records was unsafe and illegal; and the NHS Population Demographics Service database appears to be the main one used to find celebs’ ex-directory numbers. First, celebs can opt out, but most of them are unaware of PDS abuse, so they don’t. Second, you can become a celeb instantly if you are a victim of crime, war or terror. Third, even if you do opt out, the gumshoes can just bribe policemen, who have access to just about everything.

In future, security engineers must pay much more attention to compartmentation (even the Pentagon is now starting to get it), and we must be much more wary about the risk that law-enforcement access to information will be abused.

Can we Fix Federated Authentication?

My paper Can We Fix the Security Economics of Federated Authentication? asks how we can deal with a world in which your mobile phone contains your credit cards, your driving license and even your car key. What happens when it gets stolen or infected?

Using one service to authenticate the users of another is an old dream but a terrible tar-pit. Recently it has become a game of pass-the-parcel: your newspaper authenticates you via your social networking site, which wants you to recover lost passwords by email, while your email provider wants to use your mobile phone and your phone company depends on your email account. The certification authorities on which online trust relies are open to coercion by governments – which would like us to use ID cards but are hopeless at making systems work. No-one even wants to answer the phone to help out a customer in distress. But as we move to a world of mobile wallets, in which your phone contains your credit cards and even your driving license, we’ll need a sound foundation that’s resilient to fraud and error, and usable by everyone. Where might this foundation be? I argue that there could be a quite surprising answer.
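The pass-the-parcel problem is really a cycle in a dependency graph. Here is a minimal sketch; the service names and recovery edges are illustrative, not a survey of real providers, but they mirror the chain described above, and the point is that a thief who captures any account on the loop can walk round it:

```python
# Toy model of account-recovery dependencies: each service recovers
# lost credentials via another. A cycle means there is no root of
# trust anywhere on the chain.
recovers_via = {
    "newspaper": "social_network",
    "social_network": "email",
    "email": "phone",
    "phone": "email",   # the phone account is managed via the email address
}

def find_cycle(start: str) -> list[str]:
    """Follow recovery links from `start` until a service repeats (or the
    chain ends at a genuine root of trust, in which case return [])."""
    seen, chain = set(), []
    node = start
    while node is not None and node not in seen:
        seen.add(node)
        chain.append(node)
        node = recovers_via.get(node)
    if node is None:
        return []
    return chain[chain.index(node):] + [node]

print(find_cycle("newspaper"))   # the email/phone loop at the end of the chain
```

A sound foundation would be one from which every such chain terminates: some node whose recovery does not depend on any other service.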

The paper describes some work I did on sabbatical at Google and will appear next week at the Security Protocols Workshop.

A Merry Christmas to all Bankers

The bankers’ trade association has written to Cambridge University asking for the MPhil thesis of one of our research students, Omar Choudary, to be taken offline. They complain it contains too much detail of our No-PIN attack on Chip-and-PIN and thus “breaches the boundary of responsible disclosure”; they also complain about Omar’s post on the subject to this blog.

Needless to say, we’re not very impressed by this, and I made this clear in my response to the bankers. (I am embarrassed to see I accidentally left Mike Bond off the list of authors of the No-PIN vulnerability. Sorry, Mike!) There is one piece of Christmas cheer, though: the No-PIN attack no longer works against Barclays’ cards at a Barclays merchant. So at least they’ve started to fix the bug – even if it’s taken them a year. We’ll check and report on other banks later.

The bankers also fret that “future research, which may potentially be more damaging, may also be published in this level of detail”. Indeed. Omar is one of my coauthors on a new Chip-and-PIN paper that’s been accepted for Financial Cryptography 2011. So here is our Christmas present to the bankers: now you’ll all have to come to the conference to hear what we have to say!

Wikileaks, security research and policy

A number of media organisations have been asking us about Wikileaks. Fifteen years ago we kicked off the study of censorship-resistant systems, which inspired the peer-to-peer movement; we help maintain Tor, which provides the anonymous communications infrastructure for Wikileaks; and we’ve a longstanding interest in information policy.

I have written before about governments’ love of building large databases of sensitive data to which hundreds of thousands of people need access to do their jobs – such as the NHS spine, which will give over 800,000 people access to our health records. The media are now making the link. Whether sensitive data are about health or about diplomacy, the only way forward is compartmentation. Medical records should be kept in the surgery or hospital where the care is given; and while an intelligence analyst dealing with Iraq might have access to cables on Iraq, Iran and Saudi Arabia, he should have no routine access to stuff on Korea or Brazil.
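The compartmentation principle above can be sketched in a few lines. This is a minimal illustration, with hypothetical compartment labels and users, not a description of any deployed system: access depends on holding a label for the specific compartment, not on the user’s rank or general clearance.

```python
# Minimal sketch of compartmented (need-to-know) access control:
# a request succeeds only if the record's compartment is among the
# labels the user has been explicitly granted.
from dataclasses import dataclass

@dataclass(frozen=True)
class Clearance:
    user: str
    compartments: frozenset[str]

def may_read(clearance: Clearance, record_compartment: str) -> bool:
    """No label, no access, whoever you are."""
    return record_compartment in clearance.compartments

# An analyst on the Iraq desk sees material on Iraq and its neighbours...
analyst = Clearance("iraq_desk", frozenset({"IRAQ", "IRAN", "SAUDI"}))
print(may_read(analyst, "IRAN"))    # True: within the analyst's remit
print(may_read(analyst, "KOREA"))   # False: no routine access
```

The same pattern applies to medical records: the label would be the surgery or hospital where care is given, rather than a country desk.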

So much for the security engineering; now to policy. No-one questions the US government’s right to try one of its soldiers for leaking the cables, or the right of the press to publish them now that they’re leaked. But why is Wikileaks treated as the leaker, rather than as a publisher?

This leads me to two related questions. First, does a next-generation censorship-resistant system need a more resilient technical platform, or more respectable institutions? And second, if technological change causes respectable old-media organisations such as the Guardian and the New York Times to go bust and be replaced by blogs, what happens to freedom of the press, and indeed to freedom of speech?

Resumption of the crypto wars?

The Telegraph and Guardian reported yesterday that the government plans to install deep packet inspection kit at ISPs, a move considered and then apparently rejected by the previous government (our Database State report last year found their Interception Modernisation Programme to be almost certainly illegal). An article in the New York Times on comparable FBI/NSA proposals makes you wonder whether policy is being coordinated between Britain and America.

In each case, the police and spooks argue that they used to have easy access to traffic data — records of who called whom and when — so now that people communicate using Facebook, Gmail and Second Life rather than by phone, they should be allowed to harvest data about who wrote on your wall, what emails appeared in your Gmail inbox, and who stood next to you in Second Life. This data will be collected on everybody and will be available to investigators who want to map suspects’ social networks. A lot of people opposed this, including the Lib Dems, who promised to “end the storage of internet and email records without good reason” and wrote this into the Coalition Agreement. The Coalition seems set to reinterpret this now that the media are distracted by the spending review.

We were round this track before with the debate over key escrow in the 1990s. Back then, colleagues and I wrote of the risks and costs of insisting that communications services be wiretap-ready. One lesson from the period was that the agencies clung to their old business model rather than embracing all the new opportunities; they tried to remain Bletchley Park in the age of Google. Yet GCHQ people I’ve heard recently are still stuck in the pre-computer age, having learned nothing and forgotten nothing. As for the police, they can’t really cope with the forensics for the PCs, phones and other devices that fall into their hands anyway. This doesn’t bode well, either for civil liberties or for national security.

Research, public opinion and patient consent

Paul Thornton has brought to my attention some research that the Department of Health published quietly at the end of 2009 (and which undermines Departmental policy).

It is the Summary of Responses to the Consultation on the Additional Uses of Patient Data undertaken following campaigning by doctors, NGOs and others about the Secondary Uses Service (SUS). SUS keeps summaries of patient care episodes, some of them anonymised, and makes them available for secondary uses; the system’s advocates talk about research, although it is heavily used for health service management, clinical audit, answering parliamentary questions and so on. Most patients are quite unaware that tens of thousands of officials have access to their records, and the Database State report we wrote last year concluded that SUS is almost certainly illegal. (Human-rights and data-protection law require that sensitive data, including health data, be shared only with the consent of the data subject or using tightly restricted statutory powers whose effects are predictable to data subjects.)

The Department of Health’s consultation shows that most people oppose the secondary use of their health records without consent. The executive summary tries to spin this a bit, but the data from the report’s body show that public opinion remains settled on the issue, as it has been since the first opinion survey in 1997. We do see some signs of increasing sophistication: now a quarter of patients don’t believe that data can be anonymised completely, versus 15% who say that sharing is “OK if anonymised” (p 23). And the views of medical researchers and NHS administrators are completely different; see for example p 41. The size of this gap suggests the issue won’t get resolved any time soon – perhaps not until there’s an Alder-Hey-type incident that causes a public outcry and forces a reform of SUS.
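The quarter of patients who doubt anonymisation have a point. Here is a sketch with entirely made-up records showing the basic problem: strip the names, and combinations of quasi-identifiers such as date of birth and postcode district can still single a patient out.

```python
# Fabricated 'anonymised' records: no names, but (dob, postcode)
# pairs that occur only once still pick out a single individual.
from collections import Counter

records = [
    {"dob": "1970-03-12", "postcode": "CB3", "diagnosis": "asthma"},
    {"dob": "1970-03-12", "postcode": "CB1", "diagnosis": "diabetes"},
    {"dob": "1985-07-01", "postcode": "CB3", "diagnosis": "flu"},
    {"dob": "1985-07-01", "postcode": "CB3", "diagnosis": "asthma"},
]

counts = Counter((r["dob"], r["postcode"]) for r in records)
unique = [r for r in records if counts[(r["dob"], r["postcode"])] == 1]
print(len(unique))   # 2 of the 4 'anonymised' rows are unique on (dob, postcode)
```

Anyone who knows a target’s birthday and postcode district — an employer, a neighbour, a blagger — can re-identify those unique rows and read off the diagnosis.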

Who controls the off switch?

We have a new paper on the strategic vulnerability created by the plan to replace Britain’s 47 million meters with smart meters that can be turned off remotely. The energy companies are demanding this facility so that customers who don’t pay their bills can be switched to prepayment tariffs without the hassle of getting court orders against them. If the Government buys this argument – and I’m not convinced it should – then the off switch had better be closely guarded. You don’t want the nation’s enemies to be able to turn off the lights remotely, and eliminating that risk could just conceivably be a little bit more complicated than you might at first think. (This paper follows on from our earlier paper On the security economics of electricity metering at WEIS 2010.)
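One obvious safeguard — a sketch only, and far from the whole answer in the paper — is that a meter should refuse a remote “off” command unless it is properly authenticated and not a replay. The toy below uses a MAC under a key shared with the head end; a real deployment would need to think about key provisioning, quorum authorisation and much else.

```python
# Sketch: a meter acts on a DISCONNECT command only if it carries a
# valid MAC over (nonce, command), and the nonce has not been seen
# before (to stop an attacker replaying a captured command).
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)   # in practice, provisioned per meter

def sign_command(cmd: bytes, nonce: bytes) -> bytes:
    return hmac.new(KEY, nonce + cmd, hashlib.sha256).digest()

def meter_accepts(cmd: bytes, nonce: bytes, tag: bytes, seen: set) -> bool:
    """Verify the MAC in constant time and reject replayed nonces."""
    if nonce in seen:
        return False
    if not hmac.compare_digest(tag, sign_command(cmd, nonce)):
        return False
    seen.add(nonce)
    return True

seen: set = set()
nonce = secrets.token_bytes(16)
tag = sign_command(b"DISCONNECT", nonce)
print(meter_accepts(b"DISCONNECT", nonce, tag, seen))   # genuine command
print(meter_accepts(b"DISCONNECT", nonce, tag, seen))   # replay is refused
```

Even this much moves the problem rather than solving it: whoever holds the signing keys now holds the nation’s off switch, which is exactly the strategic vulnerability the paper is about.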