
Scrambling for Safety 2012

On the first of April, the Sunday Times carried a story that the Home Secretary planned to expand the scope of the Regulation of Investigatory Powers Act. Some thought this was an April Fool, but no: security minister James Brokenshire confirmed the next day that it was for real. This led to much media coverage; here is a more detailed historical timeline.

There have been eight previous Scrambling for Safety conferences organised while the UK government was considering the RIP Act and the regulations that followed it. The goal is to bring together different stakeholders interested in surveillance policy for an open exchange of views. The conference is open to the public, but you have to register here.

Here is the programme and the event website.

Risk and privacy in payment systems

I’ve just given a talk on Risk and privacy implications of consumer payment innovation (slides) at the Federal Reserve Bank’s payments conference. There are many more attendees this year; who’d have believed that payment systems would ever become sexy? Yet there’s a lot of innovation, and regulators are starting to wonder. Payment systems now involve many non-bank players, from insiders like First Data, FICO and Experian to service firms like PayPal and Google. I describe a number of competitive developments and argue that although fraud may increase, so will welfare, so there’s no reason to panic. For now, bank supervisors should work on collecting better fraud statistics, so that if there ever is a crisis the response can be well-informed.

Social authentication – harder than it looks!

This is the title of a paper we’ll be presenting next week at the Financial Crypto conference (slides). There is also coverage in the New Scientist.

Facebook has a social authentication mechanism where you may be asked to recognise some of your friends from photos as part of the login process. We analysed this and found it vulnerable to guessing by your friends, and also to modern face-recognition systems. Most people want privacy only from those close to them: if you’re having an affair, you don’t want your partner to find out, but you don’t care if someone in Mongolia learns about it. And if your partner does find out and becomes your ex, you don’t want them to be able to cause havoc on your account. Celebrities are similar, except that everyone is their friend (and potentially their enemy).

Second, if someone outside your circle of friends is mounting a targeted attack on you, then by friending your friends they can get enough access to your social circle to collect photos, which they might feed into image-recognition software, or even match manually, to pass the test.
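
To make that second attack concrete, here is a minimal sketch of the photo-matching step. The gallery/ directory of scraped friend photos, the challenge.jpg filename, the file-naming scheme and the choice of the open-source face_recognition library are all my own illustrative assumptions, not the tooling used in the paper.

```python
# Hypothetical attacker pipeline: match a login-challenge photo against a
# gallery of photos scraped from the victim's friends. Paths, naming scheme
# and library choice are illustrative assumptions, not the paper's setup.
import os
import face_recognition  # pip install face_recognition

# Build a gallery: one encoding per detected face, labelled by friend name
# (assuming files are named like "alice_1.jpg", "alice_2.jpg", "bob_1.jpg").
gallery = []
for fname in os.listdir("gallery"):
    img = face_recognition.load_image_file(os.path.join("gallery", fname))
    for enc in face_recognition.face_encodings(img):
        gallery.append((fname.split("_")[0], enc))

# For each face in the challenge photo, vote over the matching gallery faces.
challenge = face_recognition.load_image_file("challenge.jpg")
for enc in face_recognition.face_encodings(challenge):
    matches = face_recognition.compare_faces([e for _, e in gallery], enc)
    names = [name for (name, _), hit in zip(gallery, matches) if hit]
    print("best guess:", max(set(names), key=names.count) if names else "no match")
```

Even a naive majority vote like this can be enough: the attacker only needs to name the friend shown, not to be certain of the match.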

Metrics for dynamic networks

There’s a huge literature on the properties of static or slowly-changing social networks, such as the pattern of friends on Facebook, but almost nothing on networks that change rapidly. Yet many networks of real interest are highly dynamic. Think of the patterns of human contact that can spread infectious disease; you might be breathed on by a hundred people a day in meetings, on public transport and even in the street. If we were facing a flu pandemic, how could we measure whether the greater spreading risk came from highly connected nodes in the static contact network, or from highly mobile ones? Should we close the schools, or the Tube?

Today we unveiled a paper which proposes new metrics for centrality in dynamic networks. We wondered how we might measure networks where mobility is of the essence, such as the spread of plague in a medieval society where most people stay in their villages and infection is carried between them by a small number of merchants. We found we can model the effects of mobility on interaction by embedding a dynamic network in a larger time-ordered graph to which we can apply standard graph theory tools. This leads to dynamic definitions of centrality that extend the static definitions in a natural way and yet give us a much better handle on things than aggregate statistics can. I spoke about this work today at a local workshop on social networking, and the paper’s been accepted for Physical Review E. It’s joint work with Hyoungshick Kim.
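
Here is a minimal sketch of the time-ordered-graph construction as described above (my own toy code, not the paper’s): each node is copied once per timestep, “waiting” edges carry a node from one timestep to the next, and a contact at time t becomes a pair of directed edges into timestep t+1. Standard static tools then apply directly.

```python
# Toy time-ordered graph: nodes are (person, timestep) pairs; infection can
# wait in place or cross a contact edge, always moving forward in time.
import networkx as nx

def time_ordered_graph(people, contacts, T):
    """contacts maps timestep t to a list of (u, v) pairs in contact at t."""
    G = nx.DiGraph()
    for t in range(T):
        for v in people:
            G.add_edge((v, t), (v, t + 1))       # waiting in place
        for u, v in contacts.get(t, []):
            G.add_edge((u, t), (v, t + 1))       # contagion u -> v
            G.add_edge((v, t), (u, t + 1))       # contagion v -> u
    return G

# Three villagers and a mobile merchant "b" who visits a at t=0, then c at t=1:
G = time_ordered_graph(["a", "b", "c"], {0: [("a", "b")], 1: [("b", "c")]}, T=3)

# Ordinary graph metrics on the expanded graph now respect time-ordering;
# e.g. betweenness of b's copies reflects its role as a mobile carrier.
print(nx.betweenness_centrality(G))
```

The dynamic centrality definitions in the paper are richer than this, but the embedding is the key move: it turns a temporal question into a static one.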

Bankers’ Christmas present

Every Christmas we give our friends in the banking industry a wee present. Sometimes it’s the responsible disclosure of a vulnerability, which we publish the following February: 2007’s was PED certification, 2008’s was CAP, and in 2009 we told the banking industry about the No-PIN attack. This year too we have some goodies in the hamper: watch our papers at Financial Crypto 2012.

In other years, we’ve had arguments with the bankers’ PR wallahs. In 2010, for example, their trade association tried to censor the thesis of one of our students. That saga continues; Britain’s bankers tried once more to threaten us, so we told them once more to go away. We have other conversations in progress with bankers, most of them thankfully a bit more constructive.

This year’s Christmas present is different: it’s a tale with a happy ending. Eve Russell was a fraud victim whom Barclays initially blamed for her misfortune, as so often happens, and the Financial Ombudsman Service initially found for the bank, as it routinely does. Yet this was clearly not right; after many lawyers’ letters, two hearings at the ombudsman, two articles in The Times and a TV appearance on Rip-off Britain, Eve won. This is the first complete case file to be made public since the ombudsman came under the Freedom of Information Act; by showing how the system works, it may be useful to fraud victims in the future.

(At Eve’s request, I removed the correspondence and case papers from my website on 5 Oct 2015. Eve was getting lots of calls and letters from other fraud victims and was finally getting weary. I have left just the article in the Times.)

Privacy event on Wednesday

I will be talking in London on Wednesday at a workshop on Anonymity, Privacy, and Open Data about the difficulty of anonymising medical records properly. I’ll be on a panel with Kieron O’Hara who wrote a report on open data for the Cabinet Office earlier this year, and a spokesman from the ICO.

This will be the first public event on the technology and policy issues surrounding anonymisation since yesterday’s announcement that the government will give wide access to anonymous versions of our medical records. I’ve written extensively on the subject: for an overview, see my book chapter which explores the security of medical systems in general from p 282 and the particular problems of using “anonymous” records in research from p 298. For the full Monty, start here.

Anonymity is hard enough if the data controller is capable and motivated to try hard. In the case of the NHS, anonymisation has always been perfunctory; the default is to remove patients’ names and addresses but leave their postcodes and dates of birth. This makes it easy to re-identify about 99% of patients (the exceptions are mostly twins, soldiers, students and prisoners). And since I wrote that book chapter, the predicted problems have come to pass; for example, the NHS lost a laptop containing over eight million patients’ records.
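
A back-of-envelope calculation (my own illustration, using assumed round numbers: roughly 40 residents per postcode and birth dates spread over about 80 years) shows why that pair of fields is so nearly a unique key:

```python
# If ~40 people share a postcode and dates of birth span ~80 years (~29,200
# days), the chance that nobody else in your postcode shares your exact DOB:
residents_per_postcode = 40        # assumed round number
days = 80 * 365                    # plausible range of birth dates

p_unique = (1 - 1 / days) ** (residents_per_postcode - 1)
print(f"P(postcode + DOB is unique) ~ {p_unique:.4f}")   # ~0.9987
```

So postcode plus date of birth identifies almost everyone outright, and the residual exceptions are dominated by special populations like twins, or barracks and halls of residence where many people share one postcode.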

Here we go again

The Sunday media have been trailing a speech by David Cameron tomorrow about giving us online access to our medical records and our kids’ school records, and making anonymised versions of them widely available to researchers, companies and others. Here is coverage in the BBC, the Mail and the Telegraph; there’s also a Cabinet Office paper. The measures are supported by the CEO of Glaxo and opposed by many NGOs.

If the Government is going to “ensure all NHS patients can access their personal GP records online by the end of this Parliament”, they’ll have to compel the thousands of GPs who still keep patient records on their own machines to transfer them to centrally-hosted facilities. Once there, the systems will be maintained by people who have to please the Secretary of State rather than GPs, and will thus become progressively less useful. This won’t just waste doctors’ time but will have real consequences for patient safety and the quality of care.

We’ve seen this repeatedly over the lifetime of NPfIT and its predecessor the NHS IM&T strategy. Officials who can’t develop working systems become envious of systems created by doctors; they wrest control, and the deterioration starts.

It’s astounding that a Conservative prime minister could get the idea that nationalising something is the best way to make it work better. It’s also astonishing that a Government containing Liberals who believe in human rights, the rule of law and privacy should support the centralisation of medical records a mere two years after the Joseph Rowntree Reform Trust, a Liberal charity, produced the Database State report which explained how the centralisation of medical records (and for that matter children’s records) destroys privacy and contravenes human-rights law.

The coming debate will no doubt be vigorous and will draw on many aspects of information security, from the dreadful security usability (and safety usability) of centrally-purchased NHS systems, through the real hazards of coerced access by vulnerable patients, to the fact that anonymisation doesn’t really work. There’s much more here.

Of course the new centralisation effort will probably fail, just like the last two; health informatics is a hard problem, and even Google gave up. But our privacy should not depend on the government being incompetent at wrongdoing. It should refrain from wrongdoing in the first place.

Trusted Computing 2.1

We’re steadily learning more about the latest Trusted Computing proposals. People have started to grok that building signed boot into UEFI will extend Microsoft’s power over the markets for AV software and other security tools that install around boot time, while ‘Metro’ style apps (i.e. web/tablet/html5 style stuff) could be limited to distribution via the MS app store. Even if users can opt out, most of them won’t. That’s a lot of firms suddenly finding Steve Ballmer’s boot on their jugular.

We’ve also been starting to think about the issues of law enforcement access that arose during the crypto wars and that have come to light again with certification authorities (CAs). These issues are even more wicked with trusted boot. If the Turkish government compelled Microsoft to include the Tubitak key in Windows so that their intelligence services could do man-in-the-middle attacks on Kurdish MPs’ Gmail, then I expect they’ll also tell Microsoft to issue them a UEFI key to authenticate their keylogger malware. Hey, I removed the Tubitak key from my browser, but how do I identify and block all foreign governments’ UEFI keys?
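
For what it’s worth, you can at least enumerate the keys your own machine trusts. Here is a sketch, mine rather than any standard tool, that parses the Secure Boot ‘db’ variable on a Linux box with efivarfs mounted and prints each X.509 signer; the hard part, as I say above, is knowing which of those signers to distrust. Error handling is omitted, and it typically needs root.

```python
# List the X.509 certificates enrolled in the UEFI Secure Boot 'db'.
# Assumes Linux with efivarfs at the standard mount point; run as root.
import struct
from cryptography import x509          # pip install cryptography

DB_PATH = ("/sys/firmware/efi/efivars/"
           "db-d719b2cb-3d3a-4596-a3bc-dad00e67656f")
# EFI_CERT_X509_GUID in its on-disk (mixed-endian) byte order:
CERT_X509_GUID = bytes.fromhex("a159c0a5e494a74a87b5ab155c2bf072")

data = open(DB_PATH, "rb").read()[4:]  # skip the 4-byte attribute header
off = 0
while off < len(data):                 # walk the EFI_SIGNATURE_LIST entries
    sig_type = data[off:off + 16]
    list_size, hdr_size, sig_size = struct.unpack_from("<III", data, off + 16)
    body = data[off + 28 + hdr_size : off + list_size]
    for i in range(0, len(body), sig_size):
        cert_der = body[i + 16 : i + sig_size]   # skip 16-byte SignatureOwner
        if sig_type == CERT_X509_GUID:
            cert = x509.load_der_x509_certificate(cert_der)
            print(cert.subject.rfc4514_string())
    off += list_size
```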

Our Greek colleagues are already a bit cheesed off with Wall Street. How happy will they be if in future they won’t be able to install the security software of their choice on their PCs, but the Turkish secret police will?

PhD Studentship in Mobile Payments

We’ve been offered funding for a PhD student to work at the University of Cambridge Computer Laboratory on the security of mobile payments, starting in April 2012.

The objective is to explore how we can make mobile payment systems dependable despite the presence of malware. Research topics include the design of next-generation secure element hardware, trustworthy user interfaces, and mechanisms to detect and recover from compromise. Relevant skills include Android, payment protocols, human-computer interaction, hardware and software security, and cryptography.

As the sponsor wishes to start the project by April, we strongly encourage applications by 28 October 2011 (although candidates who do not need a visa to work in the UK might conceivably apply as late as early December). Enquiries should be directed to Ross Anderson.

Trusted Computing 2.0

There seems to be an attempt to revive the “Trusted Computing” agenda. The vehicle this time is UEFI, the firmware standard that is replacing the PC BIOS. Proposed changes to the UEFI spec would enable (in fact require) next-generation PC firmware to boot only an image signed by a keychain rooted in keys built into the PC. I hear that Microsoft (and others) are pushing for this to be mandatory, so that it cannot be disabled by the user, and that it would be required for OS badging. There are some technical details here and here, and comment here.
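
To spell out what “a keychain rooted in keys built into the PC” means in practice, here is a toy model of the proposed boot policy as I understand it; the signer names are made up, and real firmware of course verifies cryptographic signature chains rather than comparing strings:

```python
# Toy model of mandatory signed boot: the firmware runs an image only if it
# is signed by a key in the factory-installed database, and the user has no
# way to enrol keys of their own.
from dataclasses import dataclass

@dataclass
class Image:
    name: str
    signed_by: str                     # stand-in for a real signature chain

FACTORY_DB = frozenset({"Microsoft Windows Production CA"})  # set at manufacture

def firmware_boot(image, db=FACTORY_DB):
    if image.signed_by in db:          # real firmware verifies an RSA chain
        print(f"booting {image.name}")
    else:
        print(f"refusing {image.name}: signer not in db")

firmware_boot(Image("windows8.efi", "Microsoft Windows Production CA"))
firmware_boot(Image("linux.efi", "Fedora Secure Boot CA"))   # won't boot
```

With no user-controlled way to add another signer to the database, the second image can never run; that is exactly the lock-in worry.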

These issues last arose in 2003, when we fought back with the Trusted Computing FAQ and economic analysis. That initiative petered out after widespread opposition. This time round the effects could be even worse, as “unauthorised” operating systems like Linux and FreeBSD just won’t run at all. (On an old-fashioned Trusted Computing platform you could at least run Linux – it just couldn’t get at the keys for Windows Media Player.)

The extension of Microsoft’s OS monopoly to hardware would be a disaster, with increased lock-in, decreased consumer choice and lack of space to innovate. It is clearly unlawful and must not succeed.