Monthly Archives: July 2008

PET Award 2008

At last year’s Privacy Enhancing Technologies Symposium (PETS), I presented the paper “Sampled Traffic Analysis by Internet-Exchange-Level Adversaries”, co-authored with Piotr Zieliński. In it, we discussed the risk of traffic analysis at Internet exchanges (IXes). We then showed that, given even a small fraction of the data passing through an IX, it was still possible to track a substantial proportion of anonymous communications. Our results are summarized in a previous blog post and full details are in the paper.

Our paper has now been announced as a runner-up for the Privacy Enhancing Technologies Award. The prize is presented annually for research which makes an outstanding contribution to the field. Microsoft, the sponsor of the award, have further details and summaries of the papers in their press release.

Congratulations to the winners, Arvind Narayanan and Vitaly Shmatikov, for “Robust De-Anonymization of Large Sparse Datasets”; and the other runners-up, Mira Belenkiy, Melissa Chase, C. Chris Erway, John Jannotti, Alptekin Küpçü, Anna Lysyanskaya and Erich Rachlin, for “Making P2P Accountable without Losing Privacy”.

Finland privacy judgment

In a case that will have profound implications, the European Court of Human Rights has issued a judgment against Finland in a medical privacy case.

The complainant was a nurse at a Finnish hospital, and also HIV-positive. Word of her condition spread among colleagues, and her contract was not renewed. The hospital’s access controls were not sufficient to prevent colleagues accessing her record, and its audit trail was not sufficient to determine who had compromised her privacy. The court’s view was that health care staff who are not involved in the care of a patient must be unable to access that patient’s electronic medical record: “What is required in this connection is practical and effective protection to exclude any possibility of unauthorised access occurring in the first place.” (Press coverage here.)

A “practical and effective” protection test in European law will bind engineering, law and policy much more tightly together. And it will have wide consequences. Privacy campaigners, for example, can now argue strongly that the NHS Care Records service is illegal. And what will be the further consequences for the Transformational Government initiative – the “Database State”?

Metrics for security and performance in low-latency anonymity systems

In Tor, and in other similar anonymity systems, clients choose a random sequence of computers (nodes) to route their connections through. The intention is that, unless someone is watching the whole network at the same time, the tracks of each user’s communication will become hidden amongst those of others. Exactly how a client chooses nodes varies from system to system, and is important for security.
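For readers who prefer code, here is a minimal sketch of what path construction looks like, assuming a hypothetical list of relay records and ignoring Tor’s real directory and constraint machinery; the `pick` parameter stands in for whichever selection rule a particular system uses.

```python
import random

PATH_LENGTH = 3  # Tor builds three-hop circuits by default: entry, middle, exit

def build_path(nodes, pick=random.choice):
    """Construct a path by repeatedly applying a node-selection rule,
    skipping relays that are already in the path.

    `nodes` is assumed to be a list of relay records; `pick` is whatever
    selection rule the system uses (uniform random choice by default).
    """
    path = []
    while len(path) < PATH_LENGTH:
        candidate = pick(nodes)
        if candidate not in path:
            path.append(candidate)
    return path
```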

If someone is simultaneously watching a user’s traffic as it enters and leaves the network, it is possible to de-anonymise the communication. As anyone can contribute nodes, this could occur if the first and last nodes for a connection are controlled by the same person. Tor takes some steps to avoid this possibility, e.g. no two computers on the same /16 network may be chosen for the same connection. However, someone with access to several networks could circumvent this measure.
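The /16 rule itself is easy to express in code. Below is a rough illustration (not Tor’s actual implementation; the relay record field name is invented for the example) of rejecting a candidate path in which any two relays share a /16 prefix.

```python
import ipaddress

def same_slash16(addr_a: str, addr_b: str) -> bool:
    """True if two IPv4 addresses fall within the same /16 network."""
    net_a = ipaddress.ip_network(f"{addr_a}/16", strict=False)
    return ipaddress.ip_address(addr_b) in net_a

def violates_slash16_rule(path) -> bool:
    """True if any two relays in the candidate path share a /16 prefix."""
    addrs = [relay["address"] for relay in path]  # "address" field is hypothetical
    return any(
        same_slash16(a, b)
        for i, a in enumerate(addrs)
        for b in addrs[i + 1:]
    )
```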

Not only is route selection critical for security, but it’s also a significant performance factor. Tor nodes vary dramatically in their capacity, mainly due to their network connections. If all nodes were chosen with equal likelihood, the slower ones would cripple the network. This is why Tor weights the selection probability of a node in proportion to its contribution to the network bandwidth.
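A simplified version of bandwidth-weighted selection could look like the following; the `bandwidth` field is a stand-in for whatever capacity estimate the directory publishes, and the real Tor algorithm applies further adjustments that are omitted here.

```python
import random

def choose_node(nodes, weight_by_bandwidth=True):
    """Pick one relay, either uniformly at random or with probability
    proportional to its (assumed) advertised bandwidth."""
    if weight_by_bandwidth:
        weights = [n["bandwidth"] for n in nodes]
        return random.choices(nodes, weights=weights, k=1)[0]
    return random.choice(nodes)
```

This slots into the earlier `build_path` sketch as `build_path(nodes, pick=choose_node)`.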

Because of the dual importance of route selection, there are a number of proposals which offer an alternative to Tor’s bandwidth-weighted algorithm. Later this week at PETS I’ll be presenting my paper, co-authored with Robert N.M. Watson, “Metrics for security and performance in low-latency anonymity systems”. In this paper, we examine several route selection algorithms and evaluate their security and performance.

Intuitively, a route selection algorithm which weights all nodes equally appears the most secure, because an attacker can’t make their node count any more than the others. This intuition has been formalized by two measures: the Gini coefficient and entropy. In fact, the reality is more complex: uniform node selection resists attackers with lots of bandwidth, whereas bandwidth-weighting is better against attackers with lots of nodes (e.g. botnets).
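Both metrics are straightforward to compute from the selection probabilities a given algorithm assigns to each node; a sketch follows (standard textbook formulas, not code from the paper).

```python
import math

def selection_entropy(probs):
    """Shannon entropy (in bits) of a node-selection distribution;
    maximised when every node is equally likely to be chosen."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def gini_coefficient(probs):
    """Gini coefficient of a node-selection distribution: 0 for a perfectly
    uniform distribution, approaching 1 as selection concentrates on a few nodes."""
    sorted_p = sorted(probs)
    n = len(sorted_p)
    cumulative = sum((i + 1) * p for i, p in enumerate(sorted_p))
    return (2 * cumulative) / (n * sum(sorted_p)) - (n + 1) / n
```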

Our paper explores the probability of path compromise of different route selection algorithms, when under attack by a range of different adversaries. We find that none of the proposals are optimal against all adversaries, and so summarizing effective security in terms of a single figure is not feasible. We also model the performance of the schemes and show that bandwidth-weighting offers both low latency and high resistance to attack by bandwidth-constrained adversaries.
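As a rough illustration of the kind of measurement involved (not the paper’s actual methodology), one can estimate path compromise by Monte Carlo simulation: repeatedly build entry/exit pairs with a chosen selection rule and count how often both ends are adversary-controlled.

```python
import random

def estimate_compromise_rate(nodes, is_malicious, pick, trials=100_000):
    """Monte Carlo estimate of the fraction of circuits whose entry and exit
    relays are both adversary-controlled, under a given selection rule `pick`.
    Path constraints such as the /16 rule are ignored for simplicity."""
    compromised = 0
    for _ in range(trials):
        entry, exit_relay = pick(nodes), pick(nodes)
        if is_malicious(entry) and is_malicious(exit_relay):
            compromised += 1
    return compromised / trials
```

Under bandwidth weighting, an adversary contributing a fraction f of the network’s bandwidth compromises roughly f² of circuits, whereas under uniform selection what matters is the fraction of nodes it controls.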

Update (2008-07-25):
The slides (PDF 2.1M) for my presentation are now online.

Personal Internet Security: follow-up report

The House of Lords Science and Technology Committee have just completed a follow-up inquiry into “Personal Internet Security”, and their report is published here. Once again I have acted as their specialist adviser, and once again I’m under no obligation to endorse the Committee’s conclusions — but they have once again produced a useful report with sound conclusions, so I’m very happy to promote it!

Their initial report last summer, which I blogged about at the time, was — almost entirely — rejected by the Government last autumn (blog article here).

The Committee decided that in the light of the Government’s antipathy they would hold a rapid follow-up inquiry to establish whether their conclusions were sound or whether the Government was right to turn them down, and indeed, given the speed of change on the Internet, whether their recommendations were still timely.

The written responses broadly endorsed the Committee’s recommendations, with the main areas of controversy being liability for software vendors, making the banks statutorily responsible for phishing/skimming fraud, and how such fraud should be reported.

There was one oral session where, to everyone’s surprise, two Government ministers turned up and were extremely conciliatory. Baroness Vadera (BERR) said that the report “was somewhat more interesting than our response” and Vernon Coaker (Home Office) apologised to the Committee “if they felt that our response was overdefensive”, adding “the report that was produced by this Committee a few months ago now has actually helped drive the agenda forward and certainly the resubmission of evidence and the re-thinking that that has caused has also helped with respect to that. So may I apologise to all of you; it is no disrespect to the Committee or to any of the members.”

I got the impression that the ministers were more impressed with the Committee’s report than were the civil servants who had drafted the Government’s previous formal response. Just maybe, some of my comments made a difference?

Given this volte-face, the Committee’s follow-up report is also conciliatory, whilst recognising that the new approach is very much in the “jam tomorrow” category — we will all have to wait to see if they deliver.

The report is still in favour of software vendor liability as a long term strategy to improving software security, and on a security breach notification law the report says “we hold to our view that data security breach notification legislation would have the twin impacts of increasing incentives on businesses to avoid data loss, and should a breach occur, giving individuals timely information so that they can reduce the risk to themselves”. The headlines have been about the data lost by the Government, but recent figures from the ICO show that private industry is doing pretty badly as well.

The report also revisits the recommendations relating to banking, reiterating the committee’s view that “the liability of banks for losses incurred by electronic fraud should be underpinned by legislation rather than by the Banking Code”. The reasoning is simple: the banks choose the security mechanisms and how much effort they put into detecting patterns of fraud, so they should stand the losses if these systems fail. Holding individuals liable for succumbing to ever more sophisticated attacks is neither fair, nor economically efficient. The Committee also remained concerned that where fraud does take place, reports are made to the banks, who then choose whether or not to forward them to the police. They describe this approach as “wholly unsatisfactory and that it risks undermining public trust in the police and the Internet”.

This is quite a short report, a mere 36 paragraphs, but it comes bundled with the responses received, all of which, from Ross Anderson and Nicholas Bohm through to the Metropolitan Police and Symantec, are well worth reading to understand more about a complex problem, yet one where we’re beginning to see the first glimmers of consensus as to how best to move forward.