Metrics for security and performance in low-latency anonymity systems

In Tor, and in other similar anonymity systems, clients choose a random sequence of computers (nodes) to route their connections through. The intention is that, unless someone is watching the whole network at the same time, the tracks of each user’s communication will become hidden amongst those of other users. Exactly how a client chooses nodes varies from system to system, and is important for security.

If someone is simultaneously watching a user’s traffic as it enters and leaves the network, it is possible to de-anonymise the communication. As anyone can contribute nodes, this could occur if the first and last nodes for a connection are controlled by the same person. Tor takes some steps to avoid this possibility, e.g. no two computers on the same /16 network may be chosen for the same connection. However, someone with access to several networks could circumvent this measure.

Not only is route selection critical for security, but it is also a significant performance factor. Tor nodes vary dramatically in their capacity, mainly due to their network connections. If all nodes were chosen with equal likelihood, the slower ones would cripple the network. This is why Tor weights a node’s selection probability in proportion to its contribution to the network’s bandwidth.
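As a rough illustration of both the /16 rule and the bandwidth weighting, here is a minimal sketch of my own (not Tor’s actual path-selection code); the relay names and bandwidth figures are invented:

```python
import random

# Hypothetical relay list: (name, /16 prefix, advertised bandwidth in kB/s).
# These values are made up; real Tor works from its directory consensus.
NODES = [
    ("fast1",  "192.0",  20000),
    ("fast2",  "192.0",  18000),
    ("medium", "198.51",  5000),
    ("slow",   "203.0",    500),
]

def pick_weighted(nodes):
    """Pick one node with probability proportional to its bandwidth."""
    total = sum(bw for _, _, bw in nodes)
    r = random.uniform(0, total)
    for node in nodes:
        r -= node[2]
        if r <= 0:
            return node
    return nodes[-1]

def pick_path(length=3):
    """Pick a path of distinct nodes, no two sharing a /16 prefix."""
    path = []
    while len(path) < length:
        candidates = [n for n in NODES
                      if n not in path
                      and all(n[1] != p[1] for p in path)]
        path.append(pick_weighted(candidates))
    return path

print([name for name, _, _ in pick_path()])
```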

Because of the dual importance of route selection, there are a number of proposals which offer an alternative to Tor’s bandwidth-weighted algorithm. Later this week at PETS I’ll be presenting my paper, co-authored with Robert N.M. Watson, “Metrics for security and performance in low-latency anonymity systems”. In this paper, we examine several route selection algorithms and evaluate their security and performance.

Intuitively, a route selection algorithm which weights all nodes equally appears the most secure, because an attacker can’t make their node count any more than the others. This intuition has been formalized in two measures: the Gini coefficient and entropy. In fact, the reality is more complex — uniform node selection resists attackers with lots of bandwidth, whereas bandwidth-weighting is better against attackers with lots of nodes (e.g. botnets).
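For concreteness, here is one way those two measures could be computed for a node-selection distribution; this is my own sketch, not the paper’s code, and the example probabilities are invented:

```python
import math

def gini(probs):
    """Gini coefficient of a selection distribution: 0 means perfectly uniform,
    values near 1 mean selection is concentrated on very few nodes."""
    p = sorted(probs)
    n = len(p)
    cum = sum((i + 1) * x for i, x in enumerate(p))
    return (2.0 * cum) / (n * sum(p)) - (n + 1.0) / n

def entropy(probs):
    """Shannon entropy in bits: maximised (log2 n) by uniform selection."""
    return -sum(x * math.log2(x) for x in probs if x > 0)

uniform  = [0.25, 0.25, 0.25, 0.25]   # uniform node selection
weighted = [0.60, 0.25, 0.10, 0.05]   # invented bandwidth-weighted example

for name, dist in [("uniform", uniform), ("weighted", weighted)]:
    print(name, "gini=%.2f" % gini(dist), "entropy=%.2f bits" % entropy(dist))
```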

Our paper explores the probability of path compromise of different route selection algorithms, when under attack by a range of different adversaries. We find that none of the proposals are optimal against all adversaries, and so summarizing effective security in terms of a single figure is not feasible. We also model the performance of the schemes and show that bandwidth-weighting offers both low latency and high resistance to attack by bandwidth-constrained adversaries.

Update (2008-07-25):
The slides (PDF 2.1M) for my presentation are now online.

Personal Internet Security: follow-up report

The House of Lords Science and Technology Committee have just completed a follow-up inquiry into “Personal Internet Security”, and their report is published here. Once again I have acted as their specialist adviser, and once again I’m under no obligation to endorse the Committee’s conclusions — but they have once again produced a useful report with sound conclusions, so I’m very happy to promote it!

Their initial report last summer, which I blogged about at the time, was — almost entirely — rejected by the Government last autumn (blog article here).

The Committee decided that in the light of the Government’s antipathy they would hold a rapid follow-up inquiry to establish whether their conclusions were sound or whether the Government was right to turn them down, and indeed, given the speed of change on the Internet, whether their recommendations were still timely.

The written responses broadly endorsed the Committee’s recommendations, with the main areas of controversy being liability for software vendors, making the banks statutorily responsible for phishing/skimming fraud, and how such fraud should be reported.

There was one oral session where, to everyone’s surprise, two Government ministers turned up and were extremely conciliatory. Baroness Vadera (BERR) said that the report “was somewhat more interesting than our response” and Vernon Coaker (Home Office) apologised to the Committee “if they felt that our response was overdefensive” adding “the report that was produced by this Committee a few months ago now has actually helped drive the agenda forward and certainly the resubmission of evidence and the re-thinking that that has caused has also helped with respect to that. So may I apologise to all of you; it is no disrespect to the Committee or to any of the members.”

I got the impression that the ministers were more impressed with the Committee’s report than were the civil servants who had drafted the Government’s previous formal response. Just maybe, some of my comments made a difference?

Given this volte face, the Committee’s follow-up report is also conciliatory, whilst recognising that the new approach is very much in the “jam tomorrow” category — we will all have to wait to see if they deliver.

The report is still in favour of software vendor liability as a long-term strategy for improving software security, and on a security breach notification law the report says “we hold to our view that data security breach notification legislation would have the twin impacts of increasing incentives on businesses to avoid data loss, and should a breach occur, giving individuals timely information so that they can reduce the risk to themselves”. The headlines have been about the data lost by the Government, but recent figures from the ICO show that private industry is doing pretty badly as well.

The report also revisits the recommendations relating to banking, reiterating the committee’s view that “the liability of banks for losses incurred by electronic fraud should be underpinned by legislation rather than by the Banking Code”. The reasoning is simple: the banks choose the security mechanisms and how much effort they put into detecting patterns of fraud, so they should stand the losses if these systems fail. Holding individuals liable for succumbing to ever more sophisticated attacks is neither fair, nor economically efficient. The Committee also remained concerned that where fraud does take place, reports are made to the banks, who then choose whether or not to forward them to the police. They describe this approach as “wholly unsatisfactory” and warn that it “risks undermining public trust in the police and the Internet”.

This is quite a short report, a mere 36 paragraphs, but comes bundled with the responses received, all of which, from Ross Anderson and Nicholas Bohm through to the Metropolitan Police and Symantec, are well worth reading to understand more about a complex problem, yet one where we’re beginning to see the first glimmers of consensus as to how best to move forward.

Security psychology

I’m currently at the first Workshop on Security and Human Behaviour, at MIT, which brings together security engineers, psychologists and others interested in topics ranging from deception through usability to fearmongering. Here’s the agenda and here are the workshop papers.

The first session, on deception, was fascinating. It emphasised the huge range of problems, from detecting deception in interpersonal contexts such as interrogation, through the effects of context and misdirection, to how we might provide better trust signals to computer users.

Over the past seven years, security economics has gone from nothing to a thriving research field with over 100 active researchers. Over the next seven I believe that security psychology should do at least as well. I hope I’ll find enough odd minutes to live blog this first workshop as it happens!

[Edited to add:] See comments for live blog posts on the sessions; Bruce Schneier is also blogging this event.

An improved clock-skew measurement technique for revealing hidden services

In 2006 I published a paper on remotely estimating a computer’s temperature, based on clock skew. I showed that by inducing load on a Tor hidden service, an attacker could cause measurable changes in clock skew and so allow the computer hosting the service to be re-identified. However, it takes a very long time (hours to days) to obtain a sufficiently accurate clock-skew estimate, even when taking a sample every few seconds. If the available clock source is coarser-grained than the 1 kHz TCP timestamp clock I used, it takes longer still.

This limits the attack, since in many cases TCP timestamps may be unavailable. In particular, connections to Tor hidden services are carried above the TCP layer, so all TCP and IP headers from the hidden server are stripped. If an attacker wants to estimate clock skew over the hidden service channel, the only directly available clock source may be the 1 Hz HTTP timestamp. The quantization noise in this case is three orders of magnitude above the TCP timestamp case, making the approach I used in the paper effectively infeasible.

While visiting Cambridge in summer 2007, Sebastian Zander developed an improved clock-skew measurement technique which dramatically reduces the noise of clock-skew measurements from low-frequency clocks. The basic idea, shown below, is to request timestamps only very close to a clock transition, where the quantization noise is lowest. This requires the attacker first to lock on to the phase of the clock, then keep tracking it even when measurements are distorted by network jitter.

[Figure: Synchronized vs random sampling]
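To give a feel for why sampling near a transition helps, here is a toy simulation of my own (not Sebastian’s measurement code), with an assumed network jitter figure; it compares the quantization error seen when probing a 1 Hz clock at a random phase against probing just after a predicted tick:

```python
import random

JITTER = 0.005   # assumed network jitter (seconds); purely illustrative

def quantization_error(phase):
    """Error of a 1 Hz timestamp when the probe arrives at this clock phase."""
    return phase % 1.0

# Random sampling: the probe arrives at an arbitrary point within the second.
random_err = [quantization_error(random.random()) for _ in range(10000)]

# Synchronized sampling: the probe is aimed just after a predicted clock
# transition, displaced only by the (assumed) network jitter.
locked_err = [quantization_error(0.01 + abs(random.gauss(0, JITTER)))
              for _ in range(10000)]

print("mean error, random sampling:       %.4f s" % (sum(random_err) / len(random_err)))
print("mean error, synchronized sampling: %.4f s" % (sum(locked_err) / len(locked_err)))
```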

Sebastian and I wrote a paper — An Improved Clock-skew Measurement Technique for Revealing Hidden Services — describing this technique, and showing results from testing it on a Tor hidden service installed on PlanetLab. The measurements show a large improvement over the original paper, with two orders of magnitude lower noise for low-frequency clocks (like the HTTP case). This approach will allow previous attacks to be executed faster, and make previously infeasible attacks possible.

The paper will be presented at the USENIX Security Symposium, San Jose, CA, US, 28 July – 1 August 2008.

Operational security failure

A shocking article appeared yesterday on the BMJ website. It recounts how auditors called 45 GP surgeries asking for personal information about 51 patients. In only one case were they asked to verify their identity; the attack succeeded against the other 50 patients.

This is an old problem. In 1996, when I was advising the BMA on clinical system safety and privacy, we trained the staff at one health authority to detect false-pretext phone calls, and they found 30 a week. We reported this to the Department of Health, hoping they’d introduce some operational security measures nationwide; instead the Department got furious at us for treading on their turf and ordered the HA to stop cooperating (the story’s told in my book). More recently I confronted the NHS chief executive, David Nicholson, and patient tsar Harry Cayton, with the issue at a conference early last year; they claimed there wasn’t a problem nowadays, now that people have all these computers.

What will it take to get the Department of Health to care about patient privacy? Lack of confidentiality already costs lives, albeit indirectly. Will it require a really high-profile fatality?

Slow removal of child sexual abuse image websites

On Friday last week The Guardian ran a story on an upcoming research paper by Tyler Moore and myself which will be presented at the WEIS conference later this month. We had determined that child sexual abuse image websites were removed from the Internet far more slowly than any other category of content we looked at, excepting illegal pharmacies hosted on fast-flux networks; and we’re unsure whether anyone is seriously trying to remove them at all!

"Covert channel vulnerabilities in anonymity systems" wins best thesis award

My PhD thesis “Covert channel vulnerabilities in anonymity systems” has been awarded this year’s best thesis prize by the ERCIM security and trust management working group. The announcement can be found on the working group homepage and I’ve been invited to give a talk at their upcoming workshop, STM 08, Trondheim, Norway, 16–17 June 2008.

Update 2008-07-07: ERCIM have also published a press release.

J-PAKE: From Dining Cryptographers to Jugglers

Password Authenticated Key Exchange (PAKE) is one of the central topics in cryptography. It aims to address a practical security problem: how to establish secure communication between two parties solely based on their shared password without requiring a Public Key Infrastructure (PKI).

The solution to the above problem is very useful in practice — in fact, so useful that it has spawned a lot of “fights” over patents. Many techniques were patented, including the well-known Encrypted Key Exchange (EKE) and Simple Password Exponential Key Exchange (SPEKE). A secondary problem is technical: both the EKE and SPEKE protocols have subtle but worrying technical limitations (see the paper for details).

At the 16th Workshop on Security Protocols held in April 2008, Cambridge, UK, I presented a new solution (joint work with Peter Ryan) called Password Authenticated Key Exchange by Juggling (or J-PAKE). The essence of the protocol design derives from earlier work on solving the Dining Cryptographers problem; we adapted the same juggling technique to the two-party case to solve the PAKE problem. To the best of our knowledge, this design is significantly different from all past PAKE solutions.

Intuitively, the J-PAKE protocol works like a juggling game between two people — if we regard a public key as a “ball”. In round 1, each person throws two ephemeral public keys (“balls”) to the other. In round 2, each person combines the available public keys and the password to form a new public key, and throws the new “ball” to the other.

After round 2, the two parties can securely compute a common session key, if they supplied the same password. Otherwise, the protocol leaks nothing more than: “the passwords supplied at the two sides are not the same”. In other words, one can prove knowledge of the password without revealing it. A Java implementation of the protocol on a MacBook Pro laptop shows that the total computation time at each side is merely 75 ms.
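For readers who prefer code to juggling metaphors, here is a toy sketch of my own showing the two rounds over a deliberately tiny, completely insecure group. The group parameters and the password-to-exponent mapping are made up for illustration, the mandatory zero-knowledge proofs of each exponent are omitted, and this is not the Java implementation mentioned above:

```python
import hashlib
import random

# Toy group parameters: p = 2q + 1 with q prime; g generates the order-q
# subgroup. These values are absurdly small and only for illustration; a real
# deployment would use, e.g., a 2048-bit p with a 256-bit q.
p, q, g = 2039, 1019, 4

def to_secret(password):
    """Map a password to a non-zero exponent mod q (illustrative only)."""
    return int.from_bytes(hashlib.sha256(password.encode()).digest(), "big") % (q - 1) + 1

def inv_pow(base, e):
    """base^(-e) mod p, using exponent arithmetic mod q (base has order q)."""
    return pow(base, (-e) % q, p)

def jpake(password_a, password_b):
    s_a, s_b = to_secret(password_a), to_secret(password_b)

    # Round 1: each side picks two ephemeral secrets and sends g^x.
    # (Each value must be accompanied by a zero-knowledge proof of the
    # exponent; that is omitted here, but essential for security.)
    x1, x2 = random.randrange(q), random.randrange(1, q)
    x3, x4 = random.randrange(q), random.randrange(1, q)
    g1, g2, g3, g4 = (pow(g, x, p) for x in (x1, x2, x3, x4))

    # Round 2: each side "juggles" the available public keys with its password.
    A = pow(g1 * g3 * g4 % p, x2 * s_a, p)   # Alice sends A to Bob
    B = pow(g1 * g2 * g3 % p, x4 * s_b, p)   # Bob sends B to Alice

    # Key material: equal (with overwhelming probability) iff the passwords match.
    k_alice = pow(B * inv_pow(g4, x2 * s_a) % p, x2, p)
    k_bob   = pow(A * inv_pow(g2, x4 * s_b) % p, x4, p)
    return k_alice, k_bob

print(jpake("correct horse", "correct horse"))   # matching passwords: equal keys
print(jpake("correct horse", "battery staple"))  # mismatched passwords: different keys
```

In a real implementation the zero-knowledge proofs, a proper key derivation function and an explicit key confirmation step are all needed; see the paper for the full protocol.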

We hope this protocol is of use to security engineers. For example, compared with SSL/TLS, J-PAKE is potentially much more resistant to phishing attacks, not to mention that it is PKI-free. Since this protocol is the result of an academic research project, we didn’t — and have no intention to — patent it. As explained in the paper, J-PAKE even has technical advantages over the patented EKE and SPEKE in terms of security, with comparable efficiency. It has been submitted for consideration as a possible future extension of IEEE P1363.2.

We believe the PAKE research is important and has strong practical relevance. This post is to facilitate discussions on this subject. The paper can be viewed here. Any comments or questions are welcome.

Update

  • 2008-06-28: a crude J-PAKE demo source code (.java) by me. (link broken)
  • 2008-11-04: a more refined J-PAKE in C and OpenSSL by Ben Laurie.
  • 2008-11-11: possible applications of J-PAKE in VPN and browser by James.
  • 2009-02-08: public group parameters for 112-bit and 128-bit security can be found in the comments.
  • 2009-03-15: fixed the broken link to the old Java file. Here is the link to the Java demo code.
  • 2010-04-17: a journal version of the paper available on IACR. No technical change to the protocol.
  • 2010-10-25: the journal version of the paper has been accepted for publication in the TCS journal – Springer Transactions on Computational Science, special issue on “Security in Computing”, 2011.
  • 2010-11-25: Sebastien Martini reported an implementation issue of J-PAKE in OpenSSL and OpenSSH. The issue is not applicable to the Java demo code that I wrote. As stated in the last paragraph of p. 11 in the paper, one should check that the received element lies within the specified group (a sketch of this check follows this list). Stefan Arentz implemented a fix in OpenSSL. Official OpenSSL and OpenSSH patches can be found here and here.
  • 2011-01-11: Mozilla built J-PAKE into the base product of Firefox 4 (beta 8 and later). More details here.
  • 2012-01-18: Today, Mohsen Toorani uploaded a paper on IACR eprint claiming several attacks on J-PAKE. My response can be found here.
  • 2012-07-21: Phil Clay contributed a Java implementation of J-PAKE to bouncycastle.
  • 2013-02-24: J-PAKE included into bouncycastle 1.48.
  • 2013-03-28: a code example to show how to use J-PAKE in bouncycastle
  • 2013-05-21: Submitted two Internet Drafts to IETF (one on J-PAKE and the other one on Schnorr NIZK Proof)
  • 2013-12-30: a code example to show how to implement J-PAKE using Elliptic Curve (or ECDSA-like group setting)
  • 2014-04-17: J-PAKE included into VMware NSS Cryptographic Module
  • 2014-10-27: J-PAKE adopted by the ISO/IEC standard (11770-4) following the ISO/IEC SC27 meeting held in Mexico City, October 20-24, 2014
  • 2014-12-26: My response to Mohsen Toorani’s IEEE ISCC’14 paper “Security Analysis of J-PAKE”.
  • 2015-04-29: J-PAKE included into Python
  • 2015-05-08: Standardization of J-PAKE in ISO/IEC 11770-4 in process. The first working draft (WD1) passed reviews by ISO/IEC SC27 at Kuching Malaysia, May 5-8, 2015.
  • 2015-05-19: Here is an independent formal analysis of J-PAKE by other researchers published at Oakland’2015. Their results are consistent with the original J-PAKE paper.
  • 2015-05-30: J-PAKE included in BarcoSilex BA414E Public Key crypto engine
  • 2015-05-31: Firefox is upgrading Sync 1.1 (which uses J-PAKE to transfer a full-entropy AES key between sync devices) to the new Sync 1.5 (which uses user-defined passwords as encryption keys). But Pale Moon has decided to stay with Sync 1.1.
  • 2015-07-28: the Thread Technical white paper is public. It describes a technique that securely enrols a new device to the network based on J-PAKE. The technique is used in Google Nest thermostat products.
  • 2017-10-06: J-PAKE published in ISO/IEC 11770-4 as an international standard and also published in RFC 8236. See here.
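Regarding the 2010-11-25 note above, here is a minimal sketch of my own (with made-up function and parameter names) of the kind of validation an implementation should perform on each received element before using it:

```python
def check_received(x, p, q, must_not_be_identity=False):
    """Validate a public value received during J-PAKE before using it:
    it must lie in the prime-order-q subgroup of Z_p*, and the values
    corresponding to x2 and x4 must additionally not be the identity."""
    if not (0 < x < p and pow(x, q, p) == 1):
        return False
    return not (must_not_be_identity and x == 1)
```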

PED vulnerability paper receives "Most Practical Paper" award at Oakland

In February, Steven Murdoch, Ross Anderson and I reported our findings on system-level failures of widely deployed PIN Entry Devices (PED) and the Chip and PIN scheme as a whole. Steven is in Oakland presenting the work described in our paper at the IEEE Symposium on Security and Privacy (slides).

We are very pleased that we are the recipients of the new “Most Practical Paper” award of the conference, given to “the paper most likely to immediately improve the security of current environments and systems”. Thanks to everyone who supported this work!

[Image: IEEE Security & Privacy Magazine Award]

Twisty little passages, all alike

Last month, on the 4th April, I published a document describing how the Phorm system worked and blogged about what I thought of the scheme. The document had been run past Phorm’s technical people to ensure it was correct, but — it turns out — there were still a handful of errors in it. A number of helpful people pointed out that I’d misdescribed third-party cookies (which didn’t matter much because Phorm specifically uses first-party cookies), and I’d managed to reference RFC2695 rather than RFC2965!

In my original document, I’d waved my hands a little bit about how the system worked if people had blocked cookies for specific domains, and so I swapped some more email with Phorm to better understand, and then published a revised version on 23rd April — so that the correct information would be available to accompany FIPR’s press release and paper on the various laws that the Phorm system breaks. However, there was one final thing that wasn’t dealt with by press time, and that’s now been explained to me…

The Phorm system does some of its tracking magic by redirecting browser requests using HTTP 307 responses. When this was first explained to me at the meeting with Phorm there were two redirections (a scan of my notes is here), but having thought about this for a while, I asked for it to be explained to me again later on, and it turned out that I had previously been misled, and that there were in fact three redirections (here’s my notes of this part of the meeting).

It now turns out, following my further emails with Phorm, that there are in fact FOUR redirections occurring! This is not because my notes are rubbish — but because Phorm have managed to recall more of the detail of their own system!

For full details of how I understand the system works (at least until some more detail comes to light), see the latest version of my explanatory document, but to give you a flavour of it, consider an example visit to www.cnn.com:

  • The user wants to visit www.cnn.com, but their request does not contain a cookie (for www.cnn.com) with a Phorm unique identifier within it. They are redirected (ONE) by the Phorm system to www.webwise.net.
  • The user visits webwise.net by following the redirection. If they do not have a Phorm identifier cookie, then they will be issued with a new identifier and redirected (TWO) elsewhere on webwise.net.
  • The user visits webwise.net for the second time. If they still don’t have a Phorm identifier cookie then their IP address is marked as wishing to opt-out and they will be redirected to www.cnn.com and they won’t be redirected again for at least 30 minutes. If they do have a cookie (or if they had one at the previous stage) they are redirected (THREE) to a special URL within www.cnn.com.
  • The user visits the special URL, which the Phorm system redirects to a fake version of www.cnn.com that sets a www.cnn.com cookie with their Phorm identifier in it, and redirects (FOUR) them to the URL they wanted to visit all along.
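To make the cookie logic explicit, here is a small sketch of my own (not Phorm’s code; the URLs and opt-out handling are simplified) that traces which 307 redirections a visit goes through under the description above:

```python
def phorm_redirect_chain(has_site_cookie, has_webwise_cookie, accepts_cookies):
    """Trace the 307 redirections for one visit to www.cnn.com, following
    the four-step description above; returns the redirect targets in order."""
    hops = []
    if has_site_cookie:
        return hops                              # Phorm UID already in the www.cnn.com cookie
    hops.append("www.webwise.net")               # ONE: diverted into the Phorm system
    if not has_webwise_cookie:
        hops.append("www.webwise.net (again)")   # TWO: issued a new identifier, sent onwards
        has_webwise_cookie = accepts_cookies     # the cookie only sticks if the browser accepts it
    if not has_webwise_cookie:
        hops.append("www.cnn.com")               # marked as opt-out by IP address, ~30 minutes grace
        return hops
    hops.append("www.cnn.com special URL")       # THREE: special URL within the real domain
    hops.append("www.cnn.com original URL")      # FOUR: fake cnn.com sets the cookie, sends the user on
    return hops

print(phorm_redirect_chain(False, False, True))   # first visit, cookies accepted: four redirections
print(phorm_redirect_chain(False, False, False))  # cookies refused: treated as an opt-out
```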

For the moment, this appears to be the grand total; there can be up to four redirections, and it is deducible from this description what happens if you refuse (or delete) cookies in the webwise.net and www.cnn.com domains. It is also apparent that if you resolve webwise.net to 127.0.0.1 you’ll never get past the first redirection, and you will need to rely on the Phorm system spotting these repeated failures and turning off redirection for your IP address.

direct adjective: Straightforward in manner or conduct; upright, honest.

indirect adjective: Mechanism by which Phorm fools your system into accepting tracking cookies from third-party websites, even when those websites promise never to track you!