Monthly Archives: May 2007

Should there be a Best Practice for censorship?

A couple of weeks ago, right at the end of the Oxford Internet Institute conference on The Future of Free Expression on the Internet, the question was raised from the platform as to whether it might be possible to construct a Best Current Practice (BCP) framework for censorship.

If — the argument ran — IF countries were transparent about what they censored, IF there was no overblocking (the literature’s jargon for collateral damage), IF it was done under a formal (local) legal framework, IF there was the right of appeal to correct inadvertent errors, IF … and doubtless a whole raft more of “IFs” that a proper effort to develop a BCP would establish. IF… then perhaps censorship would be OK.

I spoke against the notion of a BCP from the audience at the time, and after some reflection I see no reason to change my mind.

There will be many more subtle arguments, just as there will be more IFs to consider, but I can immediately see two insurmountable objections.

The first is that a BCP will inevitably lead to far more censorship, but now with the apparent endorsement of a prestigious organisation: “The OpenNet Initiative says that blocking the political opposition’s websites is just fine!” Doubtless some of the IFs in the BCP will address open political processes, and universal human rights … but it will surely come down to quibbling about language: terrorist/freedom-fighter; assassination/murder; dissent/rebellion; opposition/traitor.

The second, and I think the most telling, objection is that it will reinforce the impression that censoring the Internet can actually be achieved, whereas the evidence piles up that it just isn’t possible. All of the schemes for blocking content can be evaded by those with technical knowledge (or access to the tools written by others with that knowledge). Proxies, VPNs, Tor, packet fragmentation, ignoring injected TCP resets… the list of evasion technologies is endless.
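To make the point concrete, here is a minimal sketch of the simplest of these evasion routes: fetching a page through a local Tor client’s SOCKS proxy. It assumes a Tor client is running on its default port 9050 and that the Python requests library is installed with SOCKS support; the URL is just a placeholder.

```python
# Minimal sketch: fetching a page via a local Tor SOCKS proxy.
# Assumes a Tor client is listening on its default port 9050 and that
# 'requests' is installed with SOCKS support (pip install requests[socks]).
import requests

PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: resolve DNS via the proxy too
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_via_tor(url):
    """Fetch a URL with all traffic (including DNS lookups) routed through Tor."""
    response = requests.get(url, proxies=PROXIES, timeout=30)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    print(fetch_via_tor("https://example.com/")[:200])  # placeholder URL
```

Every other technique in the list above admits a similarly short recipe, which is rather the point.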

One of the best ways of spreading data to multiple sites is to attempt to remove it, and every few years some organisation demonstrates this again. Ad hoc replication doesn’t necessarily scale, but there are plenty of schemes in the literature for doing it on an industrial scale.

It’s clichéd to trot out John Gilmore’s observation that “the Net interprets censorship as damage and routes around it”, but over-familiarity with the phrase should not hide its underlying truth.

So, in my view, a BCP will merely be used by the wicked as a fig-leaf for their activity, and by the ignorant to prop up their belief that it’s actually possible to block the content they don’t believe should be visible. A BCP is a thoroughly bad idea, and should not be further considered.

Sampled Traffic Analysis by Internet-Exchange-Level Adversaries

Users of the Tor anonymous communication system are at risk of being tracked by an adversary who can monitor both the traffic entering and leaving the network. This weakness is well known to the designers, and currently there is no known practical way to resist such attacks while maintaining the low latency demanded by applications such as web browsing. For this reason, it seems intuitively clear that when selecting a path through the Tor network, it would be beneficial to select nodes in different countries. Hopefully government-level adversaries will find it problematic to track cross-border connections, as mutual legal assistance is slow, if it works at all. Non-government adversaries might also find that their influence drops off at national boundaries.
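As an illustration of the intuition (and not of Tor’s actual path-selection algorithm), here is a sketch that picks a three-hop path whose relays all sit in different countries; the relay list and country codes are invented for the example.

```python
# Illustrative sketch only -- not Tor's real path-selection algorithm.
# Pick a three-hop path whose relays are all in different countries.
import random

relays = [  # invented relay list with ISO country codes
    {"name": "relay1", "country": "DE"},
    {"name": "relay2", "country": "US"},
    {"name": "relay3", "country": "NL"},
    {"name": "relay4", "country": "US"},
    {"name": "relay5", "country": "SE"},
]

def pick_country_diverse_path(relays, hops=3, max_tries=1000):
    """Randomly sample candidate paths until one spans `hops` distinct countries."""
    for _ in range(max_tries):
        path = random.sample(relays, hops)
        countries = {r["country"] for r in path}
        if len(countries) == hops:   # every hop in a different country
            return path
    raise RuntimeError("no country-diverse path found")

print([r["name"] for r in pick_country_diverse_path(relays)])
```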

Implementing secure IP-based geolocation is hard, but even if it were possible, the technique might not help and could perhaps even harm security. The PET Award-nominated paper “Location Diversity in Anonymity Networks”, by Nick Feamster and Roger Dingledine, showed that international Internet connections cross a comparatively small number of tier-1 ISPs. Thus, by forcing one or more of these companies to co-operate, a large proportion of connections through an anonymity network could be traced.
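The style of analysis is easy to sketch. Assuming we had AS-level paths for both sides of each connection (the AS numbers below are invented for illustration), counting how often a single AS appears on both the client-to-entry path and the exit-to-destination path gives the fraction of connections that one network operator could observe end-to-end:

```python
# Back-of-envelope sketch of this style of analysis, with invented AS paths.
entry_paths = {  # client -> entry node, as lists of AS numbers
    "conn1": [3320, 1239, 7018],
    "conn2": [6453, 3356],
    "conn3": [2914, 1299],
}
exit_paths = {   # exit node -> destination
    "conn1": [7018, 701],
    "conn2": [1299, 3257],
    "conn3": [1299, 174],
}

observable = 0
for conn in entry_paths:
    # An AS on both sides of a connection can see it enter and leave the network.
    both_sides = set(entry_paths[conn]) & set(exit_paths[conn])
    if both_sides:
        print(f"{conn}: observable by AS(es) {both_sides}")
        observable += 1

print(f"{observable}/{len(entry_paths)} connections exposed to a single AS")
```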

The results of Feamster and Dingledine’s paper suggest that it may be better to bounce anonymity traffic around within a country, because it is less likely that there will be a single ISP monitoring incoming and outgoing traffic to several nodes. However, this only appears to be the case because they used BGP data to build a map of Autonomous Systems (ASes), which roughly correspond to ISPs. Actually, inter-ISP traffic (especially in Europe) might travel through an Internet eXchange (IX), a fact not apparent from BGP data. Our paper, “Sampled Traffic Analysis by Internet-Exchange-Level Adversaries”, by Steven J. Murdoch and Piotr Zieliński, examines the consequences of this observation.
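The setting is easy to simulate in toy form. IXes typically monitor traffic by sampling, recording only a small fraction of packets; the sketch below (with an invented sampling rate and invented flow sizes, not figures from the paper) shows that even sparse samples preserve enough of a flow’s volume to match incoming flows to outgoing ones:

```python
# Toy simulation of the sampled-traffic setting: the exchange samples one
# packet in N, and the attacker matches each incoming flow to the outgoing
# flow with the most similar sampled volume. All numbers are invented.
import random

SAMPLE_RATE = 1 / 100          # monitor records ~1 packet in 100

def sampled_count(true_packets):
    """Number of a flow's packets that survive uniform random sampling."""
    return sum(random.random() < SAMPLE_RATE for _ in range(true_packets))

# True packet counts of flows crossing the exchange; inbound flow i
# corresponds to outbound flow i.
true_sizes = [5_000, 20_000, 80_000, 300_000]
inbound = [sampled_count(n) for n in true_sizes]
outbound = [sampled_count(n) for n in true_sizes]

# Match each inbound flow to the outbound flow with the closest sample count.
for i, obs in enumerate(inbound):
    guess = min(range(len(outbound)), key=lambda j: abs(outbound[j] - obs))
    print(f"inbound flow {i} (sampled {obs} pkts) -> guessed outbound flow {guess}")
```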


Distance bounding against smartcard relay attacks

Steven Murdoch and I have previously discussed issues concerning the tamper resistance of payment terminals and the susceptibility of Chip & PIN to relay attacks. Basically, the tamper resistance protects the banks but not the customers, who are left to trust any of the devices they provide their card and PIN to (the hundreds of different types of terminal do not help here). The problem some customers face is that when fraud happens, they are the ones blamed for negligence instead of the banks owning up to a faulty system. Exacerbating the problem, customers cannot prove they have not been negligent with their secrets without the proper data, which the banks have but refuse to hand out.
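The distance-bounding idea in the title rests on simple physics: the terminal times a challenge-response exchange with the card, and since no signal travels faster than light, the round-trip time bounds how far away the genuine card can be. A small sketch of the arithmetic, with invented timing figures rather than values from our work:

```python
# Sketch of the arithmetic behind distance bounding. The processing-delay
# and round-trip figures are invented illustrations, not measured values.
C = 299_792_458            # speed of light, metres per second

def max_distance(rtt_seconds, processing_delay_seconds):
    """Upper bound on card distance implied by a timed challenge-response."""
    travel_time = rtt_seconds - processing_delay_seconds
    return C * travel_time / 2   # the signal travels there and back

# A relay attack adds extra forwarding delay. If card processing were
# instantaneous, a 1 microsecond round trip would still allow the card
# to be roughly 150 metres away:
print(f"{max_distance(1e-6, 0):.0f} m")   # -> about 150 m
```

The tighter the terminal can time the exchange, the shorter the relay it can rule out, which is why these protocols use rapid single-bit exchanges.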


Results of global Internet filtering survey

At their conference in Oxford, the OpenNet Initiative have released the results from their first global Internet filtering survey. This announcement has been widely covered in the media.

Out of the 41 countries surveyed, 25 were found to impose filtering, though the topics blocked and the extent of blocking vary dramatically.

Results can be seen on the filtering map and via a URL checker. The full report, including detailed country and region summaries, will be published in the book “Access Denied: The Practice and Policy of Global Internet Filtering”.

How quickly are phishing websites taken down?

Tyler Moore and I have a paper (An Empirical Analysis of the Current State of Phishing Attack and Defence) accepted at this year’s Workshop on the Economics of Information Security (WEIS 2007), in which we examine how long phishing websites remain available before the impersonated bank gets them taken down.
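The measurement itself reduces to simple bookkeeping. A sketch, with invented timestamps rather than our data: record when each phishing URL was first reported and when a periodic poll last found it alive, then summarise the lifetimes (the median matters, since a few long-lived sites can skew the mean):

```python
# Hedged sketch of the lifetime measurement, not the paper's actual pipeline.
# Timestamps below are invented for illustration.
from datetime import datetime
from statistics import mean, median

sightings = [  # (first reported, last seen responding)
    (datetime(2007, 3, 1, 9, 0),  datetime(2007, 3, 2, 14, 0)),
    (datetime(2007, 3, 1, 12, 0), datetime(2007, 3, 1, 15, 30)),
    (datetime(2007, 3, 2, 8, 0),  datetime(2007, 3, 7, 8, 0)),
]

lifetimes_hours = [
    (last - first).total_seconds() / 3600 for first, last in sightings
]
print(f"mean lifetime:   {mean(lifetimes_hours):.1f} hours")
print(f"median lifetime: {median(lifetimes_hours):.1f} hours")
```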


Follow the money, stupid

The Federal Reserve commissioned me to research and write a paper on fraud, risk and nonbank payment systems. I found that phishing is facilitated by payment systems like eGold and Western Union, which make the recovery of stolen funds more difficult. Traditional payment systems like cheques and credit card payments are revocable: cheques can bounce and credit card charges can be charged back. However, some modern systems provide irrevocability without charging an appropriate risk premium, and this attracts the bad guys. (After I submitted the paper, and before it was presented on Friday, eGold was indicted.)
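A toy calculation, with invented figures, shows what an “appropriate risk premium” means here: if some fraction of irrevocable payments turns out to be fraudulent and nothing can be recovered, the per-transaction fee has to cover that expected loss.

```python
# Toy illustration of the missing risk premium; all numbers are invented.
fraud_rate = 0.005    # assume 0.5% of transactions are fraudulent
avg_value = 200.0     # assumed average transaction value, dollars

# Irrevocable system: nothing is clawed back, so the fee must absorb it all.
irrevocable_premium = fraud_rate * avg_value
print(f"break-even premium, irrevocable: ${irrevocable_premium:.2f} per transaction")

# A revocable system that recovers, say, 60% of stolen funds needs far less.
revocable_premium = fraud_rate * (1 - 0.60) * avg_value
print(f"break-even premium, 60% recovery: ${revocable_premium:.2f} per transaction")
```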

I also became convinced that the financial market controls used to fight fraud, money laundering and terrorist finance have become unbalanced as they have been beefed up post-9/11. The modern obsession with ‘identity’ – of asking even poor people living in huts in Africa for an ID document and two utility bills before they can open a bank account – is not only ridiculous but often discriminatory. It has also led banks and regulators to take their eye off the ball, and to replace risk reduction with due diligence.

In real life, following the money is just as important as following the man. It’s time for the system to be rebalanced.