Sampled Traffic Analysis by Internet-Exchange-Level Adversaries

Users of the Tor anonymous communication system are at risk of being tracked by an adversary who can monitor traffic both entering and leaving the network. This weakness is well known to the designers, and currently there is no practical way to resist such attacks while maintaining the low latency demanded by applications such as web browsing. It therefore seems intuitively clear that, when selecting a path through the Tor network, it would be beneficial to choose nodes in different countries. The hope is that government-level adversaries will find it difficult to track cross-border connections, since mutual legal assistance is slow, if it works at all. Non-government adversaries might also find that their influence drops off at national boundaries.

Implementing secure IP-based geolocation is hard, but even if it were possible, the technique might not help, and could perhaps even harm, security. The PET Award nominated paper “Location Diversity in Anonymity Networks”, by Nick Feamster and Roger Dingledine, showed that international Internet connections cross a comparatively small number of tier-1 ISPs. Thus, by forcing one or more of these companies to co-operate, an adversary could trace a large proportion of connections through an anonymity network.

Feamster and Dingledine’s results suggest that it may be better to bounce anonymity traffic around within a country, because a single ISP is then less likely to be monitoring the incoming and outgoing traffic of several nodes. However, this only appears to be the case because they used BGP data to build a map of Autonomous Systems (ASes), which roughly correspond to ISPs. In practice, inter-ISP traffic (especially in Europe) often travels through an Internet eXchange (IX), a fact not apparent from BGP data. Our paper, “Sampled Traffic Analysis by Internet-Exchange-Level Adversaries”, by Steven J. Murdoch and Piotr Zieliński, examines the consequences of this observation.
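
To make the AS-level observer model concrete, here is a minimal sketch of how one might check whether the entry and exit legs of an anonymity path share an AS, assuming a prefix-to-AS table derived from a BGP routing dump. All prefixes, AS numbers and function names here are hypothetical, and a real analysis would include every AS on the BGP path between each pair of hosts, not just the endpoints:

```python
import ipaddress

# Hypothetical prefix-to-AS table of the kind derivable from a BGP routing
# dump (e.g. a RouteViews RIB snapshot). These prefixes are documentation
# ranges and the AS numbers come from the private-use range.
PREFIX_TO_AS = {
    ipaddress.ip_network("192.0.2.0/24"): 64496,
    ipaddress.ip_network("198.51.100.0/24"): 64497,
    ipaddress.ip_network("203.0.113.0/24"): 64498,
}

def lookup_as(ip):
    """Longest-prefix match of an address against the table."""
    addr = ipaddress.ip_address(ip)
    matches = [net for net in PREFIX_TO_AS if addr in net]
    return PREFIX_TO_AS[max(matches, key=lambda n: n.prefixlen)] if matches else None

def shared_observers(client_ip, entry_ip, exit_ip, dest_ip):
    """ASes positioned to see both the client->entry and exit->dest legs.
    Each leg is approximated here by its endpoint ASes only."""
    entry_leg = {lookup_as(client_ip), lookup_as(entry_ip)}
    exit_leg = {lookup_as(exit_ip), lookup_as(dest_ip)}
    return (entry_leg & exit_leg) - {None}

# Both legs touch AS 64496, so that AS could correlate traffic end to end:
print(shared_observers("192.0.2.10", "198.51.100.5", "203.0.113.7", "192.0.2.99"))
```

The catch, and the point of the paper, is that even when this intersection is empty, both legs may still cross the same Internet exchange, which BGP data (and hence any such check) cannot reveal.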

Distance bounding against smartcard relay attacks

Steven Murdoch and I have previously discussed issues concerning the tamper resistance of payment terminals and the susceptibility of Chip & PIN to relay attacks. In short, the tamper resistance protects the banks but not the customers, who are left to trust every device they present their card and PIN to (the hundreds of different types of terminal do not help here). The problem some customers face is that when fraud happens, they are blamed for negligence instead of the banks owning up to a faulty system. Exacerbating the problem, customers cannot prove they were not negligent with their secrets without data that the banks hold but refuse to hand over.
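
The idea behind distance bounding is to defeat relay attacks with physics: the terminal times a rapid challenge-response exchange with the card, and since no signal travels faster than light, the round-trip time puts an upper bound on how far away the responding card can be. A relay adds latency, so a relayed card appears implausibly distant. Here is a minimal sketch of that arithmetic; the timings are purely illustrative and are not those of any real protocol:

```python
# Speed of light in m/s: the hard physical limit on how fast a reply can return.
C = 299_792_458

def max_distance_m(round_trip_s, processing_s):
    """Upper bound on the card's distance implied by a timed exchange:
    signal travel time is at most the round trip minus processing time,
    split between the outward and return paths."""
    return C * (round_trip_s - processing_s) / 2

# A genuine card about 10 cm away, allowing ~1 ns of processing time:
print(max_distance_m(1.7e-9, 1.0e-9))   # ~0.1 m: accept
# A relayed exchange adding ~2 microseconds of round-trip latency:
print(max_distance_m(2.0e-6, 1.0e-9))   # ~300 m: reject, the card is too far away
```

The engineering challenge is that at these timescales even a nanosecond of jitter in the card’s processing translates into about 15 cm of distance uncertainty, which is why practical protocols use dedicated rapid bit exchanges rather than ordinary smartcard messaging.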

Results of global Internet filtering survey

At their conference in Oxford, the OpenNet Initiative have released the results from their first global Internet filtering survey. This announcement has been widely covered in the media.

Out of the 41 countries surveyed, 25 were found to impose filtering, though the topics blocked and the extent of blocking vary dramatically.

Results can be seen on the filtering map and a URL checker. The full report, including detailed country and region summaries, will be published in the book “Access Denied: The Practice and Policy of Global Internet Filtering”.

How quickly are phishing websites taken down?

Tyler Moore and I have a paper (An Empirical Analysis of the Current State of Phishing Attack and Defence) accepted at this year’s Workshop on the Economics of Information Security (WEIS 2007), in which we examine how long phishing websites remain available before the impersonated bank gets them “taken down”.

Follow the money, stupid

The Federal Reserve commissioned me to research and write a paper on fraud, risk and nonbank payment systems. I found that phishing is facilitated by payment systems like eGold and Western Union, which make the recovery of stolen funds more difficult. Traditional payment systems like cheques and credit card payments are revocable: cheques can bounce and credit card charges can be charged back. However, some modern systems provide irrevocability without charging an appropriate risk premium, and this attracts the bad guys. (After I submitted the paper, and before it was presented on Friday, eGold was indicted.)

I also became convinced that the financial market controls used to fight fraud, money laundering and terrorist finance have become unbalanced as they have been beefed up post-9/11. The modern obsession with ‘identity’ – of asking even poor people living in huts in Africa for an ID document and two utility bills before they can open a bank account – is ridiculous and often discriminatory. Worse, it has led banks and regulators to take their eye off the ball, and to replace risk reduction with due diligence.

In real life, following the money is just as important as following the man. It’s time for the system to be rebalanced.

Extreme online risks

An article in the Guardian, and a more detailed story in PC Pro, give the background to Operation Ore. In this operation, hundreds (and possibly thousands) of innocent men were raided by the police on suspicion of downloading child pornography, when in fact they had simply been victims of credit card fraud. The police appear to have completely misunderstood the forensic evidence; once the light began to dawn, it seems that they closed ranks and covered up. These stories follow an earlier piece in PC Pro which first brought the problem to public attention in 2005.

Recently we were asked by the Lords Science and Technology Committee whether failures of online security caused real problems, or were exaggerated. While there is no doubt that many people talk up the threats, here is a real case in which online fraud has done much worse harm than simply emptying bank accounts. Having the police turn up at six in the morning, search your house, tell your wife that you’re a suspected paedophile, and interview your children with social workers in tow, must be a horrific experience. Over thirty men have killed themselves. At least one appears to have been innocent. As this story develops, I believe it will come to be seen as the worst policing scandal in the UK for many years.

I remarked recently that it was a bad idea for the police to depend on the banks for expertise on card fraud, and to accept their money to fund such investigations as the banks wanted carried out. Although Home Office and DTI ministers say they’re happy with these arrangements, the events of Operation Ore show that the police should not compromise their independence and technical capability for short-term political or financial convenience. The results can be tragic.

Debug mode = hacking tool?

We have recently been implementing an attack on ZigBee communication. The ZigBee chip we have been using works pretty much like any other: it listens on a selected channel and, when a packet is being transmitted, stores the data in an internal buffer. When the whole packet has been received, an interrupt is signalled and the microcontroller can read out the whole packet at once.

What we needed was more direct access to the MAC layer. Our first idea was to find another chip, as we could not do anything at the level of abstraction described. On second thoughts, we carefully read the datasheet and found that there is an “unbuffered mode” for receiving, as well as transmitting, data. One sentence reads “Un-buffered mode should be used for evaluation / debugging purposes only”, but why not give it a go?

It took a while (the datasheet does not really get the description right, there are basic factual mistakes, and the microcontroller was a bit slower to serve hardware interrupts than expected) but we managed to do what we wanted: get at interesting data before the whole packet has been transmitted.
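
To illustrate the difference this makes, here is a sketch (in Python for clarity; the real code was interrupt-driven C on the microcontroller) of buffered versus unbuffered reception. The packet contents and trigger pattern are made up; the point is that unbuffered mode lets you match on a header while the rest of the frame is still on the air:

```python
PACKET = bytes.fromhex("418822334455")   # made-up frame for illustration
TRIGGER = bytes.fromhex("4188")          # made-up header we want to spot early

def buffered_receive(packet):
    # One interrupt after the whole packet has arrived: too late to react mid-frame.
    print(f"buffered: saw {packet.hex()} only after full reception")

def unbuffered_receive(packet):
    seen = bytearray()
    for byte in packet:                  # one "interrupt" per received byte
        seen.append(byte)
        if seen.endswith(TRIGGER):
            print(f"unbuffered: matched header after {len(seen)} bytes, "
                  f"with {len(packet) - len(seen)} bytes still in flight")

buffered_receive(PACKET)
unbuffered_receive(PACKET)
```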

This was not the first occasion on which debug mode or debug information saved us from defeat when implementing an attack. It made me think a bit.

This sort of approach exactly represents the original meaning of hacking and hackers. It seems that this sort of activity is slowly returning to universities, as more and more people are implementing attacks to demonstrate their ideas. It is less popular (my impression) to implement complicated systems, such as role-based access control, because real life shows that there will be “buffer overflows” allowing all the cleverness to be bypassed. Not many people are interested in doing research into software vulnerabilities either. On the other hand, more attacks on hardware (stealthy, subtle ones) are being devised and implemented.

The second issue is much more general. Will there always be a way to get around the official (or intended) application interface? Certainly, there are products that restrict access to, or remove, debugging options when the product is prepared for production; smartcards are a typical example. But disabling debug features imposes very strong limitations. It becomes very hard, or even impossible, to check the correct functionality of the product (a hardware chip, a piece of software), which is hardly desirable when the product is to be used as a component in larger systems. And definitely not desirable for hackers …

There aren’t that many serious spammers any more

I’ve recently been analysing the incoming email traffic data for Demon Internet, a large(ish) UK ISP, for the first four weeks of March 2007. The raw totals show a very interesting picture:

[Graph: Email & Spam traffic at Demon Internet, March 2007]

The top four lines show the amount of incoming email that was detected as “spam” by the Cloudmark technology that Demon now uses. The values lie in a range of 5 to 13 million items per day, with the day of the week apparently irrelevant and huge swings from day to day: see how the 5 million items on Saturday 18th are followed by 13 million items on Monday 20th!

The bottom four lines show the amount of incoming email that was not detected as spam (excluding incoming items with a “null” sender; these will be bounces, almost certainly all “backscatter” from remote sites “bouncing” spam with forged senders). The values here are between about 2 and 4 million items a day, with a clear pattern repeating from week to week and lower values at the weekends.

There’s an interesting rise in non-spam email on Tuesday 27th, which corresponds to a new type of “pump and dump” spam (mainly in German) that clearly wasn’t immediately spotted as spam. By the next day, things were back to normal.

The figures and patterns are interesting in themselves, but they show how summarising spam as a single average value (it was in fact 73%) hides a much more complex picture.

The picture also hides a deeper truth. There’s no “law of large numbers” operating here. That is to say, the incoming spam is not composed of lots of individual spam gangs, each doing their own thing and thereby generating a fairly steady amount of spam from day to day. Instead, it is clear that very significant volumes of spam are being sent by a very small number of gangs, and the totals swing as they switch their destinations around: today it’s .uk, tomorrow it’s aol.com, and on Tuesday it will be .de (hmm, perhaps that’s why they hit .demon addresses? a missing $ from their regular expression!).
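
A toy simulation makes the “law of large numbers” point concrete (all numbers are invented): suppose each gang targets a given domain only on some days, and hold the long-run daily mean fixed while varying the number of gangs. With a thousand independent gangs the daily total barely moves; with five, it swings wildly, just as in the graph above:

```python
import random

random.seed(1)
DAYS, MEAN_TOTAL = 28, 9_000_000   # roughly 9M items/day, as in the graph

def daily_totals(n_gangs, p_active):
    """Each gang sends its full output to us only on days it targets us;
    per-gang volume is scaled so the long-run daily mean stays fixed."""
    per_gang = MEAN_TOTAL / (n_gangs * p_active)
    return [
        sum(per_gang for _ in range(n_gangs) if random.random() < p_active)
        for _ in range(DAYS)
    ]

for n in (1000, 5):   # many small independent gangs vs. a handful of big ones
    totals = daily_totals(n, p_active=0.5)
    print(f"{n:4d} gangs: min {min(totals):>12,.0f}  max {max(totals):>12,.0f}")
```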

If only a few large gangs are operating (and other people are detecting these huge swings of activity as well) then that’s very significant for public policy. One can have sympathy for police officers and regulators faced with the prospect of dealing with hundreds or thousands of spammers; dealing with them all would take many (rather boring and frustrating) lifetimes. But if there are, say, five big gangs at most, that suddenly looks like a tractable problem.

Spam is costing us [allegedly] billions (and is a growing problem for the developing world), so there are all sorts of economic and diplomatic reasons for tackling it. So tell your local spam law enforcement officials to have a look at the graph of Demon Internet’s traffic. It tells them that trying to do something about the spammers currently makes a lot of sense, and that by tracking down just a handful of people, they could make a real difference!

TK Maxx and banking regulation

Today’s news coverage of the theft of 46m credit card numbers from TK Maxx underlines a number of important issues in security, economics and regulation. First, US cardholders are treated much better than customers here – over there, the store will have to write to them and apologise. Here, cardholders might not have been told at all were it not that some US cardholders also had their data stolen from the computer centre in Watford. We need a breach reporting law in the UK; even the ICO agrees.

Second, from the end of this month, UK citizens won’t be able to report bank or card fraud to the police; they’ll have to report it to the bank instead, which may or may not then report it to the police. (The Home Office wants to massage the crime statistics downwards, while the banks want to be able to control and direct such police investigations as take place.)

Third, this week the UK government agreed to support the EU Payment Services Directive, which (unless the European Parliament amends it) looks set to level down consumer protection against card fraud in Europe to the lowest common denominator.

Oh, and I think it’s disgraceful that the police’s Dedicated Cheque and Plastic Crime Unit is jointly funded and staffed by the banks. The Financial Ombudsman Service, which is also funded by the banks, is notoriously biased against cardholders, and it’s not acceptable for the police to follow it down that path. When bankers tell customers who complain about fraud ‘Our systems are secure so it must be your fault’, that’s fraud. Police officers should not side with fraudsters against their victims. And it’s not just financial crime investigations that suffer when policemen leave it to the banks to investigate and adjudicate card fraud; when policemen don’t understand fraud, they screw up elsewhere too. For example, there have been dozens of cases where people whose credit card numbers were stolen and used to buy child pornography were wrongfully prosecuted, including at least one tragic case.

Devote your day to democracy

The Open Rights Group are looking for volunteers to observe the electronic voting and counting pilots being tested in eleven areas around the UK during the May 3, 2007 elections. Richard and I have volunteered for the Bedford pilot, but there are still many other areas that need help. If you have the time to spare, find out the details and sign the pledge. You will need to be fast; the deadline for registering as an observer is April 4, 2007.

The e-voting areas are:

  • Rushmoor
  • Sheffield
  • Shrewsbury & Atcham
  • South Bucks
  • Swindon (near Wroughton, Draycot Foliat, Chisledon)

and the e-counting pilot areas are:

  • Bedford
  • Breckland
  • Dover
  • South Bucks
  • Stratford-upon-Avon
  • Warwick (near Leek Wootton, Old Milverton, Leamington)

One of the strongest objections to e-voting and e-counting is the lack of transparency. The source code for the voting computers is rarely open to audit, and even if it is, voters have no assurance that the device they are using has been loaded with the same software as was validated. To try to find out more about how the e-counting system will work, I sent a freedom of information request to Bedford council.

If you would like to find out more about e-voting and e-counting systems, you might like to consider making your own request, but remember that public bodies are permitted 20 working days (about a month) to reply, so there is not much time before the election. For general information on the Freedom of Information Act, see the guide book from the Campaign for Freedom of Information.