Monthly Archives: April 2007

Extreme online risks

An article in the Guardian, and a more detailed story in PC Pro, give the background to Operation Ore. In this operation, hundreds (and possibly thousands) of innocent men were raided by the police on suspicion of downloading child pornography, when in fact they had simply been victims of credit card fraud. The police appear to have completely misunderstood the forensic evidence; once the light began to dawn, it seems that they closed ranks and covered up. These stories follow an earlier piece in PC Pro which first brought the problem to public attention in 2005.

Recently we were asked by the Lords Science and Technology Committee whether failures of online security caused real problems, or were exaggerated. While there is no doubt that many people talk up the threats, here is a real case in which online fraud has done much worse harm than simply emptying bank accounts. Having the police turn up at six in the morning, search your house, tell your wife that you’re a suspected pedophile, and bring social workers along to interview your children, must be a horrific experience. Over thirty men have killed themselves. At least one appears to have been innocent. As this story develops, I believe it will come to be seen as the worst policing scandal in the UK for many years.

I remarked recently that it was a bad idea for the police to depend on the banks for expertise on card fraud, and to accept their money to fund such investigations as the banks wanted carried out. Although Home Office and DTI ministers say they’re happy with these arrangements, Operation Ore shows that the police should not compromise their independence and their technical capability for short-term political or financial convenience. The results can be tragic.

Debug mode = hacking tool?

We have recently been implementing an attack on ZigBee communication. The ZigBee chip we have been using works pretty much like any other — it listens on a selected channel, and when a packet is being transmitted, the data is stored in an internal buffer. When the whole packet has been received, an interrupt is signalled and the micro-controller can read out the whole packet at once.
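To make that buffered path concrete, here is a minimal firmware-style sketch in C. The register names (RX_LEN, RX_FIFO), their addresses and the interrupt handler are invented for illustration and are not taken from any particular transceiver’s datasheet; the point is simply that the micro-controller only sees the frame after the radio has buffered all of it.

```c
/* Minimal sketch of a typical buffered receive path (hypothetical registers). */
#include <stdint.h>

/* Imaginary memory-mapped radio registers, for illustration only. */
#define RX_LEN  (*(volatile uint8_t *)0x40001000u)  /* length of the buffered frame */
#define RX_FIFO (*(volatile uint8_t *)0x40001004u)  /* read one byte out of the radio buffer */

#define MAX_FRAME_LEN 127u                          /* maximum IEEE 802.15.4 frame size */

static volatile uint8_t rx_buf[MAX_FRAME_LEN];
static volatile uint8_t rx_ready;                   /* set once a complete frame is available */

/* The radio raises this interrupt only after the *whole* frame sits in its
 * internal buffer; the micro-controller then drains the buffer in one go. */
void rx_packet_isr(void)
{
    uint8_t len = RX_LEN;
    for (uint8_t i = 0; i < len && i < MAX_FRAME_LEN; i++)
        rx_buf[i] = RX_FIFO;
    rx_ready = 1;
}
```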

What we needed was a bit more direct access to the MAC layer. Our first idea was to find another chip, as we could not do anything at the level of abstraction just described. On second thought, we read the datasheet carefully and found that there is an “unbuffered mode” for receiving, as well as transmitting, data. One sentence reads “Un-buffered mode should be used for evaluation / debugging purposes only”, but why not give it a go?

It took a while (the datasheet does not really get the description right, there are basic factual mistakes, and the micro-controller was a bit slower to service hardware interrupts than expected), but we managed to do what we wanted: get at interesting data before the whole packet has been transmitted.
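For contrast, here is an equally hypothetical sketch of the unbuffered path, with invented register names as before. The radio hands over each byte as it arrives, so the interrupt handler can act on the first few bytes of the MAC header while the rest of the frame is still on the air; the reaction function is just a placeholder for whatever the attack does with that head start.

```c
/* Sketch of an unbuffered receive path (hypothetical registers): the radio
 * interrupts once per received byte rather than once per complete frame. */
#include <stdint.h>

#define RX_BYTE (*(volatile uint8_t *)0x40001008u)  /* latest byte from the radio */

static volatile uint8_t frame[127];
static volatile uint8_t idx;

/* Placeholder for the attack-specific logic triggered by the early data. */
static void react_early(volatile const uint8_t *hdr)
{
    (void)hdr;
}

/* Must complete well before the next byte arrives (about 32 us per byte at
 * 802.15.4's 250 kbit/s), which is where a micro-controller that is slow to
 * service interrupts starts to hurt. */
void rx_byte_isr(void)
{
    if (idx < sizeof frame)
        frame[idx++] = RX_BYTE;
    if (idx == 3)                 /* frame control (2 bytes) + sequence number seen */
        react_early(frame);
}
```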

This was not the first occasion on which a debug mode or debug information saved us from defeat when implementing an attack, and it made me think a bit.

This sort of approach represents exactly the original meaning of hacking and hackers. It seems that this kind of activity is slowly returning to universities as more and more people implement attacks to demonstrate their ideas. Implementing complicated systems such as role-based access control is (my impression) less popular, because real life shows that there will be “buffer overflows” allowing all the cleverness to be bypassed. Not many people are interested in doing research into software vulnerabilities either. On the other hand, more attacks on hardware (stealthy, subtle ones) are being devised and implemented.

The second issue is much more general: is it the case that there will always be a way to get around the official (or intended) application interface? Certainly there are products that restrict access to, or remove, debugging options when they are prepared for production — smart-cards are a typical example. But disabling debug features introduces very strong limitations. It becomes very hard, or even impossible, to check the correct functioning of the product (a hardware chip, a piece of software) — something not really desirable when the product is to be used as a component in larger systems. And definitely not desirable for hackers …

There aren’t that many serious spammers any more

I’ve recently been analysing the incoming email traffic data for Demon Internet, a large(ish) UK ISP, for the first four weeks of March 2007. The raw totals show a very interesting picture:

Email & Spam traffic at Demon Internet, March 2007

The top four lines are the amount of incoming email that was detected as “spam” by the Cloudmark technology that Demon now uses. The values lie in a range of 5 to 13 million items per day, with the day of the week being irrelevant and huge swings from day to day: see how 5 million items on Sunday 18th is followed by 13 million items on Tuesday 20th!

The bottom four lines are the amount of incoming email that was not detected as spam (this also excludes incoming items with a “null” sender, which will be bounces, almost certainly all “backscatter” from remote sites “bouncing” spam with forged senders). The values here are between about 2 and 4 million items a day, following a clear pattern from week to week, with lower values at the weekends.

There’s an interesting rise in non-spam email on Tuesday 27th, which corresponds to a new type of “pump and dump” spam (mainly in German) that clearly wasn’t immediately spotted as spam. By the next day, things were back to normal.

The figures and patterns are interesting in themselves, but they show how summarising the month as a single average spam figure (it was in fact 73%) hides a much more complex picture: a day with 5 million spam items and 4 million legitimate ones would be only about 56% spam, while a day with 13 million spam items and 2 million legitimate ones would be about 87%.

The picture also hides a deeper truth. There’s no “law of large numbers” operating here. That is to say, the incoming spam is not composed of lots of individual spam gangs, each doing their own thing and thereby generating a fairly steady amount of spam from day to day. Instead, it is clear that very significant volumes of spam are being sent by a very small number of gangs, and the swings occur as they switch their destinations around: today it’s .uk, tomorrow it’s aol.com, and on Tuesday it will be .de (hmm, perhaps that’s why they hit .demon addresses? a missing $ from their regular expression!).
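To illustrate that regular-expression guess (entirely speculation on my part, written here with POSIX regular expressions rather than whatever the spammers actually use): without the trailing $ anchor, a pattern meant to pick out .de addresses happily matches .demon ones too.

```c
/* Illustration only: a missing '$' anchor makes a ".de" pattern match
 * ".demon" addresses as well.  POSIX regex, not the spammers' actual code. */
#include <regex.h>
#include <stdio.h>

static int matches(const char *pattern, const char *text)
{
    regex_t re;
    if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0)
        return 0;
    int rc = regexec(&re, text, 0, NULL, 0);
    regfree(&re);
    return rc == 0;
}

int main(void)
{
    const char *addr = "victim@host.demon.co.uk";
    printf("anchored   \\.de$ : %s\n", matches("\\.de$", addr) ? "match" : "no match");
    printf("unanchored \\.de  : %s\n", matches("\\.de", addr) ? "match" : "no match");
    return 0;
}
```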

If there are only a few large gangs operating — and other people are detecting these huge swings of activity as well — then that’s very significant for public policy. One can have sympathy for police officers and regulators faced with the prospect of dealing with hundreds or thousands of spammers; dealing with them all would take many (rather boring and frustrating) lifetimes. But if there are, say, five big gangs at most, then that’s suddenly looking like a tractable problem.

Spam is costing us [allegedly] billions (and is a growing problem for the developing world), so there are all sorts of economic and diplomatic reasons for tackling it. Tell your local spam law-enforcement officials to have a look at the graph of Demon Internet’s traffic: it tells them that trying to do something about the spammers currently makes a lot of sense, and that by tracking down just a handful of people they will be capable of making a real difference!