How quickly are phishing websites taken down?

Tyler Moore and I have a paper (An Empirical Analysis of the Current State of Phishing Attack and Defence) accepted at this year’s Workshop on the Economics of Information Security (WEIS 2007), in which we examine how long phishing websites remain available before the impersonated bank gets them “taken down”.

Follow the money, stupid

The Federal Reserve commissioned me to research and write a paper on fraud, risk and nonbank payment systems. I found that phishing is facilitated by payment systems like eGold and Western Union, which make the recovery of stolen funds more difficult. Traditional payment systems such as cheques and credit cards are revocable: cheques can bounce, and credit card charges can be charged back. However, some modern systems provide irrevocability without charging an appropriate risk premium, and this attracts the bad guys. (After I submitted the paper, and before it was presented on Friday, eGold was indicted.)

I also became convinced that the financial market controls used to fight fraud, money laundering and terrorist finance have become unbalanced as they have been beefed up post-9/11. The modern obsession with ‘identity’ – of asking even poor people living in huts in Africa for an ID document and two utility bills before they can open a bank account – is not only ridiculous and often discriminatory; it has also led banks and regulators to take their eye off the ball, and to replace risk reduction with due diligence.

In real life, following the money is just as important as following the man. It’s time for the system to be rebalanced.

Extreme online risks

An article in the Guardian, and a more detailed story in PC Pro, give the background to Operation Ore. In this operation, hundreds (and possibly thousands) of innocent men were raided by the police on suspicion of downloading child pornography, when in fact they had simply been victims of credit card fraud. The police appear to have completely misunderstood the forensic evidence; once the light began to dawn, it seems that they closed ranks and covered up. These stories follow an earlier piece in PC Pro which first brought the problem to public attention in 2005.

Recently we were asked by the Lords Science and Technology Committee whether failures of online security caused real problems, or were exaggerated. While there is no doubt that many people talk up the threats, here is a real case in which online fraud has done much worse harm than simply emptying bank accounts. Having the police turn up at six in the morning, search your house, tell your wife that you’re a suspected paedophile, and interview your children with social workers in tow, must be a horrific experience. Over thirty men have killed themselves. At least one appears to have been innocent. As this story develops, I believe it will come to be seen as the worst policing scandal in the UK for many years.

I remarked recently that it was a bad idea for the police to depend on the banks for expertise on card fraud, and to accept their money to fund such investigations as the banks wanted carried out. Although Home Office and DTI ministers say they’re happy with these arrangements, Operation Ore shows that the police should not compromise their independence and their technical capability for short-term political or financial convenience. The results can be tragic.

Debug mode = hacking tool?

We have recently been implementing an attack on ZigBee communication. The ZigBee chip we have been using works pretty much like any other — it listens on a selected channel, and when a packet is being transmitted, the data is stored in an internal buffer. When the whole packet has been received, an interrupt is signalled and the micro-controller can read out the whole packet at once.

What we needed was rather more direct access to the MAC layer. Our first idea was to find another chip, as we could not do anything at the described level of abstraction. On second thoughts, we carefully read the datasheet and found that there is an “unbuffered mode” for receiving, as well as for transmitting, data. One sentence reads “Un-buffered mode should be used for evaluation / debugging purposes only”, but why not give it a go?

It took a while (the datasheet does not really get the description right, containing some basic factual mistakes, and the micro-controller was a bit slower to serve hardware interrupts than expected) but we managed to do what we wanted: read interesting data before the whole packet has been transmitted.
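For the curious, here is a minimal sketch of the difference between the two receive modes. The radio interface is entirely hypothetical (a stand-in for the real chip’s registers and interrupts), so treat it as an illustration of the idea rather than working driver code:

    # Hypothetical radio interface -- a stand-in for the real chip's
    # registers and interrupts, used only to illustrate the two modes.
    class FakeRadio:
        def __init__(self, frame: bytes):
            self._frame = frame
            self._pos = 0

        def read_fifo(self) -> bytes:
            # Buffered mode: the whole frame, available only once the
            # end-of-packet interrupt has fired.
            return self._frame

        def read_next_byte(self) -> int:
            # Unbuffered ("debug only") mode: one byte per interrupt,
            # handed over while the rest of the frame is still on the air.
            b = self._frame[self._pos]
            self._pos += 1
            return b

    def unbuffered_receive(radio: FakeRadio, header_len: int) -> bytes:
        # Read just the header bytes as they arrive, so we can react
        # before the transmission has finished.
        return bytes(radio.read_next_byte() for _ in range(header_len))

    frame = bytes.fromhex("61883412345678")               # made-up frame contents
    print(unbuffered_receive(FakeRadio(frame), 3).hex())  # prints '618834'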

This was not the first occasion on which a debug mode, or debug information, has saved us from defeat when implementing an attack. It made me think a bit.

This sort of approach exactly represents the original meaning of hacking and hackers. It seems that this sort of activity is slowly returning to universities, as more and more people implement attacks to demonstrate their ideas. My impression is that it is less popular to implement complicated systems, such as role-based access control, because real life shows that there will be “buffer overflows” allowing all the cleverness to be bypassed. Not many people are interested in doing research into software vulnerabilities either. On the other hand, more attacks on hardware (stealthy, subtle ones) are being devised and implemented.

The second issue is much more general. Will there always be a way to get around the official (or intended) application interface? Certainly, there are products that restrict access to, or remove, debugging options when the product is prepared for production — smart-cards are a typical example. But disabling debug features imposes very strong limitations. It becomes very hard, or even impossible, to check the correct functioning of the product (hardware chip, piece of software) — something not really desirable when the product is to be used as a component in larger systems. And definitely not desirable for hackers…

There aren’t that many serious spammers any more

I’ve recently been analysing the incoming email traffic data for Demon Internet, a large(ish) UK ISP, for the first four weeks of March 2007. The raw totals show a very interesting picture:

[Graph: Email & Spam traffic at Demon Internet, March 2007]

The top four lines are the amount of incoming email that was detected as “spam” by the Cloudmark technology that Demon now uses. The values lie in a range of 5 to 13 million items per day, with the day of the week being irrelevant, and with huge swings from day to day. See how 5 million items on Sunday 18th is followed by 13 million items on Tuesday 20th!

The bottom four lines are the amount of incoming email that was not detected as spam (this figure also excludes incoming items with a “null” sender, which will be bounces, almost certainly all “backscatter” from remote sites “bouncing” spam with forged senders). The values here are between about 2 and 4 million items a day, with a clear pattern repeating from week to week, and with lower values at the weekends.

There’s an interesting rise in non-spam email on Tuesday 27th, which corresponds to a new type of “pump and dump” spam (mainly in German) that clearly wasn’t immediately spotted as such. By the next day, things were back to normal.

The figures and patterns are interesting in themselves, but they show how summarising spam as a single average value (it was in fact 73%) hides a much more complex picture.
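To make the point concrete, here is a toy calculation with made-up daily totals in the ranges quoted above (illustrative numbers only, not Demon’s actual figures):

    # Made-up daily totals, in millions of items, roughly matching the
    # ranges described above -- NOT Demon Internet's actual data.
    spam    = [5, 13, 9, 11, 6, 12, 8]   # detected as spam
    genuine = [2,  4, 3,  3, 2,  4, 3]   # not detected as spam

    average = 100 * sum(spam) / (sum(spam) + sum(genuine))
    print(f"overall average: {average:.0f}% spam")

    for s, g in zip(spam, genuine):
        print(f"{s + g:>2}M items/day, {100 * s / (s + g):.0f}% spam")

    # The single average (~75% here) conceals both the large swings in
    # absolute volume and the day-to-day movement in the ratio itself.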

The picture also hides a deeper truth. There’s no “law of large numbers” operating here. That is to say, the incoming spam is not composed of lots of individual spam gangs, each doing their own thing and thereby generating a fairly steady amount of spam from day to day. Instead, it is clear that very significant volumes of spam are being sent by a very small number of gangs, who switch their destinations around: today it’s .uk, tomorrow it’s aol.com, and on Tuesday it will be .de (hmm, perhaps that’s why they hit .demon addresses? a missing $ from their regular expression!).
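To unpack that parenthetical joke: a pattern for German addresses that lacks an end-of-string anchor will happily match .demon domains too. A minimal illustration (we have, of course, no idea what the spammers’ actual pattern looks like):

    import re

    domains = ["example.de", "myhost.demon.co.uk"]

    sloppy   = re.compile(r"\.de")    # matches both: '.de' occurs inside '.demon'
    anchored = re.compile(r"\.de$")   # matches only the genuine German domain

    for d in domains:
        print(d, bool(sloppy.search(d)), bool(anchored.search(d)))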

If there are only a few large gangs operating — and other people are detecting these huge swings of activity as well — then that’s very significant for public policy. One can have sympathy for police officers and regulators faced with the prospect of dealing with hundreds or thousands of spammers; dealing with them all would take many (rather boring and frustrating) lifetimes. But if there are, say, five big gangs at most, then suddenly this looks like a tractable problem.

Spam is costing us [allegedly] billions (and is a growing problem for the developing world), so there are all sorts of economic and diplomatic reasons for tackling it. So tell your local spam law enforcement officials to have a look at the graph of Demon Internet’s traffic. It tells them that trying to do something about the spammers makes a lot of sense right now — and that by tracking down just a handful of people, they will be capable of making a real difference!

TK Maxx and banking regulation

Today’s news coverage of the theft of 46m credit card numbers from TK Maxx underlines a number of important issues in security, economics and regulation. First, US cardholders are treated much better than customers here – over there, the store will have to write to them and apologise. Here, cardholders might not have been told at all were it not that some US cardholders also had their data stolen from the computer centre in Watford. We need a breach reporting law in the UK; even the ICO agrees.

Second, from the end of this month, UK citizens won’t be able to report bank or card fraud to the police; you’ll have to report it to the bank instead, which may or may not then report it to the police. (The Home Office wants to massage the crime statistics downwards, while the banks want to be able to control and direct such police investigations as take place.)

Third, this week the UK government agreed to support the EU Payment Services Directive, which (unless the European Parliament amends it) looks set to level down consumer protection against card fraud in Europe to the lowest common denominator.

Oh, and I think it’s disgraceful that the police’s Dedicated Cheque and Plastic Crime Unit is jointly funded and staffed by the banks. The Financial Ombudsman service, which is also funded by the banks, is notoriously biased against cardholders, and it’s not acceptable for the police to follow them down that path. When bankers tell customers who complain about fraud ‘Our systems are secure so it must be your fault’, that’s fraud. Police officers should not side with fraudsters against their victims. And it’s not just financial crime investigations that suffer because policemen leave it to the banks to investigate and adjudicate card fraud; when policemen don’t understand fraud, they screw up elsewhere too. For example, there have been dozens of cases where people whose credit card numbers were stolen and used to buy child pornography were wrongfully prosecuted, including at least one tragic case.

Devote your day to democracy

The Open Rights Group are looking for volunteers to observe the electronic voting and counting pilots being tested in eleven areas around the UK during the May 3, 2007 elections. Richard and I have volunteered for the Bedford pilot, but there are still many other areas that need help. If you have the time to spare, find out the details and sign the pledge. You will need to be quick; the deadline for registering as an observer is April 4, 2007.

The e-voting areas are:

  • Rushmoor
  • Sheffield
  • Shrewsbury & Atcham
  • South Bucks
  • Swindon (near Wroughton, Draycot Foliat, Chisledon)

and the e-counting pilot areas are:

  • Bedford
  • Breckland
  • Dover
  • South Bucks
  • Stratford-upon-Avon
  • Warwick (near Leek Wootton, Old Milverton, Leamington)

One of the strongest objections against e-voting and e-counting is the lack of transparency. The source code for the voting computers is rarely open to audit, and even if it is, voters have no assurance that the device they are using has been loaded with the same software as was validated. To try to find out more about how the e-counting system will work, I sent a freedom of information request to Bedford council.

If you would like to find out more about e-voting and e-counting systems, you might like to consider making your own request, but remember that public bodies are permitted 20 working days (about a month) to reply, so there is not much time before the election. For general information on the Freedom of Information Act, see the guide book from the Campaign for Freedom of Information.

What is the unit of amplification for DoS?

Roger Needham’s work warned us of the potential damage of denial-of-service attacks. Since then, protocol designers have tried to minimize the storage committed to unauthenticated partners, as well as to prevent ‘amplification effects’ that could help DoS adversaries: a single unit of network communication should lead to at most one unit of network communication in reply. This way an adversary cannot use network services to amplify his or her ability to flood other nodes. The key question that arises is: what is the unit of network communication?

My favourite authentication protocol, which incorporates state-of-the-art DoS prevention features, is JFKr (Just Fast Keying with responder anonymity). An RFC is also available for those keen to implement it. JFKr implements a signed ephemeral Diffie-Hellman exchange, with DoS prevention cookies used by the responder to thwart storage-exhaustion DoS attacks directed against him.

The protocol goes a bit like this (full details in section 2.3 of the RFC):

    Message 1, I->R: Ni, g^i

    Message 2, R->I: Ni, Nr, g^r, GRPINFOr, HMAC{HKr}(Ni, Nr, g^r)

    Message 3, I->R: Ni, Nr, g^i, g^r, HMAC{HKr}(Ni, Nr, g^r),
                     E{Ke}(IDi, sa, SIG{i}(Ni, Nr, g^i, g^r, GRPINFO)),
                     HMAC{Ka}(‘I’, E{Ke}(IDi, sa, SIG{i}(Ni, Nr, g^i, g^r, GRPINFO)))

    Message 4, R->I: E{Ke}(IDr, sa’, SIG{r}(Ni, Nr, g^i, g^r)),
                     HMAC{Ka}(‘R’, E{Ke}(IDr, sa’, SIG{r}(Ni, Nr, g^i, g^r)))

Note that after Message 2 is sent, ‘R’ (the responder) does not need to store anything, since all the data necessary to perform authentication will be sent back by ‘I’ in Message 3. One is also inclined to think that there is no amplification effect for ‘I’ to exploit, since a single Message 1 generates a single reply, Message 2. Is this the case?

We are implementing (at K.U.Leuven) a traffic analysis prevention system in which all data is transported in UDP packets of a fixed length of 140 bytes. As it happens, Message 1 fits into a single packet, since it only carries a single nonce and no cookie, but Message 2 does not (the JFKi version of the protocol has an even longer Message 2). This means that we have to split the reply in Message 2 over two packets, carrying 280 bytes in total.

A vanilla implementation of JFKr would allow the following DoS attack. The adversary sends many Message 1 UDP packets, pretending to initiate authentication sessions with an unsuspecting server ‘R’ using JFKr, and forges the source IP address of the UDP packets so that they appear to come from the victim’s IP address. Each attempt costs the adversary one UDP packet of 140 bytes, while the server ‘R’ follows the protocol and replies with two UDP packets of 140 bytes each, sent to the victim’s IP address. The adversary can of course run this attack over many sessions or servers in parallel, and so direct at the victim twice the number of packets, and twice the volume of data, that the adversary itself sends.

What went wrong here? The assumption that one message (1) is answered with one message (2) is not sufficient (in our case, but also in general) to guarantee that the adversary cannot amplify a DoS attack. One has to count the actual bits and bytes on the wire before making this judgement.

How do we fix this? It is really easy: the responder ‘R’ requires the first message (1) to be at least as long (in bytes) as the reply message (2). The initiator achieves this by appending a padding nonce (J) to the end of the message.

    Message 1, I->R: Ni, g^i, J

This forces the attacker to send, byte for byte, as much traffic as will hit the victim’s computer.
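Here is a minimal sketch of that rule, assuming our 140-byte fixed-length transport; the field sizes are simplified stand-ins for illustration, not the actual JFKr encodings:

    import os

    PACKET_LEN = 140              # fixed UDP payload size in our transport
    REPLY_LEN = 2 * PACKET_LEN    # Message 2 is split over two packets

    def pad_message1(ni: bytes, g_i: bytes) -> bytes:
        # Initiator: append the random padding nonce J so that Message 1
        # is at least as long as the reply it will provoke.
        body = ni + g_i
        j = os.urandom(max(0, REPLY_LEN - len(body)))
        return body + j

    def accept_message1(packet: bytes) -> bool:
        # Responder: drop any Message 1 shorter than the bytes we would
        # send back; a spoofed source address then gains no amplification.
        return len(packet) >= REPLY_LEN

    msg1 = pad_message1(os.urandom(32), os.urandom(192))   # toy field sizes
    assert accept_message1(msg1)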

e-Government Framework is Rather Broken

FIPR colleagues and I have written a response to the recent Cabinet Office consultation on the proposed Framework for e-Government. We’re not very impressed. Whitehall’s security advisers don’t seem to understand phishing; they protect information much less when its compromise could harm private citizens, rather than government employees (which is foolish given how terrorists and violent lobby groups work nowadays); and as well as the inappropriate threat model, there are inappropriate policy models. Government departments that follow this advice are likely to build clunky, expensive, insecure systems.

How (not) to write an abstract

Having just finished another pile of conference-paper reviews, it strikes me that the single most common stylistic problem with papers in our field is the abstract.

Disappointingly few Computer Science authors seem to understand the difference between an abstract and an introduction. Far too many abstracts are useless because they read just like the first paragraphs of the “Introduction” section; the separation between the two would not be obvious if there were no change in font or a heading in between.

The two serve completely different purposes:

Abstracts are concise summaries for experts. Write your abstract for readers who are familiar with >50% of the references in your bibliography, who will soon have read at least the abstracts of the rest, and who are quite likely to cite your work in their own next paper. Answer implicitly in your abstract the experts’ questions “What’s new here?” and “What was actually achieved?”. Write in a form that squeezes as many technical details as you can about what you actually did into about 250 words (or whatever your publisher specifies). Include details about any experimental setup and results. Make sure all the crucial keywords that describe your work appear in either the title or the abstract.

Introductions are for a wider audience. Think of your reader as a first-year graduate student who is not yet an expert in your field, but is interested in becoming one. An introduction should answer questions like “Why is the general topic of your work interesting?”, “What do you ultimately want to achieve?”, “What are the most important recent related developments?”, and “What inspired your work?”. None of this belongs in an abstract, because experts will know the answers already.

Abstract and introduction are alternative paths into your paper. You may think of an abstract also as a kind of entrance test: a reader who fully understands your abstract is likely to be an expert and therefore should be able to skip at least the first section of the paper. A reader who does not understand something in the abstract should focus on the introduction, which gently introduces and points to all the necessary background knowledge to get started.