Monthly Archives: March 2007

TK Maxx and banking regulation

Today’s news coverage of the theft of 46m credit card numbers from TK Maxx underlines a number of important issues in security, economics and regulation. First, US cardholders are treated much better than customers here – over there, the store will have to write to them and apologise. Here, cardholders might not have been told at all were it not that some US cardholders also had their data stolen from the computer centre in Watford. We need a breach reporting law in the UK; even the ICO agrees.

Second, from the end of this month, UK citizens won’t be able to report bank or card fraud to the police; you’ll have to report it to the bank instead, which may or may not then report it to the police. (The Home Office wants to massage the crime statistics downwards, while the banks want to be able to control and direct such police investigations as take place.)

Third, this week the UK government agreed to support the EU Payment Services Directive, which (unless the European Parliament amends it) looks set to level down consumer protection against card fraud in Europe to the lowest common denominator.

Oh, and I think it’s disgraceful that the police’s Dedicated Cheque and Plastic Crime Unit is jointly funded and staffed by the banks. The Financial Ombudsman service, which is also funded by the banks, is notoriously biased against cardholders, and it’s not acceptable for the police to follow them down that path. When bankers tell customers who complain about fraud ‘Our systems are secure so it must be your fault’, that’s fraud. Police officers should not side with fraudsters against their victims. And it’s not just financial crime investigations that suffer because policemen leave it to the banks to investigate and adjudicate card fraud; when policemen don’t understand fraud, they screw up elsewhere too. For example, there have been dozens of cases where people whose credit card numbers were stolen and used to buy child pornography were wrongfully prosecuted, including at least one tragic case.

Devote your day to democracy

The Open Rights Group are looking for volunteers to observe the electronic voting and counting pilots being tested in eleven areas around the UK during the May 3, 2007 elections. Richard and I have volunteered for the Bedford pilot, but many other areas still need help. If you have the time to spare, find out the details and sign the pledge. You will need to be fast; the deadline for registering as an observer is April 4, 2007.

The e-voting areas are:

  • Rushmoor
  • Sheffield
  • Shrewsbury & Atcham
  • South Bucks
  • Swindon (near Wroughton, Draycot Foliat, Chisledon)

and the e-counting pilot areas are:

  • Bedford
  • Breckland
  • Dover
  • South Bucks
  • Stratford-upon-Avon
  • Warwick (near Leek Wootton, Old Milverton, Leamington)

One of the strongest objections to e-voting and e-counting is the lack of transparency. The source code for the voting computers is rarely open to audit, and even when it is, voters have no assurance that the device they are using has been loaded with the same software that was validated. To try to find out more about how the e-counting system will work, I sent a freedom of information request to Bedford council.

If you would like to find out more about e-voting and e-counting systems, you might like to consider making your own request, but remember that public bodies are permitted 20 working days (about a month) to reply, so there is not much time before the election. For general information on the Freedom of Information Act, see the guide book from the Campaign for Freedom of Information.

What is the unit of amplification for DoS?

Roger Needham’s work warned us of the potential damage of Denial-of-Service attacks. Since then, protocol designers have tried to minimize the storage committed to unauthenticated partners, as well as to prevent ‘amplification effects’ that could help DoS adversaries: a single unit of network communication should lead to at most one unit of network communication in reply. This way an adversary cannot use network services to amplify his or her ability to flood other nodes. The key question that arises is: what is the unit of network communication?

My favorite authentication protocol, which incorporates state-of-the-art DoS prevention features, is JFKr (Just Fast Keying with responder anonymity). An RFC is also available for those keen to implement it. JFKr implements a signed ephemeral Diffie-Hellman exchange, with DoS-prevention cookies used by the responder to thwart storage-exhaustion DoS attacks directed against him.

The protocol goes a bit like this (full details in section 2.3 of the RFC):

Message 1, I->R: Ni, g^i

Message 2, R->I: Ni, Nr, g^r, GRPINFOr, HMAC{HKr}(Ni, Nr, g^r)

Message 3, I->R: Ni, Nr, g^i, g^r, HMAC{HKr}(Ni, Nr, g^r),
E{Ke}(IDi, sa, SIG{i}(Ni, Nr, g^i, g^r, GRPINFO)),
HMAC{Ka}(‘I’, E{Ke}(IDi, sa, SIG{i}(Ni, Nr, g^i, g^r, GRPINFO)))

Message 4, R->I: E{Ke}(IDr, sa’, SIG{r}(Ni, Nr, g^i, g^r)),
HMAC{Ka}(‘R’, E{Ke}(IDr, sa’, SIG{r}(Ni, Nr, g^i, g^r)))

Note that after message 2 is sent, ‘R’ (the responder) does not need to store anything, since all the data necessary to perform authentication will be sent back by ‘I’ in message 3. One is also inclined to think that there is no amplification effect that ‘I’ could benefit from for DoS, since a single message 1 generates a single reply, namely message 2. Is this the case?

We are implementing (at K.U.Leuven) a traffic analysis prevention system in which all data is transported in UDP packets of a fixed length of 140 bytes. As it happens, message 1 is slightly shorter than message 2, since it only carries a single nonce and no cookie. (The JFKi version of the protocol has an even longer message 2.) This means that we have to split the reply, message 2, over two packets, carrying 280 bytes in total.

A vanilla implementation of JFKr would allow the following DoS attack. The adversary sends many message 1 UDP packets, pretending to initiate authentication sessions with an unsuspecting server ‘R’ that runs JFKr. Furthermore, the adversary forges the source IP address of the UDP packets to make them appear to come from the victim’s IP address. Each such packet costs the adversary 140 bytes. The server ‘R’ follows the protocol and replies with two UDP packets of 140 bytes each, sent to the victim’s IP address. The adversary can of course run this attack against many sessions or servers in parallel, and so direct twice the number of packets and bytes at the victim.
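To make the amplification concrete, here is a back-of-the-envelope sketch; the 140-byte packet size is that of our transport, while the message lengths fed to it below are merely illustrative.

import math

PACKET_SIZE = 140   # fixed UDP payload size of our transport, in bytes

def packets_needed(message_len: int) -> int:
    # Number of fixed-size packets needed to carry a message of this length.
    return math.ceil(message_len / PACKET_SIZE)

# Message 1 fits in one packet; message 2 spills over into two because of
# the extra nonce, group info and cookie (the lengths used here are made up).
attacker_bytes = packets_needed(100) * PACKET_SIZE   # 140 bytes sent by the attacker
victim_bytes = packets_needed(150) * PACKET_SIZE     # 280 bytes hitting the victim

print(victim_bytes / attacker_bytes)   # 2.0 -- a twofold amplification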

What went wrong here? The assumption that one message (1) is answered by a single message (2) is not sufficient, in our case but also in general, to guarantee that the adversary cannot amplify a DoS attack. One has to count the actual bits and bytes on the wire before making this judgment.

How do we fix this? It is really easy: the responder ‘R’ requires the first message (1) to be at least as long, in bytes, as the reply message (2). The initiator can achieve this by appending a padding nonce (J) at the end of the message:

Message 1, I->R: Ni, g^i, J

This forces the attacker to send, byte for byte, as much traffic as will hit the victim’s computer.
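Here is a sketch of the fix on both sides, under the same assumptions as above (140-byte packets, a two-packet reply, and an illustrative field encoding rather than the real JFKr one).

import os

PACKET_SIZE = 140
REPLY_LEN = 2 * PACKET_SIZE   # message 2 occupies two fixed-size packets

def build_message1(ni: bytes, g_i: bytes) -> bytes:
    # The initiator appends a padding nonce J so that message 1 is at
    # least as long as the reply it will trigger.
    body = ni + g_i
    j = os.urandom(max(0, REPLY_LEN - len(body)))   # the padding nonce J
    return body + j

def responder_accepts(message1: bytes) -> bool:
    # 'R' silently drops first messages shorter than its reply, so an
    # attacker can never have more bytes reflected at the victim than it
    # has already spent itself.
    return len(message1) >= REPLY_LEN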

e-Government Framework is Rather Broken

FIPR colleagues and I have written a response to the recent Cabinet Office consultation on the proposed Framework for e-Government. We’re not very impressed. Whitehall’s security advisers don’t seem to understand phishing; they protect information much less when its compromise could harm private citizens, rather than government employees (which is foolish given how terrorists and violent lobby groups work nowadays); and as well as the inappropriate threat model, there are inappropriate policy models. Government departments that follow this advice are likely to build clunky, expensive, insecure systems.

How (not) to write an abstract

Having just finished another pile of conference-paper reviews, it strikes me that the single most common stylistic problem with papers in our field is the abstract.

Disappointingly few Computer Science authors seem to understand the difference between an abstract and an introduction. Far too many abstracts are useless because they read just like the first paragraphs of the “Introduction” section; the separation between the two would not be obvious if there were no change in font or a heading in between.

The two serve completely different purposes:

Abstracts are concise summaries for experts. Write your abstract for readers who are familiar with >50% of the references in your bibliography, who will soon have read at least the abstracts of the rest, and who are quite likely to quote your work in their own next paper. Answer implicitly in your abstract the questions experts will ask, such as “What’s new here?” and “What was actually achieved?”. Squeeze as many technical details about what you actually did as you can into about 250 words (or whatever your publisher specifies). Include details about any experimental setup and results. Make sure all the crucial keywords that describe your work appear in either the title or the abstract.

Introductions are for a wider audience. Think of your reader as a first-year graduate student who is not yet an expert in your field, but is interested in becoming one. An introduction should answer questions like “Why is the general topic of your work interesting?”, “What do you ultimately want to achieve?”, “What are the most important recent related developments?” and “What inspired your work?”. None of this belongs in an abstract, because experts will know the answers already.

Abstract and introduction are alternative paths into your paper. You may think of an abstract also as a kind of entrance test: a reader who fully understands your abstract is likely to be an expert and therefore should be able to skip at least the first section of the paper. A reader who does not understand something in the abstract should focus on the introduction, which gently introduces and points to all the necessary background knowledge to get started.

Identity theft without identification infrastructure

Recent comments to my last post about biometric passports have raised wider questions about the general purpose, risks and benefits of new government-supplied identification mechanisms (the wider “ID card debate” in the UK). So here is a quick summary of my basic views on this.

For some years now, the UK government has been planning to catch up with other European countries by providing a purpose-designed identification infrastructure, in order to make life simpler and reduce the risk of identity fraud (impersonation). The most visible of these plans center on a high-integrity identity register that keeps an append-only lifetime record of who exists and how they can be recognized biometrically. People will be able to get security-printed individual copies of their current record in this register (ID card, passport, biometric certificate), which they can easily present for offline verification. (What exact support is planned for remote identification over the telephone or Internet is not quite clear yet, so I’ll exclude that aspect for the moment, although the citizen PKIs already used in Finland, Belgium, etc., and under preparation elsewhere, probably give a good first idea.)

However, such plans have faced vocal opposition in the UK from “privacy advocates”, who have shown great talent in attracting continuous media attention to a rather biased view of the subject. Their main refrain is that, rather than prevent identity fraud, an identification infrastructure will help identity thieves by making it easier to access the very data that is today used by business to verify identity. I disagree. And I put “privacy advocates” into quotation marks here, because I believe that the existing practice whose continuation they advocate restricts both my privacy and my freedom.

Passports and biometric certificates

A recurring media story over the past half year has been that “a person’s identity can be stolen from new biometric passports”, which are “easy to clone” and therefore “not fit for purpose”. Most of these reports began with a widely quoted presentation by Lukas Grunwald in Las Vegas in August 2006, and continued with a report in the Guardian last November and one in this week’s Daily Mail on experiments by Adam Laurie.

I followed the development of the ISO/ICAO standards for the biometric passport closely back in 2002/2003. In my view, the worries behind this media coverage are mainly based on a deep misunderstanding of what a “biometric passport” really is. The recent reports bring nothing to light that was not already well understood, anticipated and discussed during the development of the system more than four years ago.