Monthly Archives: July 2006

Stolen mobiles story

I was just on Sky TV to debunk today’s initiative from the Home Office. The Home Secretary claimed that more rapid notification of stolen phone IMEIs between UK operators would have a significant effect on street crime.

I’m not so sure. Most mobiles stolen in the UK go abroad – the cheap ones to the third world and the flash ones to developed countries whose operators don’t subsidise handsets. As for the UK second-hand market, most mobiles can be reprogrammed (even though this is illegal). Lowering the street price of stolen handsets is, I expect, a hard problem – like raising the street price of drugs.

What the Home Office might usefully do is to crack down on mobile operators who continue to bill customers after they have reported their phones stolen and cancelled their accounts. That is a scandal. Government’s role in problems like this is to straighten out the incentives and to stop the big boys from dumping risk on their customers.

Health IT Report

Late last year I wrote a report for the National Audit Office on the health IT expenditure, strategies and goals of the UK and a number of other developed countries. This showed that our National Programme for IT is in many ways an outlier, and high-risk. Now that the NAO has published its own report, we’re allowed to make public our contribution to it.

Readers may recall that I was one of 23 computing professors who wrote to Parliament’s Health Select Committee asking for a technical review of this NHS computing project, which seems set to become the biggest computer project disaster ever. My concerns were informed by the NAO work.

Growing epidemic of card cloning

Markus points us to a story on card fraud by German TV reporter Sabine Wolf, who reported some of our recent work on how cards get cloned. She reports a number of cases in which German holidaymakers had cards cloned in Italy. In one case, a sniffer in a chip and PIN terminal at a ski lift in Livigno sent holidaymakers’ card and PIN details by SMS to Romania. These devices, which apparently first appeared in Hungary in 2003, are now becoming widespread in Europe; one model sits between a card reader and the retail terminal. (I have always refused to use my chip card at stores such as Tesco and B&Q where they swipe your card at the checkout terminal and have you enter your PIN at a separate PIN pad – that arrangement is particularly vulnerable to such sniffing attacks.)

According to Hungarian police, the crooks bribe the terminal maintenance technicians, or send people round stores pretending to be technicians; the Bavarian police currently have a case in which 150 German cardholders lost €600,000; the Guardia di Finanza in Genoa have a case in which they’ve recovered thousands of SMSs containing card data from phone company computers; a prosecutor in Bolzano believes that crooks hide in supermarkets overnight and wire up the terminals; and there are also cases from Sweden, France, and Britain. Customers tend to get blamed unless there’s such a large batch of similar frauds that the bank can’t fail to observe the pattern. (This liability algorithm gives the bankers every incentive not to look too hard.)

In Hungary, banks now routinely confirm all card transactions to their customers by SMS. Maybe that’s what banks here will be doing in a year or two (Barclays will already SMS you if you make an online payment to a new payee). It’s not ideal, though, as it keeps pushing liability onto the customer. I suspect it might take an EU directive to push the liability firmly back onto the banks, along the lines of the US Federal Reserve’s Regulation E.

Powers, Powers, and yet more Powers …

Our beloved government is once again Taking Powers in the fight against computer crime. The Home Office proposes to create cyber-ASBOs that would enable the police to ban suspects from using such dangerous tools as computers and bank accounts. This would be done in a civil court against a low evidence standard; there are squeals from the usual suspects, such as ZDNet.

The Home Office proposals will also undermine existing data protection law, for example by allowing the banks to process sensitive data obtained from the public sector (medical record privacy, anyone?) and by ‘dispelling misconceptions about consent’. I suppose some might welcome the proposed extension of ASBOs to companies. Thus, a company with repeated convictions for antitrust violations might be saddled with a list of harm-prevention conditions, for example against designing proprietary server-side protocols or destroying emails. I wonder what sort of responses the computer industry will make to this consultation 🙂

A cynic might point out that the ‘new powers’ seem in inverse proportion to the ability, or will, to use the existing ones. Ever since the South Sea Bubble in the 18th century, Britain has been notoriously lax in prosecuting bent bankers; City folk are now outraged when a Texas court dares to move from talk to action. Or take spam: although it’s now illegal to send unsolicited commercial emails to individuals in the UK, complaints don’t seem to result in action. Now trade and industry minister ‘Enver’ Hodge explains this is because there’s a loophole – it’s not illegal to spam businesses. So rather than prosecuting a spammer for spamming individuals, our beloved government will grab a headline or two by blocking this loophole. I don’t suppose Enver ever stopped to wonder how many spam runs are so well managed as not to send a single item to a single private email address – cheap headlines are more attractive than expensive, messy implementation.

This pattern of behaviour – taking new powers rather than using the existing ones – is getting too well entrenched. In cyberspace we don’t have law enforcement any more – we have the illusion of law enforcement.

New card security problem?

Yesterday my wife received through the post a pre-approved, unsolicited gold MasterCard with a credit limit of over a thousand pounds. The issuer was Debenhams and the rationale was that she has a store card anyway – if she doesn’t want to use the credit card, she is invited to cut it in half and throw it away. (Although US banks do this all the time and UK banks aren’t supposed to, I’ll leave it to the lawyers whether these marketing tactics test the limits of banking regulation.)

My point is this: the average customer has no idea how to ‘cut up’ a card now that it’s got a chip in it. Bisecting the plastic using scissors leaves the chip functional, so someone who fishes it out of the trash might use a yescard to clone it, even if they don’t know the PIN. (Of course the PIN mailer might be in the same bin.)

Here at the Lab we do have access to the means to destroy chips (HNO3, HF) but you really don’t want that stuff at home. Putting 240V through it will stop it working – but as this melts the bonding wires, an able attacker might depackage and rebond the chip.

My own suggestion would be to bisect the whole chip package using a pair of tin snips. If you don’t have those in your toolbox a hacksaw should do. This isn’t foolproof as there exist labs that can retrieve data from chip fragments, but it’s probably good enough to keep out the hackers.

It does seem a bit off, though, that card issuers now put people to the trouble of devising a means of securely disposing of electronic waste, when consumers mostly have neither the knowledge nor the tools to do so properly.

Downtime

Light Blue Touchpaper will be inaccessible for around 19 hours due to building maintenance. The server will be powered off at 22:00 UTC, Saturday 15 July and should be restarted at 17:00 UTC, Sunday 16 July. However, potential problems with the server or networking equipment on restoration of power may prevent access to the site until Monday.

Update: 17:30 UTC, Sunday 16 July
The power is on, the electronic locks let me in, network connectivity, DHCP and DNS all work, and the coffee machine is up and running. So the Computer Lab’s critical infrastructure is in operation, and LBT is back online.

Update: Tuesday 25 July
There will be another downtime for the Light Blue Touchpaper server on Wednesday 26 July, 7:00–10:00 UTC, due to work on our electricity supply.

Protecting software distribution with a cryptographic build process

At the rump session of PET 2006 I presented a simple idea on how to defend against targeted attacks on software distribution. There were some misunderstandings after my 5-minute description, so I thought it would help to put the idea down in writing, and I also hope to attract more discussion and a larger audience.

Consider a security-critical open source application; here I will use the example of Tor. The source code is signed with the developer’s private key, and users have the ability to verify the signature and build the application with a trustworthy compiler. I will also assume that if a backdoor is introduced in a deployed version, someone will notice, following from Linus’s law – “given enough eyeballs, all bugs are shallow”. These assumptions are debatable; for example, the threat of compiler backdoors has been known for some time, and subtle security vulnerabilities are hard to find. However, a backdoor in the Linux kernel was discovered, and the anonymous reporter of a flaw in Tor’s Diffie-Hellman implementation probably found it through examining the source, so I think my assumptions are at least partially valid.
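The signature check itself is routine. Here is a minimal sketch of what a user might run (the file names are hypothetical, not Tor’s actual release artefacts, and I assume the developer’s public key is already in the local keyring):

```python
import subprocess

# Sketch: verify a developer-signed source package with GnuPG.
# The file names are illustrative; the developer's public key must
# already be present in the local keyring for this to succeed.
result = subprocess.run(
    ["gpg", "--verify", "tor-source.tar.gz.asc", "tor-source.tar.gz"]
)
if result.returncode == 0:
    print("signature OK - safe to build")
else:
    print("signature check FAILED - do not build")
```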

The developer’s signature protects against an attacker mounting a man-in-the-middle attack and modifying a particular user’s download. If the developer’s key (or the developer) is compromised then a backdoor could be inserted, but from the above assumptions, if this version is widely distributed, someone will discover the flaw and raise the alarm. However, there is no mechanism that protects against an attacker with access to the developer’s key singling out a user and adding a backdoor to only the version they download. Even if that user is diligent, the signature will check out fine. As the backdoor is only present in one version, the “many eyeballs” do not help. To defend against this attack, a user needs to find out whether the version they download is the same as the one other people receive and have the opportunity to verify.

My proposal is that the application build process should first calculate the hash of the source code, embed it in the binary and make it remotely accessible. Tor already has a mechanism for the last step, because each server publishes a directory descriptor which could include this hash. Multiple directory servers collect these and allow them to be downloaded by a web browser. Then when a user downloads the Tor source code, they can use the operating system’s hashing utility to check that the package they have matches a commonly deployed one.
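The build-side step is simple; a minimal sketch follows (the file name, the choice of SHA-1 and the output format are my assumptions, not Tor’s actual build machinery):

```python
import hashlib

# Sketch: hash the source package so the digest can be embedded in the
# binary at build time and later published in each server's directory
# descriptor. File name and algorithm are illustrative assumptions.
def source_digest(path, algorithm="sha1"):
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    print("source-hash", source_digest("tor-source.tar.gz"))
```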

If a particular version claims to have been deployed for some time, but no server displays a matching hash, then the user knows that there is a problem. The verification must be performed manually for now, but an operating system provider could produce a trusted tool for automating this. Note that server operators need to perform no extra work (the build process is automated) and only users who believe they may be targeted need perform the extra verification steps.
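Such a tool might do little more than the following sketch; the quorum rule and the hard-coded digests are my assumptions, and a real tool would fetch and parse directory descriptors rather than take a list as input:

```python
# Sketch of the user-side check: accept the downloaded source only if
# its digest matches what a quorum of deployed servers advertise.
# The digests below are placeholders, not real published values.
def widely_deployed(local_digest, published_digests, quorum=3):
    return sum(d == local_digest for d in published_digests) >= quorum

LOCAL = "deadbeefdeadbeefdeadbeefdeadbeefdeadbeef"  # from source_digest()
published = [
    "deadbeefdeadbeefdeadbeefdeadbeefdeadbeef",  # server A (placeholder)
    "deadbeefdeadbeefdeadbeefdeadbeefdeadbeef",  # server B (placeholder)
    "deadbeefdeadbeefdeadbeefdeadbeefdeadbeef",  # server C (placeholder)
]
print(widely_deployed(LOCAL, published))  # True: a quorum matches
```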

This scheme might seem similar to the remote-attestation feature of Trusted Computing. There, computers are fitted with special hardware, a Trusted Platform Module (TPM), which can produce a hard-to-forge proof of the software currently running. Because it is implemented in tamper-resistant hardware, even the owner of the computer cannot produce a fake statement without breaking the TPM’s defences. This feature is needed for applications including DRM, but comes with added risks.

The use of TPMs to protect anonymity networks has been suggested, but the important difference between my proposal and TPM remote-attestation is that I assume most servers are honest, so they will not lie about the software they are running. They have no incentive to do so unless they want to harm the anonymity of the users, and if enough servers are malicious then there is no need to modify the client that users are running, as the network is broken already. So there is no need for special hardware to implement my proposal, although if it is present, it could be used.

I hope this makes my scheme clearer, and I am happy to receive comments and suggestions. I am particularly interested in whether there are any flaws in the design, whether the threat model is reasonable, and whether the effort in deployment and use is worth the increased resistance to attack.