Monthly Archives: October 2006

Yet another insecure banking system

The banks are thinking about introducing a new anti-phishing measure called the ‘chip authentication protocol’. How it works is that each customer gets a device like a pocket calculator into which you put your ‘chip and PIN’ (EMV) card and enter your PIN (the same PIN you use for ATMs); it will then display a one-time authentication code that you’ll use to log on to your electronic banking service, instead of the current password and security question. The code will be computed by the card, which will encrypt a transaction counter using the EMV authentication cryptogram generation key – the same key the EMV protocol uses to generate a MAC on an ATM or store transaction. The use model is that everyone will have a CAP calculator; you’ll usually use your own, but can lend it to a friend if he’s caught short.
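To make the mechanism concrete, here is a minimal sketch of the ‘encrypt a counter, display a short code’ idea. The key, the algorithm (HMAC-SHA256) and the truncation rule are illustrative assumptions, not the real scheme – an actual EMV card computes its cryptogram with a 3DES key held inside the chip.

```python
import hmac, hashlib, struct

def cap_code(card_key: bytes, counter: int, digits: int = 8) -> str:
    """Illustrative one-time code: MAC a transaction counter, then
    truncate the MAC to a short decimal code the customer can type in.
    (Hypothetical stand-in for the card's cryptogram computation.)"""
    msg = struct.pack(">Q", counter)                 # big-endian counter
    mac = hmac.new(card_key, msg, hashlib.sha256).digest()
    code = int.from_bytes(mac[:4], "big") % (10 ** digits)
    return str(code).zfill(digits)
```

The bank, holding the same key and counter, recomputes the code and compares; each counter value yields a fresh code, which is what defeats simple password replay (though not the man-in-the-middle attacks discussed below).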

I can see several problems with this. First, when your wallet gets nicked the thief will be able to read your PIN digits from the calculator – they will be the dirty and worn keys. If you use just one bank card, the thief’s chance of guessing your PIN in three tries has just shortened from about 1 in 3000 to about 1 in 10. Second, when you use your card in a Mafia-owned shop (or in a shop whose terminals have been quietly reprogrammed), the bad guys have everything they need to loot your account. Not only that – they can compute a series of CAP codes to give them access in the future, and use your account for wicked purposes such as money laundering. Oh, and once all UK banks (not just Coutts) use one-time passwords, the phishermen will just rewrite their scripts to do real-time man-in-the-middle attacks.
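The worn-keys arithmetic is easy to check. If the four key-caps betray the four digits of the PIN (assumed distinct here; the digits themselves are made up), only their order is unknown:

```python
from itertools import permutations

# Four known, distinct digits leave only 4! = 24 possible orderings;
# three ATM tries then succeed with probability 3/24, i.e. roughly
# 1 in 10, versus 3/10000 (about 1 in 3000) for a wholly unknown PIN.
candidates = set(permutations("1379"))   # hypothetical worn digits
odds_known = 3 / len(candidates)
odds_unknown = 3 / 10_000
```

If two of the worn digits happen to repeat in the PIN the candidate set shrinks further, so 1 in 10 is if anything conservative.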

I suspect the idea of trying to have a uniform UK solution to the phishing problem may be misguided. Bankers are herd animals by nature, but herding is a maladaptive response to phishing and other automated attacks. It might be better to go to the other extreme, and have a different interface for each customer. Life would be harder for the phishermen, for example, if I never got an email from the NatWest but only ever from Bernie Smith my ‘relationship banker’ – and if I were clearly instructed that if anyone other than Bernie ever emailed me from the NatWest then it was a scam. But I don’t expect that the banks will start to act rationally on security until the liability issues get fixed.

How to hack your GP's computer system

It’s easy – you just send them a letter on what appears to be Department of Health notepaper telling them to go to a URL, download a program, load it on their practice system, and run it. The program does something with the database, extracts some information and sends it back to whoever wrote it.

I have written to one of the medical magazines explaining why this is not a good way to do things. Doctors would never dream of injecting some random potion they received through the post into their patients – they’d insist on peer review, licensing, and a trustworthy supply chain. So who reviewed the specification of this software? Who evaluated the implementation? Who accepts liability if it corrupts the patient database, leading to a fatal accident?

Were it not for the Computer Misuse Act, I would email 100 practices at random with a version of the above letter, telling them to run my software – which would simply report back who ran it. From talking to a handful of doctors I reckon most of them would fall for it.
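To illustrate just how little such a ‘survey’ program would need to do, here is a sketch of the harmless who-ran-me beacon described above. The survey URL is made up; the real experiment would simply fetch the resulting URL, and nothing in it touches patient data.

```python
import os, socket
from urllib.parse import urlencode

def report_back(survey_url: str = "https://example.org/survey") -> str:
    """Build the 'who ran me' report: just the machine and user name,
    appended to a (hypothetical) collection URL as a query string."""
    query = urlencode({"host": socket.gethostname(),
                       "user": os.getenv("USER", "unknown")})
    return f"{survey_url}?{query}"   # fetching this completes the report
```

A real malicious payload would of course query the practice database instead, which is precisely the point: the delivery mechanism, not the payload, is the hard part, and the delivery mechanism here is a sheet of convincing notepaper.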

No doubt the bad guys will start doing this sort of thing. Eventually doctors, lawyers and everyone else will learn the simple lesson ‘don’t install software’. Until then, this will be the smart way to help yourself to the juicy, meaty bits of a target organisation’s data. So what will we call it? Philleting?

Mainstreaming eCrime

Back in February I wrote about how the establishment of the Serious Organised Crime Agency (SOCA) was likely to lead to a situation in which “level 2” eCrime could end up failing to be investigated. “Level 1” crime is “local” to a single police force, “level 3” crime is “serious” or “organised” and requires tackling at a national or international level — and “level 2” crime is what’s in-between: occurring across the borders of local police forces, but not serious or organised enough to be SOCA’s problem.

Over the past few weeks I’ve been at a Metropolitan Police “Knowledge Forum” and at a Parliament and Internet Conference. There I’ve learnt about how the police (at ACPO level, not just the Met) are intending to tackle eCrime in the future.

The jargon for the new policy is “mainstreaming” — by which is meant that the emphasis will move away from tackling “eCrime” as something special, and regular officers will deal with it just as “Crime”.

In particular when there are “e” aspects to a normal crime, such as a murder, then this will be dealt with as a matter of course, rather than be treated as something exotic. With the majority of homes containing computers, and with the ubiquity of email, instant messaging and social network sites, this can only be seen as a sensible adaptation to society as it is today. After all, the police don’t automatically call in specialist officers just because the murder victim owns a car.

Although there is a commitment to maintain existing centres of excellence (specialist units with expertise in computer forensics, units that tackle “grooming” by paedophiles, and undercover police who deal with obscene publications), I am less sanguine about the impact of this policy on crimes that rely upon the Internet to be committed. These crimes can be highly automated, operated from a distance, hard to track down and obtain evidence about, and lucrative even if only small amounts are stolen from each victim.

I believe there is still some doubt that Internet-based crimes will be investigated, not just for lack of resources (always a problem, as anyone who has been burgled or had a car window smashed will know), but because it’s no-one’s task and appears on no-one’s checklist for meeting Government targets (there is still no central counting of eCrime occurring).

Mainstreaming comes with some sensible adjuncts: police forces will be encouraged to pool intelligence about eCrime (to build up a picture of the full impact of the crime and to link investigators together), and some sort of national coordination centre is planned to partially replace the NHTCU. However, while this may mean that an investigation can be mounted into an eBay fraudster in Kent who rips off people in Lancashire and Dorset, I am not sure that the same will be true if the victims are in Louisiana and Delaware — or if the fraudster lives in a suburb of Bucharest.

The details of what “mainstreaming” will mean for eCrime are still being worked out, so it is too early to be sure of its exact effect. It sounds like an improvement on the current arrangements, but I’m pessimistic about it really getting to grips with many of the bad things that continue to happen on the Internet.

New website on NHS IT problems

At http://nhs-it.info, colleagues and I have collected material on the NHS National Programme for IT, which shows all the classic symptoms of a large project failure in the making. If it goes belly-up, it could be the largest IT disaster ever, and could have grave consequences for healthcare in Britain. With 22 other computer science professors, I wrote to the Health Select Committee urging them to review the project. The Government is dragging its feet, and things seem to be going from bad to worse.

Kish's "totally secure" system is insecure

Recently, Kish proposed a “totally secure communication system” that uses only resistors, wires and Johnson noise. His paper—“Totally Secure Classical Communication Utilizing Johnson (-like) Noise and Kirchoff’s Law”—was published in Physics Letters A (March 2006).

The paper was featured in Science magazine (Vol. 309), reported in news articles (Wired News, Physorg.com) and discussed on several weblogs (Schneier on Security, Slashdot). The initial sensation was that quantum communication could now be replaced by a much cheaper means. But not quite so …

This paper—to appear in IEE Information Security—shows that the design of Kish’s system is fundamentally flawed. The theoretical model, which underpins Kish’s system, implicitly assumes thermal equilibrium throughout the communication channel. This assumption, however, is invalid in real communication systems.

Kish used a single symbol ‘T’ to denote the channel temperature throughout his analysis. This, however, disregards the fact that any real communication system has to span a distance and endure different conditions. A slight temperature difference between the two communicating ends will lead to security failure—allowing an eavesdropper to uncover the secret bits easily (more details are in the paper).
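The effect of the equal-temperature assumption can be checked numerically from the Johnson-noise formula (a noise power spectral density of 4kTR per resistor) and an ordinary voltage divider. The resistor values and the 1 K mismatch below are illustrative, not taken from Kish’s paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def wire_noise_psd(r_a, t_a, r_b, t_b):
    """Voltage-noise power spectral density (V^2/Hz) on the wire joining
    resistor r_a at temperature t_a to resistor r_b at t_b: each end's
    Johnson noise (4kTR) reaches the wire attenuated by the voltage
    divider formed with the other resistor."""
    return 4 * K_B * r_a * r_b * (t_a * r_b + t_b * r_a) / (r_a + r_b) ** 2

R0, R1 = 1e3, 10e3   # the two 'bit value' resistors (illustrative)

# In thermal equilibrium, swapping the resistors changes nothing: Eve
# cannot tell (R0 at Alice, R1 at Bob) from (R1 at Alice, R0 at Bob).
in_equilibrium = math.isclose(wire_noise_psd(R0, 300.0, R1, 300.0),
                              wire_noise_psd(R1, 300.0, R0, 300.0))

# A mere 1 K difference between the ends breaks that symmetry, so the
# two 'secure' mixed states now produce different spectra on the wire.
s_01 = wire_noise_psd(R0, 300.0, R1, 301.0)
s_10 = wire_noise_psd(R1, 300.0, R0, 301.0)
```

With equal temperatures the expression collapses to 4kT times the parallel resistance, which is symmetric in the two resistors; with unequal temperatures the hotter end’s resistor is over-weighted, and Eve need only measure which of the two spectra she sees.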

As a countermeasure, it might be possible to keep the temperature difference between the two ends as small as possible—for example, by using external thermal noise generators. However, this gives no security guarantee. Instead of requiring a fast computer, an eavesdropper now merely needs a voltage meter that is more accurate than the equipment used by Alice and Bob.

In addition, the transmission line must maintain the same temperature (and noise bandwidth) as the two ends to ensure “thermal equilibrium”, which is clearly impossible. Kish avoids this problem by assuming zero resistance on the transmission line in his paper. Since the problem of finite resistance on the transmission line has been reported before, I will not discuss it further here.

To sum up, the mistake in Kish’s paper is that the author wrongly grafted assumptions from one subject onto another. In circuit analysis, it is common practice to assume the same room temperature everywhere and to ignore wire resistance in order to simplify the calculation; the resulting discrepancy is usually well within the tolerable range. The design of a secure communication system is very different, however, as a tiny discrepancy can severely compromise security. Basing security upon invalid assumptions is a fundamental flaw in the design of Kish’s system.

Boom! Headshot!

It’s about time I confessed to playing online tactical shooters. These are warfare-themed multiplayer games that pit two teams of 16–32 players against each other to capture territory and score points. They can be great fun if you are so inclined, and what’s more, they’re big money these days. A big online shooter like Counter-Strike might sell maybe 2 million copies, and the tactical shooter Battlefield 2 has almost 500,000 accounts on the main roster. The modern blockbuster game grosses several times more than a blockbuster movie!

In this post, I consider a vital problem for game designers — maintaining the quality of experience for the casual gamer — and it turns out this is in part a security problem. People gravitate towards these games because the thrill of competition against real humans (instead of AI) is tangible, but this is a double-edged sword. Put simply, the enemy of the casual gamer is the gross unfairness of online play. But why do these games feel so unfair? Why do you shake the hand of a squash player, or clap your footie opponent on the back after losing a good game, but walk away from a loss in a tactical shooter angry, frustrated and maybe even abusive? Is it just the nature of the clientele?

This post introduces a draft paper I’ve written called “Boom! Headshot! (Building Neo-Tactics on Network-Level Anomalies in Online Tactical First-Person Shooters)” (named after this movie clip), which may hold some answers as to why there is so much perceived unfairness. If the paper is a little dense, try the accompanying presentation instead.

Do you want to know more? If so, read on…