How to hack your GP's computer system

It’s easy – you just send them a letter on what appears to be Department of Health notepaper telling them to go to a URL, download a program, load it on their practice system, and run it. The program does something with the database, extracts some information and sends it back to whoever wrote it.

I have written to one of the medical magazines explaining why this is not a good way to do things. Doctors would never dream of injecting some random potion they received through the post into their patients – they’d insist on peer review, licensing, and a trustworthy supply chain. So who reviewed the specification of this software? Who evaluated the implementation? Who accepts liability if it corrupts the patient database, leading to a fatal accident?

Were it not for the Computer Misuse Act, I would email 100 practices at random with a version of the above letter, telling them to run my software – which would simply report back who ran it. From talking to a handful of doctors I reckon most of them would fall for it.

No doubt the bad guys will start doing this sort of thing. Eventually doctors, lawyers and everyone else will learn the simple lesson ‘don’t install software’. Until then, this will be the smart way to help yourself to the juicy, meaty bits of a target organisation’s data. So what will we call it? Philleting?

Mainstreaming eCrime

Back in February I wrote about how the establishment of the Serious Organised Crime Agency (SOCA) was likely to lead to a situation in which “level 2” eCrime could end up failing to be investigated. “Level 1” crime is “local” to a single police force, “level 3” crime is “serious” or “organised” and requires tackling at a national or international level — and “level 2” crime is what’s in between: occurring across the borders of local police forces, but not serious or organised enough to be SOCA’s problem.

Over the past few weeks I’ve been at a Metropolitan Police “Knowledge Forum” and at a Parliament and Internet Conference. There I’ve learnt about how the police (at ACPO level, not just the Met) are intending to tackle eCrime in the future.

The jargon for the new policy is “mainstreaming” — by which is meant that the emphasis will move away from tackling “eCrime” as something special, and regular officers will deal with it just as “Crime”.

In particular when there are “e” aspects to a normal crime, such as a murder, then this will be dealt with as a matter of course, rather than be treated as something exotic. With the majority of homes containing computers, and with the ubiquity of email, instant messaging and social network sites, this can only be seen as a sensible adaptation to society as it is today. After all, the police don’t automatically call in specialist officers just because the murder victim owns a car.

Although there is a commitment to maintain existing centres of excellence, specialist units with expertise in computer forensics, units that tackle “grooming” by paedophiles, and undercover police who deal with obscene publications, I am less sanguine about the impact of this policy when it comes to crimes that rely upon the Internet to be committed. These types of crime can be highly automated, operated from a distance, hard to track down and obtain evidence about, and can be lucrative even if only small amounts are stolen from each victim.

I believe there is still some doubt that Internet-based crimes will be investigated, not just from lack of resources (always a problem, as anyone who has been burgled or had a car window smashed will know), but because it’s no-one’s task and appears on no-one’s checklist for meeting Government targets (there’s still no central counting of eCrime occurring).

Mainstreaming is proposed to have some sensible adjuncts, in that police forces will be encouraged to pool intelligence about eCrime (to build up a picture of the full impact of the crime and to link investigators together), and some sort of national coordination centre is planned to partially replace the NHTCU. However, although this may sometimes mean that an investigation can be mounted into an eBay fraudster in Kent who rips off people in Lancashire and Dorset, I am not sure that the same will be true if the victims are in Louisiana and Delaware, or if the fraudster lives in a suburb of Bucharest.

The details of what “mainstreaming” will entail are still being worked out, so it’s not yet possible to be sure exactly what it will mean for eCrime. It sounds like an improvement on the current arrangements, but I’m pessimistic about it really getting to grips with many of the bad things that continue to happen on the Internet.

New website on NHS IT problems

At http://nhs-it.info, colleagues and I have collected material on the NHS National Programme for IT, which shows all the classic symptoms of a large project failure in the making. If it goes belly-up, it could be the largest IT disaster ever, and could have grave consequences for healthcare in Britain. With 22 other computer science professors, I wrote to the Health Select Committee urging them to review the project. The Government is dragging its feet, and things seem to be going from bad to worse.

Kish's "totally secure" system is insecure

Recently, Kish proposed a “totally secure communication system” that uses only resistors, wires and Johnson noise. His paper—“Totally Secure Classical Communication Utilizing Johnson (-like) Noise and Kirchoff’s Law”—was published in Physics Letters (March 2006).

The paper has been featured in Science magazine (Vol. 309), reported in news articles (Wired News, Physorg.com) and discussed on several weblogs (Schneier on Security, Slashdot). The initial sensation was that quantum communication could now be replaced by a much cheaper means. But not quite so …

This paper—to appear in IEE Information Security—shows that the design of Kish’s system is fundamentally flawed. The theoretical model, which underpins Kish’s system, implicitly assumes thermal equilibrium throughout the communication channel. This assumption, however, is invalid in real communication systems.

Kish used a single symbol ‘T’ to denote the channel temperature throughout his analysis. This, however, disregards the fact that any real communication system has to span a distance and endure different conditions. A slight temperature difference between the two communicating ends will lead to security failure—allowing an eavesdropper to uncover the secret bits easily (more details are in the paper).
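To see how little margin there is, here is a rough numerical sketch of the leakage. It is my own simplified illustration, not the analysis in the paper: Eve simply measures the mean-square Johnson-noise voltage on the wire, the wire is taken as lossless exactly as in Kish’s model, and the resistor values, bandwidth and temperatures below are arbitrary.

```python
# Toy illustration (not the analysis from the paper): the mean-square
# Johnson-noise voltage Eve sees on the wire, for the two "secure" bit states.
k = 1.380649e-23        # Boltzmann constant, J/K
B = 1e3                 # measurement bandwidth, Hz (arbitrary)
R0, R1 = 1e3, 100e3     # the two resistor values used to signal a bit (arbitrary)

def wire_noise_power(Ra, Ta, Rb, Tb):
    """Mean-square wire voltage with resistor Ra at temperature Ta at Alice's
    end and Rb at Tb at Bob's end, assuming a lossless wire."""
    return 4 * k * B * (Ta * Ra * Rb**2 + Tb * Rb * Ra**2) / (Ra + Rb) ** 2

for Ta, Tb in [(300.0, 300.0), (300.0, 301.0)]:       # equal, then 1 K apart
    v01 = wire_noise_power(R0, Ta, R1, Tb)            # Alice holds R0, Bob holds R1
    v10 = wire_noise_power(R1, Ta, R0, Tb)            # Alice holds R1, Bob holds R0
    print(f"dT = {Tb - Ta:.0f} K: relative gap between the two states "
          f"{abs(v10 - v01) / v01:.1e}")
# At equal temperatures the two states give identical statistics. With even a
# 1 K mismatch there is a systematic gap that Eve can average out of the noise,
# telling her which end holds which resistor and hence the secret bit.
```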

As a countermeasure, it might be possible to adjust the temperature difference at the two ends to be as small as possible—for example, by using external thermal noise generators. However, this gives no security guarantee. Instead of requiring a fast computer, an eavesdropper now merely needs a voltage meter that is more accurate than the equipment used by Alice and Bob.

In addition, the transmission line must maintain the same temperature (and noise bandwidth) as the two ends to ensure “thermal equilibrium”, which is clearly impossible. Kish avoids this problem by assuming zero resistance on the transmission line in his paper. Since the problem with the finite resistance on the transmission line had been reported before, I will not discuss it further here.

To sum up, the mistake in Kish’s paper is that the author wrongly grafted assumptions from one subject onto another. In circuit analysis, it is common practice to assume the same room temperature throughout and to ignore wire resistance in order to simplify the calculation; the resultant discrepancy is usually well within the tolerable range. However, the design of a secure communication system is very different, as a tiny discrepancy can severely compromise its security. Basing security upon invalid assumptions is a fundamental flaw in the design of Kish’s system.

Boom! Headshot!

It’s about time I confessed to playing online tactical shooters. These are warfare-themed multiplayer games that pit two teams of 16 to 32 players against each other, to capture territory and score points. They can be great fun if you are so inclined, and what’s more, they’re big money these days. A big online shooter like Counterstrike might sell maybe 2 million copies, and the tactical shooter Battlefield 2 has almost 500,000 accounts on the main roster. The modern blockbuster game grosses several times more than a blockbuster movie!

In this post, I consider a vital problem for game designers — maintaining the quality of experience for the casual gamer — and it turns out this is partly a security problem. People gravitate towards these games because the thrill of competition against real humans (instead of AI) is tangible, but this is a double-edged sword. Put simply, the enemy of the casual gamer is the gross unfairness in online play. But why do these games feel so unfair? Why do you shake the hand of a squash player, or clap your footie opponent on the back after losing a good game, but walk away from a loss in a tactical shooter angry, frustrated and maybe even abusive? Is it just the nature of the clientele?

This post introduces a draft paper I’ve written called “Boom! Headshot! (Building Neo-Tactics on Network-Level Anomalies in Online Tactical First-Person Shooters)” (named after this movie clip), which may hold some of the answers to understanding why there is so much perceived unfairness. If the paper is a little dense, try the accompanying presentation instead.

Do you want to know more? If so, read on…

Closing in on suspicious transactions

After almost 3 years of problem-free banking in the UK I recently received the following letter from HSBC’s “Accounts Review Team”. It advised me that the HSBC group no longer wished to have me as a customer and that I had 30 days to move my banking to an establishment in no way connected to HSBC, before they would close my account. Confident that I had not indulged in any illegal activity recently, and concerned about their reasons for taking such action, I tried several times to phone the number given in the letter, but each time reached only a “we are busy, please try again” recording. Visiting my home branch was not much more helpful, as they claimed that the information had not been shared with them. I was advised to make a written complaint, and was told that the branch had already referred the matter, as a number of customers had come in with similar letters.

After two written complaints and a phone call to customer services, a member of the “Team” finally contacted me. She enquired about a single international deposit into my account, which I explained to be my study grant for the coming year. Upon this explanation I was told that the bank would not close my account after all, and I was given a vague explanation about not expecting students to receive large deposits. I found this strange, since it had not been a problem in previous years, and stranger still since that deposit had cleared into my account two days after the letter was sent. In terms of recent “suspicious” transactions, this left only two international deposits: one from my parents overseas and one from my savings, neither of which could be classified as large. I’m no expert on behavioural analysis and fraud detection within banking systems, but study grants and family support are hardly unexpected for students. Moreover, rather than this being an isolated incident, it seems that HSBC’s “account review” affected a number of people within our student community, some of whom might choose not to question the decision and may be left without bank accounts. This should raise questions about the effectiveness of their fraud detection system, or at least about a flawed behaviour model for a specific demographic.
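Purely by way of illustration (this is a toy of my own; I have no idea what HSBC actually runs), a crude outlier rule fitted to “typical” account behaviour will flag exactly the once-a-year deposits that students legitimately receive:

```python
# Toy anomaly detector: flag any deposit several standard deviations above
# the account's history. An annual study grant lands squarely in the
# "suspicious" bucket, even though it is perfectly legitimate.
from statistics import mean, stdev

monthly_deposits = [650, 650, 700, 650, 680, 650, 700, 650, 650, 700, 650, 650]
study_grant = 12_000   # one large deposit a year (illustrative figures)

mu, sigma = mean(monthly_deposits), stdev(monthly_deposits)
z = (study_grant - mu) / sigma
if z > 4:   # arbitrary threshold
    print(f"FLAGGED: deposit of {study_grant} is {z:.0f} sigma above normal")
# A model fitted to "typical" behaviour, with no notion that students receive
# grants once a year, will keep firing on this demographic.
```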

My account is now restored, but I have still had no satisfactory explanation as to why the decision was made to close my account, nor do I know how this sorry affair will affect my future banking and credit rating. Would an attempt to transfer my account have caused HSBC’s negative opinion of me to spread to other institutions? A security mechanism that yields false positives or recommends a disproportionate reaction, e.g. closing an account based on a single unexpected transaction, should be seen as somewhat flawed. The end result is that the system runs on a guilty until proven innocent premise, with the onus for correcting marginal calls placed on the customer. Ultimately the bank will claim that these mechanisms are designed to protect the customer, but in the end randomly threatening to close my account does not make me feel any safer.

Random isn't always useful

It’s common to think of random numbers as an essential building block in security systems. Cryptographic session keys are chosen at random, then shared with the remote party. Security protocols use “nonces” for “freshness”. Randomness can also slow down information-gathering attacks, although there it is seldom a panacea. However, as George Danezis and I recently explained in “Route Fingerprinting in Anonymous Communications”, randomness can lead to uniqueness — exactly the property you don’t want in an anonymity system.
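As a rough illustration of what we mean (my own toy numbers, not the model in the paper): suppose each client of a peer-to-peer anonymity network only ever discovers a random subset of the nodes, and builds its routes from that subset. The random subset itself then becomes a fingerprint.

```python
# Sketch: random node discovery makes each client's view of the network
# almost certainly unique, and observed routes betray that view.
import math, random

N = 10_000          # nodes in the network (illustrative)
k = 100             # nodes each client happens to know
clients = 50_000    # client population

# Entropy of the random subset: hundreds of bits, far more than is needed
# to single out any plausible user population.
print("fingerprint bits:", round(math.log2(math.comb(N, k))))

# Observing even one 3-hop route built by a target narrows things right down:
# another client is only "consistent" with it if all three nodes lie in that
# client's own known subset, which happens with probability ~ (k/N)**3.
random.seed(1)
target = set(random.sample(range(N), k))
route_set = set(random.sample(sorted(target), 3))
others = sum(1 for _ in range(clients)
             if route_set <= set(random.sample(range(N), k)))
print("other clients consistent with the observed route:", others)   # usually 0
```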

Which services should remain offline?

Yesterday I gave a talk on confidentiality at the EMIS annual conference. I gained yet more insights into Britain’s disaster-prone health computerisation project. Why, for example, will this cost eleven figures, when EMIS writes the software used by 60% of England’s GPs to manage their practices on an annual development budget of only £25m?

On the consent front, it turns out that patients who exercise even the mildest form of opt-out from the national database (having their addresses stop-noted, which is the equivalent of going ex-directory — designed for celebs and people in witness protection) will not be able to use many of the swish new features we’re promised, such as automatic repeat prescriptions. There are concerns that providing a degraded health service to people who tick the privacy box might undermine the validity of consent to information sharing.

On the confidentiality front, people are starting to wrestle with the implications of allowing patients online access to their records. Vulnerable patients — for example, under-age girls who have had pregnancy terminations without telling their parents — could be at risk if they can access sensitive data online. They may be coerced into accessing it, or their passwords may become known to friends and family. So there’s talk of a two-tier online record — in effect introducing multilevel security into record access. Patients would be asked whether they wanted some, all, or none of their records to be available to them online. I don’t think the Department of Health understands the difficulties of multilevel security. I can’t help wondering whether online patient access is needed at all. Very few patients ever exercise their right to view and get a copy of their records; making all records available online seems more and more like a political gimmick to get people to accept the agenda of central data collection.
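For what it’s worth, here is how I read the “some, all or none” proposal, as a minimal sketch of my own rather than any actual NHS design; it also shows how much of the hard multilevel-security problem such a scheme leaves unsolved.

```python
# Sketch: each record entry carries a sensitivity label, and the patient's
# stated preference filters what the online view shows.
from dataclasses import dataclass

@dataclass
class Entry:
    text: str
    sensitive: bool

record = [
    Entry("Repeat prescription: salbutamol inhaler", sensitive=False),
    Entry("Termination of pregnancy", sensitive=True),
]

def online_view(record, preference):
    """preference is 'all', 'some' (non-sensitive entries only) or 'none'."""
    if preference == "none":
        return []
    if preference == "some":
        return [e for e in record if not e.sensitive]
    return list(record)

print([e.text for e in online_view(record, "some")])
# The hard part is everything this leaves out: who sets the labels, whether
# the very existence of a hidden entry is revealed, and what a coerced
# patient can be pressed into changing their preference to.
```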

We don’t seem to have good ways of deciding what services should be kept offline. There’s been much debate about elections, and here’s an interesting case from healthcare. What else will come up, and are there any general principles we’re missing?

How many Security Officers? (reloaded)

Some years ago I wrote a subsection in my thesis (sec 8.4.3, p. 154), entitled “How Many Security Officers are Best?”, where I reviewed the various operating procedures I’d seen for Hardware Security Modules, and pondered why some people chose to use two separate parties to oversee a critical action and some chose to use three. Occasionally a single person is even deliberately entrusted with great power and responsibility, because there can then be no question where to lay the blame if something goes wrong. So, “one, two, or three?”, I said to myself.

In the end I plumped for three… with some logic excerpted from my thesis below:

But three security officers do tighten security: a corrupt officer will be outnumbered, and deceiving two people in different locations simultaneously is next to impossible. The politics of negotiating a three-way collusion is also much harder: the two bad officers will have to agree on their perceptions of the third before approaching him. Forging agreement on character judgements when the stakes are high is very difficult. So while it may be unrealistic to have three people sitting in on a long-haul reconfiguration of the system, where the officers’ duties are short and clearly defined, three keyholders provide that extra protection.

Some time later, I raised the subject with Ross, and he berated me for my over-complicated logic. His argument ran along these lines: “The real threat for Security Officers is not that they blackmail, bribe or coerce one another, it’s that they help! Here, Bob, you go home early mate; I know you’ve got to pack for your business trip, and I’ll finish off installing the software on the key loading PC. That sort of thing. Having three key custodians makes ‘helping’ and such friendly tactics much harder – the bent officer must co-ordinate on two fronts.”

But recently my new job has exposed me to a number of real dual-control and split-knowledge systems. I was looking over some source code for a key-loading HSM command, in fact, and I spotted code that took a byte array of key material and split it into three components, each with odd parity. It generates two fresh, totally random components with odd parity, and then XORs both of them onto the key to form the third component. Hmmm, I thought, so the third component would carry the parity information of the original key — dangerous, a leakage of information preferentially to the third key component holder! But wrong… because the parity of the original key is known anyway in the case of a DES key: it’s always odd.
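Here is the splitting scheme as I understood it, reconstructed as a short sketch of my own rather than the vendor’s actual code (in DES the parity bit is the least significant bit of each key byte):

```python
# Sketch of the three-way split: two fresh random odd-parity components,
# with the third formed by XORing both onto the original key.
import os

def force_odd_parity(b: int) -> int:
    """Flip the least significant (parity) bit if needed so the byte is odd."""
    return b if bin(b).count("1") % 2 == 1 else b ^ 0x01

def has_odd_parity(data: bytes) -> bool:
    return all(bin(b).count("1") % 2 == 1 for b in data)

def split_key(key: bytes):
    c1 = bytes(force_odd_parity(b) for b in os.urandom(len(key)))
    c2 = bytes(force_odd_parity(b) for b in os.urandom(len(key)))
    c3 = bytes(k ^ a ^ b for k, a, b in zip(key, c1, c2))
    return c1, c2, c3

key = bytes(force_odd_parity(b) for b in os.urandom(8))   # toy single-length DES key
c1, c2, c3 = split_key(key)
assert bytes(a ^ b ^ c for a, b, c in zip(c1, c2, c3)) == key
# parity(x ^ y) = parity(x) XOR parity(y), and the key and both random
# components are odd, so the third component comes out odd too: no leak.
assert has_odd_parity(c3)
```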

I chatted to our chief technical bod about this, and he casually dropped a bombshell that shed new light on why three is best: an argument so simple and elegant that it must be true, yet it is faintly depressing to now believe that no-one agonised over the human psychology of the security-officer numbers issue as I did. When keys are exchanged, a Key Check Value (KCV) is calculated for each component by encrypting a string of binary zeroes with the component value. Old-fashioned DES implementations only accepted keys with odd parity, so to calculate KCVs on these components, each component must have odd parity, as must the final key itself. For the final key to retain odd parity when assembled from odd-parity components, there must be an odd number of components (the parity of keys could be adjusted, but this takes more lines of code, and is less elegant than just tweaking a counter in the ‘for’ loop). Now the smallest odd integer greater than one is three. This is why the most valuable keys are exchanged in three components, and not in two!
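The parity argument itself is easy to check: XORing odd-parity bytes preserves odd parity only when the number of components is odd. A quick experiment of my own:

```python
# Combine n random odd-parity bytes by XOR and check the parity of the result.
from functools import reduce
import os

def force_odd(b: int) -> int:
    return b if bin(b).count("1") % 2 else b ^ 0x01

def is_odd(b: int) -> bool:
    return bin(b).count("1") % 2 == 1

for n in (2, 3, 4, 5):
    results = [reduce(lambda x, y: x ^ y, (force_odd(b) for b in os.urandom(n)))
               for _ in range(1000)]
    verdict = ("always odd" if all(map(is_odd, results))
               else "always even" if not any(map(is_odd, results))
               else "mixed")
    print(n, "components ->", verdict)
# Prints: 2 -> always even, 3 -> always odd, 4 -> always even, 5 -> always odd.
# So the smallest usable number of components greater than one is indeed three.
```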

So the moral of the story for me is to apply Occam’s Razor more thoroughly when I try to deduce the logic behind the status quo, but I still think there are some interesting questions raised about how we share responsibility for critical actions. There still seems to me to be a very marked, qualitative difference in the dynamics of how three people interact versus two, whatever the situation: be it security officers entering keys, pilots flying an aircraft, or even a ménage à trois! Just like the magnitude of the difference between 2D and 3D space.

If one, two and three are all magical numbers, qualitatively different, are there any other qualitative boundaries higher up the cardinal numbers, and if so, what are they? In a security-critical process such as an election, can ten people adjudicate effectively in a way that thirty could not? Is there underlying logic, or just mysticism, behind the jury of twelve? Or, to take the jury example and my own tendency to over-complicate, was it simply that in the first proper courtroom, built a few hundred years ago, there happened to be space for only twelve men on the benches on the right-hand side?