Monthly Archives: November 2006

Developments on health privacy…

The Register reports a leaked document from the NHS which concludes that sensitive patient records would probably be safer held locally, rather than stored on a national database as the Government proposes.

This follows a poll last week in which a majority of GPs said they would not upload their patients’ records to the national database. Together the poll and the leak are a double whammy for the misguided and wasteful project to centralise all computer systems in the NHS.

On Wednesday we are launching a campaign to persuade patients to opt out too. The inaugural meeting will be from 7 to 9 PM at Imperial College, London. For background, see recent posts on opting out and on kids’ databases.

Kids’ databases

The Information Commissioner has just published a report we wrote for him on the UK Government’s plans to link up most of the public-sector databases that contain information on children. We’re concerned that aggregating this data will be both unsafe and illegal. Our report has got coverage in the Guardian, the Telegraph (with a leader), the Daily Mail, the BBC and the Evening Standard.

A backwards way of dealing with image spam

There is a great deal more email spam in your inboxes this Autumn (as noted, for example, here, here and here!). That’s partly because a very great deal more spam is being generated — perhaps twice as much as just a few months ago.

A lot of this junk is “image spam”, where the advertisement is contained within an embedded picture (almost invariably a GIF file). The filtering systems that almost everyone now uses are having significant problems in dealing with these images and so a higher percentage of the spam that arrives at the filters is getting through to your inbox.
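To see why the filters struggle, here is a minimal sketch (in Python, with a hypothetical blocklist and function names of my own invention) of the exact-match approach many filters rely on: pull each GIF attachment out of the message and compare its hash against a list of known spam images. Because the spammers randomise every GIF slightly, each copy hashes differently and sails straight past this kind of check.

    import email
    import hashlib
    from email import policy

    # Hypothetical blocklist of SHA-256 digests of known spam images (assumption:
    # in a real filter this would be fed from spam reports or honeypot traps).
    KNOWN_SPAM_IMAGE_HASHES = set()

    def gif_hashes(raw_message: bytes):
        """Yield a SHA-256 digest for every GIF attached to the message."""
        msg = email.message_from_bytes(raw_message, policy=policy.default)
        for part in msg.walk():
            if part.get_content_type() == "image/gif":
                payload = part.get_payload(decode=True)  # base64-decoded image bytes
                if payload:
                    yield hashlib.sha256(payload).hexdigest()

    def looks_like_known_image_spam(raw_message: bytes) -> bool:
        """Exact-match filtering: one randomised byte per GIF is enough to defeat it."""
        return any(h in KNOWN_SPAM_IMAGE_HASHES for h in gif_hashes(raw_message))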

So higher volumes and weaker filtering are combining to cause a significant problem for us all 🙁

But I have an interesting suggestion for filtering the images: it might be a lot simpler to go about it backwards 🙂

So read on!

Continue reading A backwards way of dealing with image spam

The ATM Protection Racket

EMV (or “Chip and PIN” as it’s known in the UK) is changing the fraud landscape, no doubt about it. Counterfeit card fraud at POS is down, card theft is down, card-not-present is up, phishing is up, ATM fraud is up. Fraud migrates, we get the picture. But as EMV reaches maximal deployment in the next five years or so, the banks and other investors in this technology are hoping that the flood will abate to a trickle, and that some holes can be totally plugged.

I’ve been thinking about whether or not EMV is capable of sorting out the ATM fraud problem (also known as “phantom withdrawals”) once and for all. Well, as I wandered around town this afternoon, I snapped some pics at WH Smiths of an ATM in distress, and it reminded me how horribly vulnerable our ATM infrastructure is.

[Image: ATM1]

It’s not just the “look of vulnerability” these machines exude… like the cheap wafer-locks on the housing of the aforementioned ATM (I’m sure there must be a better lock in front of the cash safe itself). It’s that all the security is based around keeping the money and the secrets safe, while very little attention is focussed on keeping the machine alive and operating.

[Image: ATM2]

Read on to find out my master plan…

Continue reading The ATM Protection Racket

Traffic Data Retention and Forensic Imaging

Last week I participated in yet another workshop on traffic data retention, at ICRI, with the added twist that traffic data retention is now in ‘European Law’ and will become actual law in most EU countries very soon. It was a special treat to be talking just after Chief Superintendent Luc Beirens, Head of the Belgian Federal Computer Crime Unit, who tried to sell the idea of retention to a crowd of people from the flagship EU privacy project PRIME.

As usual, Beirens assured us that proper judicial oversight exists and will regulate access to traffic data. Yet a different picture emerged when we got into the details of how cyber-crime investigations are conducted. It turns out that the first thing the police do, to the suspects but also the victims of cyber-crime, is to take a forensic image of their hard disks. This is a sound precaution: booting up the machine to extract evidence may activate malware on a victim’s machine to erase traces, or an alert system on a suspect’s computer.

The obvious question becomes: how does this policy of automatic forensic imaging and analysis of a hard disk interact with traffic data retention? Luc was keen to acknowledge that the investigation procedure would proceed unchanged: an image of a hard disk that may contain retained data would be taken, and forensic tools used on the totality of the hard disk. To be fair, tools that take a forensic image of, or only look at, parts of the disk according to a set security policy do not exist.
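For readers unfamiliar with the procedure, this is roughly what taking a forensic image amounts to, sketched here in Python (the device path /dev/sdb is purely illustrative, and real investigators use dedicated write-blocking hardware and tools rather than a script like this): a bit-for-bit copy of the whole device, plus a hash recorded for the chain of evidence. Nothing in the process knows or cares which sectors hold retained traffic data and which hold everything else.

    import hashlib

    CHUNK = 1024 * 1024  # read the device a megabyte at a time

    def image_disk(device_path: str, image_path: str) -> str:
        """Copy an entire block device to an image file, bit for bit, and return
        a SHA-256 digest that can be recorded for the chain of evidence."""
        digest = hashlib.sha256()
        with open(device_path, "rb") as src, open(image_path, "wb") as dst:
            while True:
                chunk = src.read(CHUNK)
                if not chunk:
                    break
                dst.write(chunk)
                digest.update(chunk)
        return digest.hexdigest()

    # Example (needs root, illustrative path): image_disk("/dev/sdb", "evidence-001.img")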

What does this mean? If you are a victim of cyber-crime, or a company you have given your data to is a victim of cyber-crime, all the data will end up with the police. This will be the case irrespective of judicial oversight, or any other safeguards. You may ask yourself what the chance is that retained data will be kept on a computer that becomes part of an investigation. First, do not underestimate the fact that these machines will end up on-line to serve requests, and will therefore be subject to their fair share of attacks. But most importantly, this case will obviously occur as part of any investigation into the misuse of, unauthorized access to, or attempted access to, the traffic data retention systems themselves!

This standard procedure may also explain why companies are so reluctant to call in the high tech crime units to help them investigate cyber-crime. Their procedures are simply incompatible with any security policy with a confidentiality component. Would you report some of your documents being stolen from your home or business, if this meant the police taking a copy of every single paper in the building?

Shishir wins BCS best student award

Security group member Shishir Nagaraja has won the BCS best PhD student award for his paper The topology of covert conflict. The judges remarked that “the work made an important contribution to traffic analysis in an area that had been previously overlooked; the authors used realistic models with clear results and exciting directions for future research.”

Opting out of the NHS Database

The front page lead in today’s Guardian explains how personal medical data (including details of mental illness, abortions, pregnancy, drug taking, alcohol abuse, fitting of colostomy bags etc etc) are to be uploaded to a central NHS database regardless of patients’ wishes.

The Government claims that especially sensitive data can be put into a “sealed envelope” which would not ordinarily be available… except that NHS staff will be able to “break the seal” under some circumstances; the police and Government agencies will be able to look at the whole record — and besides, this part of the database software doesn’t even exist yet, and so the system will be running without it for some time.
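To make the access-control idea concrete, here is a toy sketch in Python of the policy as described above; the roles and flags are invented for illustration and bear no relation to the actual NHS design, which (as noted) does not yet exist.

    from dataclasses import dataclass, field

    @dataclass
    class Record:
        routine: dict = field(default_factory=dict)  # routinely shared data
        sealed: dict = field(default_factory=dict)   # "sealed envelope" items

    def can_read_sealed(role: str, seal_broken: bool) -> bool:
        """Toy policy, as described above: the seal keeps ordinary viewers out,
        but staff who 'break the seal' and the police or government agencies
        get at the sensitive data anyway."""
        if role in ("police", "government_agency"):
            return True                      # whole record available
        if role == "nhs_staff" and seal_broken:
            return True                      # seal broken "under some circumstances"
        return False

    def view(record: Record, role: str, seal_broken: bool = False) -> dict:
        """Return whatever this viewer is allowed to see."""
        data = dict(record.routine)
        if can_read_sealed(role, seal_broken):
            data.update(record.sealed)
        return data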

The Guardian has more details in the article ‘From cradle to grave, your files available to a cast of thousands’, some comments from doctors and other health professionals in ‘A national database is not essential’, and a leading article, ‘Spine-chilling’.

The Guardian gives details on how to opt out of data sharing in ‘What can patients do?’, using suggestions for a letter from our own Ross Anderson, who has worked on medical privacy for over a decade (see his links to relevant research).

If you are concerned (and in my view, you really should be — once your data is uploaded it will be pretty much public forever), then discuss it with your GP and write off to the Department of Health [*]. The Guardian gives some suitable text, or you could use the opt-out letter that FIPR developed last year (PDF or Word versions available).

[*] See Ross’s comment on this article first!