All posts by Ross Anderson

Chip and PIN on Trial

The trial of Job v Halifax plc has been set down for April 30th at 1030 in the Nottingham County Court, 60 Canal Street, Nottingham NG1 7EJ. Alain Job is an immigrant from the Cameroon who has had the courage to sue his bank over phantom withdrawals from his account. The bank refused to refund the money, making the usual claim that its systems were secure. There’s a blog post on the cavalier way in which the Ombudsman dealt with his case. Alain’s case was covered briefly in the Guardian in the run-up to a previous hearing; see also reports in Finextra here, here and (especially) here.

The trial should be interesting and I hope it’s widely reported. Whatever the outcome, it may have a significant effect on consumer protection in the UK. For years, financial regulators have been just as credulous about the banks’ claims to be in control of their information-security risk management as they were about the similar claims made in respect of their credit risk management (see our blog post on the ombudsman for more). It’s not clear how regulatory capture will (or can) be fixed in respect of credit risk, but it is just possible that a court could fix the consumer side of things. (This happened in the USA with the Judd case, as described in our submission to the review of the ombudsman service — see p 13.)

For further background reading, see blog posts on the technical failures of chip and PIN, the Jane Badger case, the McGaughey case and the failures of fraud reporting. Go back into the 1990s and we find the Halifax again as the complainant in R v Munden; John Munden was prosecuted for attempted fraud after complaining about phantom withdrawals. The Halifax couldn’t produce any evidence and he was acquitted.

The Snooping Dragon

There’s been much interest today in a report that Shishir Nagaraja and I wrote on Chinese surveillance of the Tibetan movement. In September last year, Shishir spent some time cleaning out Chinese malware from the computers of the Dalai Lama’s private office in Dharamsala, and what we learned was somewhat disturbing.

Later, colleagues from the University of Toronto followed through by hacking into one of the control servers Shishir identified (something we couldn’t do here because of the Computer Misuse Act); their report relates how the attackers had controlled malware on hundreds of other PCs, many in government agencies of countries such as India, Vietnam and the Philippines, but also in US firms such as AP and Deloitte.

The story broke today in the New York Times; see also coverage in the Telegraph, the BBC, CNN, the Times of India, AP, InfoWorld, Wired and the Wall Street Journal.

National Fraud Strategy

Today the Government “launches” its National Fraud Strategy. I qualify the verb because none of the quality papers seems to be running the story, and the press releases have not yet appeared on the websites of the Attorney General or the Ministry of Justice.

And well might Baroness Scotland be ashamed. The Strategy is a mishmash of things that are being done already with one new initiative – a National Fraud Reporting Centre, to be run by the City of London Police. This is presumably intended to defuse the Lords’ criticisms of the current system whereby fraud must be reported to the banks, not to the police. As our blog has frequently reported, banks dump liability for fraud on customers by making false claims about system security and imposing unreasonable terms and conditions. This is a regulatory failure: the FSA has been just as gullible in accepting the banking industry’s security models as it was in accepting its credit-risk models. (The ombudsman has also been eager to please.)

So what’s wrong with the new arrangements? Quite simply, the National Fraud Reporting Centre will nestle comfortably alongside the City force’s Dedicated Cheque and Plastic Crime Unit, which investigates card fraud but is funded by the banks. Given this disgraceful arrangement, which is more worthy of Uzbekistan than of Britain, you have to ask how eager the City force will be to investigate offences that bankers don’t want investigated, such as the growing number of insider frauds and chip card cloning. And how vigorously will City cops investigate their paymasters for the fraud of claiming that their systems are secure, when they’re not, in order to avoid paying compensation to defrauded accountholders? The purpose of the old system was to keep the fraud figures artificially low while enabling the banks to control such investigations as did take place. And what precisely has changed?

The lessons of the credit crunch just don’t seem to have sunk in yet. The Government just can’t kick the habit of kowtowing to bankers.

Finland privacy judgment

In a case that will have profound implications, the European Court of Human Rights has issued a judgment against Finland in a medical privacy case.

The complainant was a nurse at a Finnish hospital, and also HIV-positive. Word of her condition spread among colleagues, and her contract was not renewed. The hospital’s access controls were not sufficient to prevent colleagues accessing her record, and its audit trail was not sufficient to determine who had compromised her privacy. The court’s view was that health care staff who are not involved in the care of a patient must be unable to access that patient’s electronic medical record: “What is required in this connection is practical and effective protection to exclude any possibility of unauthorised access occurring in the first place.” (Press coverage here.)
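The court’s standard has a direct engineering translation: access must be denied up front unless the requester is on the patient’s care team, and every attempt must be attributable to an identified individual. Here is a minimal sketch of that architecture; the class and field names are my own illustration, not any real hospital system.

```python
# Relationship-based access control with an audit trail: access is
# refused outright unless the staff member is on the patient's care
# team, and every attempt (granted or denied) is logged with the
# requester's identity, so a later investigation can establish who
# looked at the record. All names here are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MedicalRecord:
    patient_id: str
    care_team: set[str]                      # staff IDs involved in this patient's care
    audit_trail: list[tuple] = field(default_factory=list)

    def access(self, staff_id: str) -> bool:
        allowed = staff_id in self.care_team
        self.audit_trail.append(
            (datetime.now(timezone.utc), staff_id,
             "granted" if allowed else "denied")
        )
        return allowed

record = MedicalRecord(patient_id="P-001", care_team={"dr-a", "nurse-b"})
assert record.access("dr-a") is True        # treating clinician: allowed
assert record.access("clerk-x") is False    # uninvolved colleague: refused, and logged
```

The Finnish hospital failed on both counts: uninvolved colleagues could read the record, and the audit trail could not identify who had.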

A “practical and effective” protection test in European law will bind engineering, law and policy much more tightly together. And it will have wide consequences. Privacy campaigners, for example, can now argue strongly that the NHS Care Records service is illegal. And what will be the further consequences for the Transformational Government initiative – the “Database State”?

Security psychology

I’m currently at the first Workshop on Security and Human Behaviour at MIT, which brings together security engineers, psychologists and others interested in topics ranging from deception through usability to fearmongering. Here’s the agenda and here are the workshop papers.

The first session, on deception, was fascinating. It emphasised the huge range of problems, from detecting deception in interpersonal contexts such as interrogation through the effects of context and misdirection to how we might provide better trust signals to computer users.

Over the past seven years, security economics has gone from nothing to a thriving research field with over 100 active researchers. Over the next seven I believe that security psychology should do at least as well. I hope I’ll find enough odd minutes to live blog this first workshop as it happens!

[Edited to add:] See comments for live blog posts on the sessions; Bruce Schneier is also blogging this event.

Operational security failure

A shocking article appeared yesterday on the BMJ website. It recounts how auditors called 45 GP surgeries asking for personal information about 51 patients. In only one case were they asked to verify their identity; the attack succeeded against the other 50 patients.

This is an old problem. In 1996, when I was advising the BMA on clinical system safety and privacy, we trained the staff at one health authority to detect false-pretext phone calls, and they found 30 a week. We reported this to the Department of Health, hoping they’d introduce some operational security measures nationwide; instead the Department got furious at us for treading on their turf and ordered the HA to stop cooperating (the story’s told in my book). More recently I confronted the NHS chief executive, David Nicholson, and patient tsar Harry Cayton, with the issue at a conference early last year; they claimed the problem had gone away now that people have all these computers.

What will it take to get the Department of Health to care about patient privacy? Lack of confidentiality already costs lives, albeit indirectly. Will it require a really high-profile fatality?

Second edition

The second edition of my book “Security Engineering” came out three weeks ago. Wiley have now got round to sending me the final electronic version of the book, plus permission to put half a dozen of the chapters online. They’re now available for download here.

The chapters I’ve put online cover security psychology, banking systems, physical protection, APIs, search, social networking, elections and terrorism. That’s just a sample of how our field has grown outwards in the seven years since the first edition.

Enjoy!

Security Economics and the EU

ENISA — the European Network and Information Security Agency — has just published a major report on security economics that was authored by Rainer Böhme, Richard Clayton, Tyler Moore and me.

Security economics has become a thriving field since the turn of the century, and in this report we make a first cut at applying it in a coherent way to the most pressing security policy problems. We very much look forward to your feedback.

(Edited Dec 2019 to update link after ENISA changed their website)

Justice, in one case at least

This morning Jane Badger was acquitted of fraud at Birmingham Crown Court. The judge found there was no case to answer.

Her case was remarkably similar to that of John Munden, about whom I wrote here (and in my book here). Like John, she worked for the police; like John, she complained to a bank about some ATM debits on her bank statement that she did not recognise; like John, she was arrested and suspended from work; like John, she faced a bank (in her case, Egg) claiming that as its systems were secure, she must be trying to defraud them; and like John, she faced police expert evidence that was technically illiterate and just took the bank’s claims as gospel.

In her case, Egg said that the transactions must have been done with the card issued to her rather than using a card clone, and to back this up they produced a printout allocating a transaction code of 05 to each withdrawal, and a rubric stating that 05 meant “Integrated Circuit Card read – CVV data reliable” with in brackets the explanatory phrase “(chip read)”. This seemed strange. If the chip of an EMV card is read, the reader will verify the signature on the certificate; if its magnetic strip is read (perhaps because the chip is unserviceable) then the bank will check the CVV, which is there to prevent magnetic strip forgery. The question therefore was whether the dash in the above rubric meant “OR”, as the technology would suggest, or “AND” as the bank and the CPS hoped. The technology is explained in more detail in our recent submission to the Hunt Review of the Financial Services Ombudsman (see below). I therefore advised the defence to apply for the court to order Egg to produce the actual transaction logs and supporting material so that we could verify the transaction certificates, if any.
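The evidential gap can be made concrete with a small sketch. Under the “AND” reading the bank hoped for, a code-05 entry would prove the genuine chip was read; under the “OR” reading the technology suggests, the same code could also cover a magnetic-strip read with a matching CVV, which a cloned strip would satisfy. The function and return strings below are my own illustration, not Egg’s actual schema.

```python
# A hypothetical decoder for Egg's transaction code 05, showing what the
# entry proves under each reading of the rubric "Integrated Circuit Card
# read - CVV data reliable". Codes and wording here are illustrative.

def evidential_value(code: str, reading: str) -> str:
    """Return what a code-05 log entry actually establishes."""
    if code != "05":
        return "no claim about chip or CVV"
    if reading == "AND":
        # Bank's preferred reading: the chip certificate was verified,
        # so the genuine issued card must have been present.
        return "genuine card present"
    if reading == "OR":
        # Technically plausible reading: either the chip was read, or
        # only the magnetic strip was read and the CVV matched, which
        # a cloned strip would also satisfy.
        return "genuine card OR strip clone"
    raise ValueError("reading must be 'AND' or 'OR'")

assert evidential_value("05", "AND") == "genuine card present"
assert evidential_value("05", "OR") == "genuine card OR strip clone"
```

Only the underlying transaction logs and certificates could settle which reading was correct, which is why the defence applied for their disclosure.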

The prosecution folded and today Jane walked free. I hope she wins an absolute shipload of compensation from Egg!