A false accusation of "hacking"

One particular style of phishing email runs something like this (edited for brevity):


From: service@paypalL.com
Subject: Your account was hijacked by a third party.

Dear PayPal valued account holder,

We recently noticed one or more attempts to log in your PayPal account from a foreign IP address and we have reasons to believe that your account was hijacked by a third party without your authorization.

If you recently accessed your account while traveling, the log in attempts may have initiated by you.

However if you are the rightful holder of the account, click on the link below and submit, as we try to verify your account.

The log in attempt was made from:

ISP host: sargon.cl.cam.ac.uk

etc...

Well, spare a thought for the lucky owner of sargon.cl.cam.ac.uk (not its real name), because when people receive these emails they sometimes see them as compelling evidence (kindly supplied by PayPal) of someone trying to hack into their account and steal all their money.

In practice of course, the accusation is as false as the rest of the email, which is merely designed to get you to click on a link to visit a phishing website and reveal your PayPal login credentials to the criminals.

We’ve found examples of emails mentioning our machine name in several web archives, so it looks as though this part of the rubric isn’t entirely random, but is chosen from a shortlist… and on two recent occasions people have worked out where this machine is located and have decided to get in touch with our hardworking sysadmins to complain about what they assumed were students acting in a criminal manner.

Such complaints would be straightforward to deal with, except that the “sargon” machine happens to be used for monitoring phishing website lifetimes. Fairly regularly this leads to correspondence, when people clearing up an intrusion into their machine come across our monitoring visits in their web server logs. Of course once we explain the nature of our research, everyone is happy.

Anyway, last weekend someone complained about us hijacking his PayPal account, and it was immediately assumed that this was just someone else looking at their logs, and so there was little to be unduly worried about.

The complainant was promptly asked for the evidence, and he sent back a copy of the email. Unfortunately, the University of Cambridge spam filter quietly discarded it, because it contained a phishing URL. Everyone here assumed that the matter had been forgotten about, and nothing proactive was done to follow it up.

Unfortunately, at the other end of the conversation, it looked as if Cambridge wasn’t responding, and perhaps the sysadmins were part of the criminal conspiracy. So, still concerned about the safety of his PayPal account, the complainant contacted the Metropolitan Police and the local Cambridgeshire constabulary… which would have been an interesting experiment in seeing whether eCrime is ever investigated, had this not, at heart, been an unfortunate misunderstanding. So far, no officers have appeared at our door, so hopefully not too much police time has been spent on this.

Eventually, after a little more to-ing and fro-ing, a copy of the original email arrived with the sysadmins via a @gmail account (which doesn’t completely discard phishing URLs), the penny dropped and it was all sorted out on the phone.

I’d like to draw a moral from this story, but apart from noting the wickedness of discarding valuable email merely because it superficially resembles spam, it’s not easy to cast fault more in one place than another. In particular, it’s clearly nonsense to suggest that people should just “know” that emails like this are fraudulent. If phishing emails didn’t mislead a great many people, then they’d evolve until they did!

Award Winners #2

Two years ago, almost exactly, I wrote:

Congratulations to Steven J. Murdoch and George Danezis who were recently awarded the Computer Laboratory Lab Ring (the local alumni association) award for the “most notable publication” (that’s notable as in jolly good) for the past year, written by anyone in the whole lab.

Well this year, it’s the turn of Tyler Moore and myself to win, for our APWG paper: Examining the Impact of Website Take-down on Phishing.

The obligatory posed photo, showing that we both own ties (!), is courtesy of the Science Editor of the Economist.

Tyler Moore and Richard Clayton, most notable publication 2008

Securing Network Location Awareness with Authenticated DHCP

During April–June 2006, I was an intern at Microsoft Research, Cambridge. My project, supervised by Tuomas Aura and Michael Roe, was to improve the privacy and security of mobile computer users. A paper summarizing our work was published at SecureComm 2007, but I’ve only just released the paper online: “Securing Network Location Awareness with Authenticated DHCP”.

How a computer should behave depends on its network location. Existing security solutions, like firewalls, fail to adequately protect mobile users because they assume their policy is static. This results in laptop computers being configured with fairly open policies, in order to facilitate applications appropriate for a trustworthy office LAN (e.g. file and printer sharing, collaboration applications, and custom servers). When the computer is taken home or roaming, this policy leaves an excessively large attack surface.

This static approach also harms user privacy. Modern applications broadcast a large number of identifiers which may leak privacy sensitive information (name, employer, office location, job role); even randomly generated identifiers allow a user to be tracked. When roaming, a laptop should not broadcast identifiers unless necessary, and on moving location either pseudonymous identifiers should be re-used or anonymous ones generated.

Both of these goals require a computer to be able to identify which network it is on, even when an attacker is attempting to spoof this information. Our solution was to extend DHCP to include a network location identifier, authenticated by a public-key signature. I built a proof-of-concept implementation for the Microsoft Windows Server 2003 DHCP server, and the Vista DHCP client.

A scheme like this should ideally work on small PKI-less home LANs while still permitting larger networks to aggregate multiple access points into one logical network. Achieving this requires some subtle naming and key management tricks. These techniques, and how to implement the protocols in a privacy-preserving manner, are described in our paper.
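The core idea can be sketched in a few lines: the DHCP server attaches a location identifier to its offer, signed together with a client-supplied nonce, and the client only trusts location-dependent policy after the signature verifies. This is a toy illustration, not the paper’s wire format; since Python’s standard library has no public-key signatures, an HMAC with a shared key stands in for the signature, and all names are illustrative.

```python
# Toy sketch of authenticated network location in DHCP.
# NOTE: an HMAC stands in for the public-key signature used in the
# real protocol; field names and the message layout are illustrative.
import hmac
import hashlib
import os

KEY = os.urandom(32)  # stand-in for the network's signing key


def sign_offer(location_id: str, client_nonce: bytes) -> bytes:
    """Server side: bind the location identifier to this exchange."""
    msg = location_id.encode() + client_nonce
    return hmac.new(KEY, msg, hashlib.sha256).digest()


def verify_offer(location_id: str, client_nonce: bytes, tag: bytes) -> bool:
    """Client side: accept the claimed location only if the tag verifies.

    The client's fresh nonce stops an attacker from replaying a genuine
    signed offer recorded earlier on a different network.
    """
    expected = hmac.new(KEY, location_id.encode() + client_nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A client that fails verification would fall back to the most restrictive policy (treat the network as untrusted, broadcast no identifiers), rather than refusing to connect.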

Security Economics and the EU

ENISA — the European Network and Information Security Agency — has just published a major report on security economics that was authored by Rainer Böhme, Richard Clayton, Tyler Moore and me.

Security economics has become a thriving field since the turn of the century, and in this report we make a first cut at applying it in a coherent way to the most pressing security policy problems. We very much look forward to your feedback.

(Edited Dec 2019 to update link after ENISA changed their website)

The two faces of Privila

We have discussed the Privila network on Light Blue Touchpaper before. Richard explained how Privila solicit links and I described how to map the network. Since then, Privila’s behavior has changed. Previously, their pages were dominated by adverts, but included articles written by unpaid interns. Now the articles have been dropped completely, leaving more room for the adverts.

This change would appear to harm Privila’s search rankings — the articles, carefully optimized to include desirable keywords, would no longer be indexed. However, when Google downloads the page, the articles re-appear and the adverts are gone. The web server appears to be configured to serve different pages, depending on the “User-Agent” header in the HTTP request.

For example, here’s how soccerlove.com appears in Firefox, Netscape, Opera and Internet Explorer — lots of adverts, and no article:
Soccerlove (Firefox)

In contrast, by setting the browser’s user-agent to match that of Google’s spider, the page looks very different — a prominent article and no adverts:
Soccerlove (Google)

Curiously, the Windows Live Search and Yahoo! spiders are presented with an almost empty page: just a header, but neither adverts nor articles (see update 2). You can try this yourself by using the User Agent Switcher Firefox extension and a list of user-agent strings.
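The same experiment is easy to script: fetch the same URL twice, once presenting a browser User-Agent and once presenting Googlebot’s, and compare what comes back. This is a minimal sketch; the User-Agent strings are illustrative, and the size-ratio heuristic is just one crude way to flag a large difference between the two variants.

```python
# Sketch: detect User-Agent cloaking by fetching a page under two
# different User-Agent headers and comparing the responses.
import urllib.request

# Illustrative User-Agent strings (any browser/spider pair will do).
BROWSER_UA = "Mozilla/5.0 (Windows NT 6.0; rv:2.0) Gecko/20100101 Firefox/4.0"
GOOGLEBOT_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"


def fetch_as(url: str, user_agent: str) -> bytes:
    """Fetch a page, presenting the given User-Agent header."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read()


def looks_cloaked(body_browser: bytes, body_spider: bytes,
                  threshold: float = 0.5) -> bool:
    """Crude heuristic: flag the site if the two page variants differ
    substantially in size (e.g. article swapped for adverts)."""
    if body_browser == body_spider:
        return False
    shorter = min(len(body_browser), len(body_spider))
    longer = max(len(body_browser), len(body_spider))
    return longer > 0 and shorter / longer < threshold
```

In use, one would call `looks_cloaked(fetch_as(url, BROWSER_UA), fetch_as(url, GOOGLEBOT_UA))` for each site on the list; a real check would compare extracted text rather than raw byte counts.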

I expect the interns who wrote these articles will be displeased that their articles are hidden from view. Google will doubtlessly be interested too, since their webmaster guidelines recommend against such behavior. BMW and Ricoh were delisted for similar reasons. Fortunately for Google, I’ve already shown how to build a complete list of Privila’s sites.

Update 1 (2008-03-08):
It looks like Google has removed the Privila sites from their index. For example, searches of soccerlove.com, ammancarpets.com, and canadianbattery.com all return zero results.

Update 2 (2008-03-11):
Privila appear to have fixed the problem that led to Yahoo! and Windows Live Search bots being presented with a blank page. Both of these spiders are being shown the same content as Google’s — the article with no adverts. Normal web browsers are still being sent adverts with no article.

Update 3 (2008-03-11):
Shortly after the publication of an article about Privila’s browser tricks on The Register, Privila restored the articles on the pages shown to normal web browsers. Pages presented to search engines are still not identical — they don’t contain the adverts.

Chip & PIN terminals vulnerable to simple attacks

Steven J. Murdoch, Ross Anderson and I looked at how well PIN entry devices (PEDs) protect cardholder data. Our paper will be published at the IEEE Symposium on Security and Privacy in May, though an extended version is available as a technical report. A segment about this work will appear on BBC Two’s Newsnight at 22:30 tonight.

We were able to demonstrate that two of the most popular PEDs in the UK — the Ingenico i3300 and Dione Xtreme — are vulnerable to a “tapping attack” using a paper clip, a needle and a small recording device. This allows us to record the data exchanged between the card and the PED’s processor without triggering the tamper-proofing mechanisms, in clear violation of their supposed security properties. This attack can capture the card’s PIN because UK banks have opted to issue cheaper cards that do not use asymmetric cryptography to encrypt data between the card and PED.

Ingenico attack Dione attack

In addition to the PIN, as part of the transaction, the PED reads an exact replica of the magnetic strip (for backwards compatibility). Thus, if an attacker can tap the data line between the card and the PED’s processor, he gets all the information needed to create a magnetic strip card and withdraw money out of an ATM that does not read the chip.

We also found that the certification process of these PEDs is flawed. APACS has been effectively approving PEDs for the UK market as Common Criteria (CC) Evaluated, which does not equal Common Criteria Certified (no PEDs are CC Certified). What APACS means by “Evaluated” is that an approved lab has performed the “evaluation”, but unlike CC Certified products, the reports are kept secret, and governmental Certification Bodies do not do quality control.

This process causes a race to the bottom, with PED developers able to choose labs that will approve rather than improve PEDs, at the lowest price. Clearly, the certification process needs to be more open to the cardholders, who suffer from the fraud. It also needs to be fixed such that defective devices are refused certification.

We notified APACS, Visa, and the PED manufacturers of our results in mid-November 2007, and responses arrived only in the last week or so (Visa chose to respond only a few minutes ago!). The responses are the usual claims: that our demonstrations can only be done in lab conditions, that criminals are not that sophisticated, that the threat to cardholder data is minimal, and that their “layers of security” will detect fraud. There is no evidence to support these claims. APACS state that the PEDs we examined will not be de-certified or removed, and the same goes for the labs which certified them; APACS would not even tell us who those labs are.

The threat is very real: tampered PEDs have already been used for fraud. See our press release and FAQ for basic points and the technical report where we discuss the work in detail.

Update 1 (2008-03-09): The segment of Newsnight featuring our contribution has been posted to Google Video.

Update 2 (2008-03-21): If the link above doesn’t work try YouTube: part1 and part 2.

Inane security questions

I am the trustee of a small pensions scheme, which means that every few years I have to fill in a form for The Pensions Regulator. This year the form-filling is required to be done online.

In order to register for the online system I need to supply an email address and a password (“at least 8 characters long and contain at least 1 numeric or non-alphabetic character”). So far so good.
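The stated rule (at least 8 characters, at least one numeric or non-alphabetic character) is simple enough to express directly; this is a one-off sketch of that rule, not the regulator’s actual validation code.

```python
# Sketch of the stated password rule: length >= 8, and at least one
# character that is numeric or otherwise non-alphabetic.
import re


def meets_policy(pw: str) -> bool:
    """True if pw is at least 8 chars and contains a non-letter."""
    return len(pw) >= 8 and re.search(r"[^A-Za-z]", pw) is not None
```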

If I forget this password, I will be required to answer two security questions, which I get to choose from a little shortlist. They’ve eschewed “mother’s maiden name”, but the system designer seems to have copied them from Bebo or Disney’s Mickey Mouse Club:

  • Name of your favourite entertainer?
  • Your main childhood phone number?
  • Your favourite place to visit as a child?
  • Name of your favourite teacher?
  • Your grandfather’s occupation?
  • Your best childhood friend?
  • Name your childhood hero?

Since most pension fund trustees, the people who have to provide good answers to these questions, will be in their 50s and 60s, these questions are quite clearly unsuitable.

I’ve gone with the last two… each of which turns out to be different from the password, but the answers, weirdly enough, are also at least 8 characters long and contain at least one numeric or non-alphabetic character!

Computer Misuse in Scotland

Last June I explained that the Computer Misuse Act 1990 would not be amended until April 2008 — because the amendments introduced in the Police and Justice Act 2006 were themselves to be amended by the Serious Crime Act 2007, and that was not expected to come into force until then. Also, right at the end of 2007 the CPS published their guidance on how these new offences might be prosecuted.

Now Clive Feather draws my attention to a rather significant difference in the way that the law stands in Scotland.

Although on the face of it, both Acts do not extend to Scotland (Computer Misuse is a devolved matter) in practice the Scottish Parliament has used a Sewel motion (here for the Police and Justice Act, and here for the Serious Crime Act) to keep the law in both jurisdictions the same…

HOWEVER — as Clive points out — for some currently unknown reason the Scots brought the first version of the amendments into force on 1st October 2007 with this statutory instrument.

So North of the Border the law is currently different: you can be prosecuted for denial-of-service attacks and locked up for distributing hacking tools… whereas in the rest of the country, it’s 1990 offences only for a few more weeks.

The changes that arrive in April with the Serious Crime Act won’t make much difference to the people of Scotland: all that happens is that one of the new offences stops being computer-specific and is more broadly drawn instead. Still, it makes you wonder why the denial-of-service offence in particular — which has been widely welcomed — has been delayed for over a year, when the Scots can evidently cope with two law changes rather than one.

BTW: Clive has a marked up copy of the Computer Misuse Act on his website, with pretty colours to show the current form of the Act (it’s been amended a number of times now) and how it will soon look.

Justice, in one case at least

This morning Jane Badger was acquitted of fraud at Birmingham Crown Court. The judge found there was no case to answer.

Her case was remarkably similar to that of John Munden, about whom I wrote here (and in my book here). Like John, she worked for the police; like John, she complained to a bank about some ATM debits on her bank statement that she did not recognise; like John, she was arrested and suspended from work; like John, she faced a bank (in her case, Egg) claiming that as its systems were secure, she must be trying to defraud them; and like John, she faced police expert evidence that was technically illiterate and just took the bank’s claims as gospel.

In her case, Egg said that the transactions must have been done with the card issued to her rather than using a card clone, and to back this up they produced a printout allocating a transaction code of 05 to each withdrawal, and a rubric stating that 05 meant “Integrated Circuit Card read – CVV data reliable” with in brackets the explanatory phrase “(chip read)”. This seemed strange. If the chip of an EMV card is read, the reader will verify the signature on the certificate; if its magnetic strip is read (perhaps because the chip is unserviceable) then the bank will check the CVV, which is there to prevent magnetic strip forgery. The question therefore was whether the dash in the above rubric meant “OR”, as the technology would suggest, or “AND” as the bank and the CPS hoped. The technology is explained in more detail in our recent submission to the Hunt Review of the Financial Services Ombudsman (see below). I therefore advised the defence to apply for the court to order Egg to produce the actual transaction logs and supporting material so that we could verify the transaction certificates, if any.

The prosecution folded and today Jane walked free. I hope she wins an absolute shipload of compensation from Egg!

Opting out

The British Journal of General Practice has just published an editorial I wrote on Patient confidentiality and central databases. I’m encouraging GPs to make clear to patients that it’s OK to opt out – that they won’t incur the practice’s disapproval. Some practices have distributed leaflets from www.TheBigOptOut.org while others – such as The Oakland practice – have produced their own leaflets. These practices have seen the proportion of patients opting out rise from about 1% to between 6% and 19%. The same thing happened a few years ago in Iceland, where GP participation led to 11% of the population opting out of a central database project, which as a result did not become universal. GPs can help patients do the same here.