Government ignores Personal Internet Security

At the end of last week the Government published their response to the House of Lords Science and Technology Committee Report on Personal Internet Security. The original report was published in mid-August and I blogged about it (and my role in assisting the Committee) at that time.

The Government has turned down pretty much every recommendation. The most positive verbs used were “consider” or “working towards setting up”. That’s more than a little surprising, because the report made a great deal of sense, and their lordships aren’t fools. So is the Government ignorant, stupid, or in thrall to some special interest group?

On balance I think it starts from ignorance.

Some of the most compelling evidence that the Committee heard was at private meetings in the USA from companies such as Microsoft, Cisco, Verisign, and in particular from Team Cymru, who monitor the “underground economy”. I don’t think that the Whitehall mandarins have heard these briefings, or have bothered to read the handful of published articles such as this one in ;login:, or this more recent analysis that will appear at CCS next week. If the Government were up to speed on what researchers are documenting, they wouldn’t be arguing that there is more crime solely because there are more users — and they could not possibly say that they “refute the suggestion […] that lawlessness is rife”.

However, we cannot rule out stupidity.

Some of the Select Committee recommendations were intended to address the lack of authoritative data — and these were rejected as well. The Government doesn’t think it’s urgently necessary to capture more information about the prevalence of eCrime; they don’t think that having the banks collate crime reports gets all the incentives wrong; and they “do not accept that the incidence of loss of personal data by companies is on an upward path” (despite there being no figures in the UK to support or refute that notion, and considerable evidence of regular data loss in the United States).

The bottom line is that the Select Committee did some “out-of-the-box thinking” and came up with a number of proposals for measurement, for incentive alignment, and for bolstering law enforcement’s response to eCrime. The Government has settled for complacency, quibbling about the wording of the recommendations and picking out a handful of the more minor recommendations to “note”, to “consider” and to “keep under review”.

A whole series of missed opportunities.

Upgrade and new theme

Regular readers may have noticed that Light Blue Touchpaper was down most of today. This was due to the blog being compromised through several WordPress vulnerabilities. I’ve now cleaned this up, restored from last night’s backups and upgraded WordPress. A downside is that our various customizations need substantial modification before they will work again, most notably the theme, which is based on Blix and has not been updated since WordPress 1.5. Email will also not work, due to this bug. I am working on a fix to this and other problems, so please accept my apologies in the meantime.

Phishing take-down paper wins ‘Best Paper Award’ at APWG eCrime Researchers Summit

Richard Clayton and I have been tracking phishing sites for some time. Back in May, we reported on how quickly phishing websites are removed. Subsequently, we have also compared the performance of banks in removing websites and found evidence that ISPs and registrars are initially slow to remove malicious websites.
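
The basic measurement is simple to sketch. The following toy Python fragment (illustrative only, not our actual monitoring infrastructure; the URL and polling interval are hypothetical) polls a suspect site until it stops responding and reports how long it stayed up:

```python
import time
import urllib.request

def site_lifetime(url, poll_seconds=600):
    """Poll url until it stops responding; return the seconds it stayed up."""
    first_seen = None
    while True:
        try:
            urllib.request.urlopen(url, timeout=30)
            if first_seen is None:
                first_seen = time.time()   # first successful fetch
        except OSError:                    # URLError/HTTPError are subclasses
            if first_seen is not None:
                return time.time() - first_seen
        time.sleep(poll_seconds)

# e.g. site_lifetime("http://bank.example.com.phish.example/")  # hypothetical
```

Real take-down detection is subtler than this: a site may vanish from DNS, be replaced by a warning page, or come back up again, so in practice one has to look at what is actually returned rather than merely whether anything is returned.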

We have published our updated results at eCrime 2007, sponsored by the Anti-Phishing Working Group. The paper, ‘Examining the Impact of Website Take-down on Phishing’ (slides here), was selected for the ‘Best Paper Award’.

A high-level abridged description of this work also appeared in the September issue of Infosecurity Magazine.

Counters, Freshness, and Implementation

When we want to check the freshness of cryptographically secured messages, we have to use monotonic counters, timestamps or random nonces. Each of these mechanisms increases the complexity of a given system in a different way. Freshness based on counters seems to be the easiest to implement in the context of ad-hoc mesh wireless networks. One does not need to increase power consumption with an extra challenge message (containing a new random number), nor is there a need for precise time synchronisation. It sounds easy, but people in the real world are … creative. We have been working with TinyOS, an operating system that was designed for constrained hardware. TinyOS is quite a modular platform; even mesh networking is not part of the system’s core but is just one of the modules that can easily be replaced or not used at all.

[Figure: frame structures for TinyOS and TinySec on top of 802.15.4, showing all the counters. TinySec increases the length of the “data” field to store the initialisation vector.]
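
The receiver’s check itself is tiny. Here is a minimal, purely illustrative Python sketch (this is not TinySec code) of counter-based freshness: accept a message only if its counter is strictly greater than the highest value already verified from that sender. In a real frame the counter must of course be bound into the MAC (or into the initialisation vector, as in the TinySec frame above) so that an attacker cannot simply rewrite it.

```python
# Illustrative sketch of counter-based freshness checking (not TinySec code).
last_counter = {}   # sender id -> highest counter verified so far

def is_fresh(sender, counter):
    if counter <= last_counter.get(sender, -1):
        return False                 # stale or replayed message: reject
    last_counter[sender] = counter   # remember the new high-water mark
    return True                      # fresh: accept

assert is_fresh("node7", 1)          # first message accepted
assert not is_fresh("node7", 1)      # replay of the same counter rejected
assert is_fresh("node7", 5)          # counters may skip ahead (lost frames)
```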

Time to forget?

In a few hours’ time Part III of the Regulation of Investigatory Powers Act 2000 will come into effect. The commencement order means that as of October 1st a section 49 notice can be served which requires that encrypted data be “put into an intelligible form” (what you and I might call “decrypted”). Extended forms of such a notice may, under the provisions of s51, require you to hand over your decryption key, and/or, under s54, include a “no tipping off” provision.

If you fail to comply with a notice (or breach a tipping off requirement by telling someone about it) then you will have committed an offence, for which the maximum penalty is two years’ imprisonment or a fine, or both. It’s five years for “tipping off”, and also five years (an amendment made by s15 of the Terrorism Act 2006) if the case relates to “national security”.

By convention, laws in the UK very seldom have retrospective effect, so that if you do something today, Parliament is very loth to pass a law tomorrow to make your actions illegal. However, the offences in Part III relate to failing to obey a s49 notice, and although that notice can only be served on you tomorrow (or thereafter), the material may have been encrypted by you today (or before).

Potentially therefore, the police could start demanding the putting into an intelligible form, not only of information that they seize in a raid tomorrow morning, but also of material that they seized weeks, months or years ago. In the 1995 Smith case (part of Operation Starburst), the defendant received only a suspended sentence because the bulk of the material was encrypted. In this particular example, the police may be constrained, by double jeopardy or by the time that has elapsed, from serving a notice on Mr Smith; but there’s nothing in RIP itself, or the accompanying Code of Practice, to prevent them serving a s49 notice on more recently seized encrypted material if they deem it to be necessary and proportionate.

In fact, they might even be nipping round to Jack Straw’s house demanding a decryption key — as this stunt from 1999 envisaged, back when the wording of a predecessor bill was rather more inane than RIP (as eventually amended) turned out to be.

There are some defences in the statute to failing to comply with a notice — one of which is that you can claim to have forgotten the decryption key (in practice, the passphrase under which the key is stored). In such a case the prosecution (the burden of proof was amended during the passage of the Bill) must show beyond a reasonable doubt that you have not forgotten it. Since they can’t mind-read, the expectation must be that they would attempt to show regular usage of the passphrase, and invite the jury to conclude that the forgetting has been faked — and this might be hard to manage if a hard disk has been in a police evidence store for over a decade.

However, if you’re still using such a passphrase and still have access to the disk, and if the contents are going to incriminate you, then perhaps a sledgehammer might be a suitable investment.

Me? I set up my alibi long ago 🙂

Notes on FPGA DRM (part 1)

For a while I have been looking very closely at how FPGA cores are distributed (the common term is “IP cores”, or just “IP”, but I try to minimize the use of this over-used catch-all phrase). In what I hope will be a series of posts, I will mostly discuss the problem (rather than solutions), as I think it needs to be addressed and adequately defined first. I’ll start with my attempt at concise definitions of the following:

FPGA: Field Programmable Gate Arrays are generic semiconductor devices comprising interconnected functional blocks that can be programmed, and reprogrammed, to perform user-described logic functions.

Cores: ready-made functional descriptions that allow system developers to save on design cost and time by purchasing them from third parties and integrating them into their own design.

The “cores distribution problem” is easy to define but challenging to solve: how can a digital design be distributed by its designer such that he can a) enable his customers to evaluate, simulate, and integrate it into their own designs, b) limit the number of instances that can be made of it, and c) make it run only on specific devices? If this sounds like “Digital Rights Management” to you, that’s exactly what it is: DRM for FPGAs. Despite the abuses by some industries that have given DRM a bad name, in our application there may be benefits for both the design owner and the end user. We also know that meeting the three conditions above for a whole industry is challenging, and we are not even close to a solution.
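
To make condition (c) a little more concrete, here is a deliberately naive Python sketch of node-locking: the designer ships the core encrypted under a key derived from a device-unique secret, so only the licensed device can recover it. Everything here is hypothetical (the names, the “device secret”, and the toy hash-based keystream); a real scheme would need authenticated encryption and trusted hardware on the FPGA, and would still say nothing about conditions (a) and (b).

```python
from hashlib import sha256

def derive_key(device_secret: bytes, core_id: bytes) -> bytes:
    # Stand-in for a proper KDF running inside trusted hardware.
    return sha256(device_secret + core_id).digest()

def xor_keystream(data: bytes, key: bytes) -> bytes:
    # Toy keystream built by iterated hashing, for illustration only;
    # a real scheme would use an authenticated cipher such as AES-GCM.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

device_secret = b"per-device fuse value"                 # hypothetical secret
core = b"...configuration data of the licensed core..."  # hypothetical core
locked = xor_keystream(core, derive_key(device_secret, b"core-42"))

# Only a device that knows device_secret can recover the core:
assert xor_keystream(locked, derive_key(device_secret, b"core-42")) == core
```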


Web content labelling

As we all know, the web contains a certain amount of content that some people don’t want to look at, and/or do not wish their children to look at. Removing the material is seldom an option (it may well be entirely lawfully hosted, and indeed many other people may be perfectly happy for it to be there). Since centralised blocking of such material just isn’t going to happen, the best way forward is the installation of blocking software on the end-user’s machine. This software will have blacklists and whitelists provided from a central server, and it will provide some useful reassurance to parents that their youngest children have some protection. Older children can of course just turn the systems off, as has recently been widely reported for the Australian NetAlert system.

A related idea is that websites should rate themselves according to widely agreed criteria, and this would allow visitors to know what to expect on the site. Such ratings would of course be freely available, unlike the blocking software which tends to cost money (to pay for the people making the whitelists and blacklists).

I’ve never been a fan of these self-rating systems whose criteria always seem to be based on a white, middle-class, presbyterian view of wickedness, and — at least initially — were hurriedly patched together from videogame rating schemes. More than a decade ago I lampooned the then widely hyped RSACi system by creating a site that scored “4 4 4 4”, the highest (most unacceptable) score in every category: http://www.happyday.demon.co.uk/awful.htm and just recently, I was reminded of this in the context of an interview for an EU review of self-regulation.


Keep your keypads close

On a recent visit to a local supermarket I noticed something new being displayed on the keypad before the transaction starts:

[Photo of the keypad message: “Did you know that you can remove the PIN pad to enter your PIN?”]

Picking up the keypad allows the cardholder to align it such that bystanders, or the merchant, cannot observe the PIN as it is entered. On the one hand, this seems sensible (if we assume that the only way to get the PIN is by observation, that no cameras are present, and that even more cardholder liability is the solution for card fraud). On the other hand, it also makes some attacks easier. Consider, for example, the relay attack we demonstrated earlier this year, in which the crook inserts a modified card into the terminal, hoping that the merchant does not ask to examine it. Allowing the cardholder to move the keypad separates the merchant, who could detect the attack, from the transaction. Can I now hide the terminal under my jacket while the transaction is processed? Can I turn my back to the merchant? What if I found a way to tamper with the terminal? Clearly, this would make the process easier for me. We’ve been doing some more work on payment terminals and will hopefully have more to say about it soon.


NHS Computer Project Failing

The House of Commons Health Select Committee has just published a Report on the Electronic Patient Record. This concludes that the NHS National Programme for IT (NPfIT), the 20-billion-pound project to rip out all the computers in the NHS and replace them with systems that store data in central server farms rather than in the surgery or hospital, is failing to meet its stated core objective – providing clinically rich, interoperable detailed care records. What’s more, privacy’s at serious risk. Here is comment from e-Health Insider.

For the last few years I’ve been using the London Ambulance Service disaster as the standard teaching example of how things go wrong in big software projects. It looks like I will have to refresh my notes for the Software Engineering course next month!

I’ve been warning about the safety and privacy risks of the Department of Health’s repeated attempts to centralise healthcare IT since 1995. Here is an analysis of patient privacy I wrote earlier this year, and here are my older writings on the security of clinical information systems. It doesn’t give me any great pleasure to be proved right, though.

Embassy email accounts breached by unencrypted passwords

When it rains, it pours. Following the fuss over the Storm worm impersonating Tor, today Wired and The Register are covering the story of Dan Egerstad, who intercepted embassy email account passwords by setting up 5 Tor exit nodes, then published the results online. People have sniffed passwords on Tor before, and one even published a live feed. However, the sensitivity of embassies as targets, and the initial mystery over how the passwords were snooped, helped drum up media interest.

That unencrypted traffic can be read by Tor exit nodes is an unavoidable fact – if the destination does not accept encrypted information then there is nothing Tor can do to change this. The download page has a big warning, recommending users adopt end-to-end encryption. In some cases this might not be possible, for example browsing sites which do not support SSL, but for downloading email, not using encryption with Tor is inexcusable.
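
To illustrate how little effort this takes, here is a minimal Python sketch of collecting mail over TLS using the standard imaplib module (the hostname and credentials are hypothetical). A Tor exit node relaying this session sees only ciphertext:

```python
import imaplib

# IMAP over TLS (port 993): the password and the messages are encrypted
# between the client and the mail server, so an exit node sees only
# ciphertext rather than credentials.
conn = imaplib.IMAP4_SSL("mail.example.org")   # hypothetical server
conn.login("username", "password")             # hypothetical credentials
conn.select("INBOX")
status, message_ids = conn.search(None, "ALL")
print(status, message_ids)
conn.logout()
```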

Looking at who owns the IP addresses of the compromised email accounts, I can see that they are mainly commercial ISPs, generally in the country where the embassy is located, so probably set up by the individual embassy and not subject to any server-imposed security policies. Even so, it is questionable whether such accounts should be used for official business, and it is not hard to find providers which support encrypted access.

The exceptions are Uzbekistan and Iran, whose servers are controlled by their respective Ministries of Foreign Affairs, so I’m surprised that secure access is not mandated (even my university requires this). I did note that the passwords of the Uzbek accounts are very good, so they might well be allocated centrally according to a reasonable password policy. In contrast, the Iranian passwords are simply the name of the embassy, so guessable not only for these accounts but probably for any others too.

In general, if you are sending confidential information over the Internet unencrypted you are at risk, and Tor does not change this fact, but it does move those risks around. Depending on the nature of the secrets, this could be for better or for worse. Without Tor, data can be intercepted near the server, near the client and also in the core of the Internet; with Tor data is encrypted near the client, but can be seen by the exit node.

Users of unknown Internet cafés or of poorly secured wireless are at risk of interception near the client. Sometimes there is motivation to snoop traffic there but not at the exit node. For example, people may be curious what websites their flatmates browse, but it is not interesting to know that an anonymous person is browsing a controversial site. This is why, at conferences, I tunnel my web browsing via Cambridge. I know that without end-to-end encryption my data can be intercepted, but the sysadmins at Cambridge have far less incentive to misbehave than some joker sitting behind me.
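
The tunnel itself need not be anything elaborate. One way (a sketch under assumptions: a hypothetical gateway host, an SSH dynamic forward already running, for example “ssh -D 1080 user@gateway.example.ac.uk”, and the optional SOCKS support for the requests library installed) is simply to point an HTTP client at the local SOCKS proxy:

```python
import requests  # needs requests[socks] (PySocks) for SOCKS proxy support

# "socks5h" (rather than "socks5") makes DNS resolution happen at the far
# end of the tunnel too, so the local network sees neither names nor content.
proxies = {
    "http": "socks5h://127.0.0.1:1080",
    "https": "socks5h://127.0.0.1:1080",
}
r = requests.get("https://www.example.org/", proxies=proxies)
print(r.status_code)
```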

Tor has similar properties, but when it is used with unencrypted data the risks need to be carefully evaluated. When collecting email, be it over Tor, over wireless, or via any other untrustworthy medium, end-to-end encryption is essential. That embassies, which are supposed to be security conscious, do not appreciate this is disappointing.

Although I am a member of the Tor project, the views expressed here are mine alone and not those of Tor.