Monthly Archives: September 2007

Time to forget?

In a few hours’ time Part III of the Regulation of Investigatory Powers Act 2000 will come into effect. The commencement order means that as of October 1st a section 49 notice can be served which requires that encrypted data be “put into an intelligible form” (what you and I might call “decrypted”). Extended forms of such a notice may, under the provisions of s51, require you to hand over your decryption key, and/or under s54 include a “no tipping off” provision.

If you fail to comply with a notice (or breach a tipping off requirement by telling someone about it) then you will have committed an offence, for which the maximum penalty is two years’ imprisonment, or a fine, or both. It’s five years for “tipping off”, and also five years (an amendment made by s15 of the Terrorism Act 2006) if the case relates to “national security”.

By convention, laws in the UK very seldom have retrospective effect, so that if you do something today, Parliament is very loth to pass a law tomorrow to make your actions illegal. However, the offences in Part III relate to failing to obey a s49 notice, and that notice could be served on you tomorrow (or thereafter) even though the material may have been encrypted by you today (or before).

Potentially therefore, the police could start demanding the putting into an intelligible form, not only of information that they seize in a raid tomorrow morning, but also of material that they seized weeks, months or years ago. In the 1995 Smith case (part of Operation Starburst), the defendant only received a suspended sentence because the bulk of the material was encrypted. In this particular example, double jeopardy rules, or simply the time that has elapsed, may prevent the police from serving a notice on Mr Smith, but there’s nothing in RIP itself, or the accompanying Code of Practice, to prevent them serving a s49 notice on more recently seized encrypted material if they deem it to be necessary and proportionate.

In fact, they might even be nipping round to Jack Straw’s house to demand a decryption key, as this stunt from 1999 made possible (back when the wording of the predecessor bill was rather more inane than the form into which RIP was (eventually) amended).

There are some defences in the statute to failing to comply with a notice — one of which is that you can claim to have forgotten the decryption key (in practice, the passphrase under which the key is stored). In such a case the prosecution (the burden of proof was amended during the passage of the Bill) must show beyond a reasonable doubt that you have not forgotten it. Since they can’t mind-read, the expectation must be that they would attempt to show regular usage of the passphrase, and invite the jury to conclude that the forgetting has been faked — and this might be hard to manage if a hard disk has been in a police evidence store for over a decade.

However, if you’re still using such a passphrase and still have access to the disk, and if the contents are going to incriminate you, then perhaps a sledgehammer might be a suitable investment.

Me? I set up my alibi long ago 🙂

Notes on FPGA DRM (part 1)

For a while I have been looking very closely at how FPGA cores are distributed (the common term is “IP cores”, or just “IP”, but I try to minimize the use of this over-used catch-all phrase). In what I hope will be a series of posts, I will mostly discuss the problem (rather than solutions), as I think it needs to be adequately defined first. I’ll start with my attempt at concise definitions of the following:

FPGA: Field Programmable Gate Arrays are generic semiconductor devices comprising interconnected functional blocks that can be programmed, and reprogrammed, to perform user-described logic functions.

Cores: ready-made functional descriptions that allow system developers to save on design cost and time by purchasing them from third parties and integrating them into their own design.

The “cores distribution problem” is easy to define, but challenging to solve: how can a digital design be distributed by its designer such that the designer can a) enable customers to evaluate, simulate, and integrate it into their own designs, b) limit the number of instances that can be made of it, and c) make it run only on specific devices? If this sounds like “Digital Rights Management” to you, that’s exactly what it is: DRM for FPGAs. Despite the abuse by some industries that has given DRM a bad name, in our application there may be benefits for both the design owner and the end user. We also know that satisfying the three conditions above for a whole industry is challenging, and we are not even close to a solution.


Web content labelling

As we all know, the web contains a certain amount of content that some people don’t want to look at, and/or do not wish their children to look at. Removing the material is seldom an option (it may well be entirely lawfully hosted, and indeed many other people may be perfectly happy for it to be there). Since centralised blocking of such material just isn’t going to happen, the best way forward is the installation of blocking software on the end-user’s machine. This software will have blacklists and whitelists provided from a central server, and it will provide some useful reassurance to parents that their youngest children have some protection. Older children can of course just turn the systems off, as has recently been widely reported for the Australian NetAlert system.

A related idea is that websites should rate themselves according to widely agreed criteria, and this would allow visitors to know what to expect on the site. Such ratings would of course be freely available, unlike the blocking software which tends to cost money (to pay for the people making the whitelists and blacklists).

I’ve never been a fan of these self-rating systems, whose criteria always seem to be based on a white, middle-class, presbyterian view of wickedness, and which — at least initially — were hurriedly patched together from videogame rating schemes. More than a decade ago I lampooned the then widely hyped RSACi system by creating a site that scored “4 4 4 4”, the highest (most unacceptable) score in every category: http://www.happyday.demon.co.uk/awful.htm. Just recently, I was reminded of this in the context of an interview for an EU review of self-regulation.


Keep your keypads close

On a recent visit to a local supermarket I noticed something new being displayed on the keypad before the transaction starts:

“Did you know that you can remove the PIN pad to enter your PIN?”

Picking up the keypad allows the cardholder to align it such that bystanders, or the merchant, cannot observe the PIN as it is entered. On the one hand, this seems sensible (if we assume that the only way to get the PIN is by observation, that no cameras are present, and that even more cardholder liability is the solution for card fraud). On the other hand, it also makes some attacks easier. Consider, for example, the relay attack we demonstrated earlier this year, in which the crook inserts a modified card into the terminal, hoping that the merchant does not ask to examine it. Allowing the cardholder to move the keypad separates the merchant, who could detect the attack, from the transaction. Can I now hide the terminal under my jacket while the transaction is processed? Can I turn my back to the merchant? What if I found a way to tamper with the terminal? Clearly, this would make the process easier for me. We’ve been doing some more work on payment terminals and will hopefully have more to say about it soon.


NHS Computer Project Failing

The House of Commons Health Select Committee has just published a Report on the Electronic Patient Record. This concludes that the NHS National Programme for IT (NPfIT), the 20-billion-pound project to rip out all the computers in the NHS and replace them with systems that store data in central server farms rather than in the surgery or hospital, is failing to meet its stated core objective of providing clinically rich, interoperable, detailed care records. What’s more, privacy is at serious risk. Here is comment from e-Health Insider.

For the last few years I’ve been using the London Ambulance Service disaster as the standard teaching example of how things go wrong in big software projects. It looks like I will have to refresh my notes for the Software Engineering course next month!

I’ve been warning about the safety and privacy risks of the Department of Health’s repeated attempts to centralise healthcare IT since 1995. Here is an analysis of patient privacy I wrote earlier this year, and here are my older writings on the security of clinical information systems. It doesn’t give me any great pleasure to be proved right, though.

Embassy email accounts breached by unencrypted passwords

When it rains, it pours. Following the fuss over the Storm worm impersonating Tor, today Wired and The Register are covering the story of Dan Egerstad, who intercepted embassy email account passwords by setting up five Tor exit nodes, then published the results online. People have sniffed passwords on Tor before, and one person even published a live feed. However, the sensitivity of embassies as targets, and the initial mystery over how the passwords were snooped, helped drum up media interest.

That unencrypted traffic can be read by Tor exit nodes is an unavoidable fact – if the destination does not accept encrypted information then there is nothing Tor can do to change this. The download page has a big warning, recommending users adopt end-to-end encryption. In some cases this might not be possible, for example browsing sites which do not support SSL, but for downloading email, not using encryption with Tor is inexcusable.
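
Getting the email part right is not hard. As a minimal sketch (mine, not taken from any of the reports; the host name and credentials below are placeholders), Python’s standard library will happily collect mail over a TLS-protected IMAP session, so the password never crosses a Tor exit node in the clear:

    import imaplib

    # Placeholder server and credentials, purely for illustration.
    HOST = "mail.example.org"
    USER = "alice"
    PASSWORD = "correct horse battery staple"

    # IMAP4_SSL wraps the whole session in TLS, so neither a Tor exit node
    # nor anyone else on the network path sees the password or the messages.
    with imaplib.IMAP4_SSL(HOST) as conn:
        conn.login(USER, PASSWORD)
        conn.select("INBOX", readonly=True)
        status, data = conn.search(None, "ALL")
        print(status, len(data[0].split()), "messages")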

Looking at who owns the IP addresses of the compromised email accounts, I can see that they are mainly commercial ISPs, generally in the country where the embassy is located, so probably set up by the individual embassy and not subject to any server-imposed security policies. Even so, it is questionable whether such accounts should be used for official business, and it is not hard to find providers which support encrypted access.

The exceptions are Uzbekistan and Iran, whose servers are controlled by their respective Ministries of Foreign Affairs, so I’m surprised that secure access is not mandated (even my university requires this). I did note that the passwords of the Uzbek accounts are very good, so they might well be allocated centrally according to a reasonable password policy. In contrast, the Iranian passwords are simply the name of the embassy, so guessable not only for these accounts but for any others too.

In general, if you are sending confidential information over the Internet unencrypted you are at risk, and Tor does not change this fact, but it does move those risks around. Depending on the nature of the secrets, this could be for better or for worse. Without Tor, data can be intercepted near the server, near the client and also in the core of the Internet; with Tor data is encrypted near the client, but can be seen by the exit node.

Users of unknown Internet cafés or of poorly secured wireless are at risk of interception near the client. Sometimes there is motivation to snoop traffic there but not at the exit node. For example, people may be curious what websites their flatmates browse, but it is not interesting to know that an anonymous person is browsing a controversial site. This is why, at conferences, I tunnel my web browsing via Cambridge. I know that without end-to-end encryption my data can be intercepted, but the sysadmins at Cambridge have far less incentive to misbehave than some joker sitting behind me.

Tor has similar properties, but when used with unencrypted data the risks need to be carefully evaluated. When collecting email, be it over Tor, over wireless, or via any other untrustworthy medium, end-to-end encryption is essential. It is disappointing to learn that embassies, which are supposed to be security conscious, do not appreciate this.

Although I am a member of the Tor project, the views expressed here are mine alone and not those of Tor.

Analysis of the Storm Javascript exploits

On Monday I formally joined the Tor project, and it certainly has been an interesting week. Yesterday, on both the Tor internal and public mailing lists, we received several reports of spam emails advertising Tor. Of course, this wasn’t anything to do with the Tor project, and the included link was to an IP address (which varied across emails). On visiting this webpage (below), the user was invited to download tor.exe, which was not Tor but instead a trojan which, if run, would recruit the computer into the Storm (aka Peacomm and Nuwar) botnet, now believed to be the world’s largest supercomputer.

Spoofed Tor download site

Ben Laurie, amongst others, has pointed out that this attack shows that Tor must have a good reputation for it to be considered worthwhile to impersonate. So while dealing with this incident has been tedious, it could be considered a milestone in Tor’s progress. It has also generated some publicity on a few blogs. Tor has long promoted procedures for verifying the authenticity of downloads, and this attack justifies the need for such diligence.

One good piece of advice, often mentioned in relation to the Storm botnet, is that recipients of spam email should not click on the link. This is because there is malicious Javascript embedded in the webpage, intended to exploit web-browser vulnerabilities to install the trojan without the user even having to click on the download link. What I did not find much discussion of is how the exploit code actually worked.

Notably, the malware distribution site will send you different Javascript depending on the user-agent string sent by the browser. Some get Javascript tailored for vulnerabilities in that browser/OS combination while the rest just get the plain social-engineering text with a link to the trojan. I took a selection of popular user-agent strings, and investigated what I got back on sending them to one of the malware sites.
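
Reproducing that kind of probing is straightforward. The sketch below is not the code actually used; the address is a placeholder (the real sites were bare IP addresses that varied from email to email), and anything like this should only be run from an isolated, disposable machine. It simply fetches the same page under several user-agent strings and saves each response so the served Javascript can be compared offline.

    import urllib.request

    # Placeholder address; the real sites were bare IP addresses that
    # varied from spam email to spam email.
    URL = "http://203.0.113.7/"

    USER_AGENTS = [
        # IE6 on Windows XP
        "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)",
        # Firefox 2 on Windows XP
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) "
        "Gecko/20070725 Firefox/2.0.0.6",
        # Safari 3 on Mac OS X
        "Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/522.11 "
        "(KHTML, like Gecko) Version/3.0.2 Safari/522.12",
    ]

    for i, ua in enumerate(USER_AGENTS):
        req = urllib.request.Request(URL, headers={"User-Agent": ua})
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = resp.read()
        # Save each response so differences in the served Javascript can
        # be inspected safely offline.
        with open(f"response-{i}.html", "wb") as f:
            f.write(body)
        print(f"{ua[:40]:40} -> {len(body)} bytes")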


Mapping the Privila network

Last week, Richard Clayton described his investigation of the Privila internship programme. Unlike link farms, Privila doesn’t link to its own websites. Instead, they apparently depend solely on the links made to each site before they took over the domain name, and on new ones solicited through spamming. This means that normal mapping techniques, which just follow links, will not uncover Privila sites. This might be one reason they took this approach, or perhaps it was just to avoid being penalized by search engines.

The mapping approach which I implemented, as suggested by Richard, was to exploit the fact that Privila authors typically write for several websites. So, starting with one seed site, you can find more by searching for the names of authors. I used the Yahoo search API to automate this process, since the Google API has been discontinued. From the new set of websites discovered, the list of authors is extracted, allowing yet more sites to be found. These steps are repeated until no new sites are discovered (effectively a breadth-first search).
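
In outline, that loop looks something like the sketch below (my reconstruction; the page scraping and the search-API call are left as stubs, since the Yahoo API details are not shown here):

    from collections import deque

    def extract_authors(site):
        """Return the set of author names whose by-lines appear on a site.
        Stub: the real crawler scraped the articles' author pages."""
        raise NotImplementedError

    def sites_for_author(author):
        """Return the set of sites a web-search API reports for an author name.
        Stub: the real implementation queried the Yahoo search API."""
        raise NotImplementedError

    def map_network(seed_site):
        """Breadth-first search, alternating between sites and their authors."""
        known = {seed_site}
        queue = deque([seed_site])
        while queue:
            site = queue.popleft()
            for author in extract_authors(site):
                for new_site in sites_for_author(author):
                    if new_site not in known:
                        known.add(new_site)
                        queue.append(new_site)
        return known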

The end result was that starting from bustem.com, I found 294 further sites, with a total of 3,441 articles written by 124 authors (these numbers are lower than the ones in the previous post since duplicates have now been properly removed). There might be even more undiscovered sites, with a disjoint set of authors, but the current network is impressive in itself.

I have implemented an interactive Java applet visualization (using the Prefuse toolkit) so you can explore the network yourself. Both the source code, and the data used to construct the graph can also be downloaded.

Screenshot of PrivilaView applet

The dinosaurs of five years ago

A project called NSA@home has been making the rounds. It’s a gem. Stanislaw Skowronek got some old HDTV hardware off eBay and managed to build himself a pre-image brute-force attack machine against SHA-1. The claim is that it can find a pre-image for an 8-character password hash, drawn from a 64-character set, in about 24 hours.
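
As a back-of-the-envelope check (my arithmetic, not a figure from the NSA@home write-up), sweeping the full space of 64^8 candidates in a day needs an average rate of roughly three billion SHA-1 computations per second:

    candidates = 64 ** 8          # size of the password space (~2.8e14)
    seconds = 24 * 60 * 60        # one day
    rate = candidates / seconds   # hashes per second to sweep the whole space
    print(f"{candidates:.3e} candidates -> {rate:.2e} SHA-1 hashes/second")
    # 2.815e+14 candidates -> 3.26e+09 SHA-1 hashes/second

That is far more than a contemporary CPU could manage, which is where the FPGAs come in.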

The key here is that this hardware board uses 15 field programmable gate arrays (FPGAs), which are generic integrated circuits that can perform any logic function within their size limit. So Stanislaw reverse engineered the connections between the FPGAs, wrote his own designs, and now has a very powerful processing unit. FPGAs outperform general-purpose CPUs at specific tasks, especially functions that can be divided into many independently running smaller chunks operating in parallel. Some cryptographic functions are a perfect match: our own Richard Clayton and Mike Bond attacked the DES implementation in the IBM 4758 hardware security module using an FPGA prototyping board; DES was also attacked on the Transmogrifier 2a, an FPGA-based custom hardware platform; more recently, the purpose-built COPACOBANA machine used 120 low-end FPGAs operating in parallel to break DES in about 7 days; a proprietary stream cipher on RFID tokens was attacked using 16 commercial FPGA boards operating in parallel; and people are now in the midst of cracking the A5 stream cipher in real time using commercial FPGA modules. The unique development we see with NSA@home is that it uses a defunct piece of hardware.
