Monthly Archives: December 2007

Hacking tool guidance finally appears

When civil servants talk about “spring” they mean before Parliament rises in July, and by “the summer” they usually mean “before the party conference season” in September. But it seems that when a minister tells a Lords Committee “the end of the summer”, they mean the last day of December. Well, it has been pretty cold recently, so I expect that concentrated their minds!

This “summer” event, which can be reported today, is the publication of the Crown Prosecution Service guidance on what should be considered before bringing prosecutions under s3A of the Computer Misuse Act, when the amendments to it come into force — probably April 2008 (for reasons that I discussed last July).

What is at issue is so-called hacking tools, and the problem arises because almost every hacking tool you can think of, from perl to nmap, is dual use — the good guys use it for good purposes, and the bad guys use it for bad. The bad guys are of course committing an offence, and the good guys are not … but the complexity surrounds “distribution”: if a good guy runs a website and a lot of bad people download the tool from it, has the good guy committed an offence?

The actual wording of the offence says “supply or offer to supply, believing that it is likely to be used to commit, or to assist in the commission of [a Computer Misuse Act s1/s3 offence]”, and so we need to know what “believing that it is likely” might mean. Whilst the law was going through Parliament, the Home Office suggested that “likely” would be a 50% test, and they promised to publish the guidance to prosecutors so we’d all know where we stood.

Anyway, that guidance is now out — and there’s no mention, surprise, surprise, of “50%”. Instead, the tests that the CPS will apply are:

  • Has the article been developed primarily, deliberately and for the sole purpose of committing a CMA offence (i.e. unauthorised access to computer material)?
  • Is the article available on a wide scale commercial basis and sold through legitimate channels?
  • Is the article widely used for legitimate purposes?
  • Does it have a substantial installation base?
  • What was the context in which the article was used to commit the offence compared with its original intended purpose?

which, after a good start using words like “primarily” and “deliberately” (which would have made for a sensible law in the first place), then goes a bit downhill: the prosecutors don’t seem to know the difference between “i.e.” and “e.g.”, appear to think that software is generally sold (!), and rather miss the point of dual use by talking about using the tool in a different “context”.

Still, the “substantial installation base” test should at least allow people to distribute perl without qualms (millions of users) — though do note that these are the tests which will be applied at the “deciding if you ought to be charged with an offence” stage, not the points of law and interpretation that the court will use in deciding your guilt.

How effective is the wisdom of crowds as a security mechanism?

Over the past year, Richard Clayton and I have been tracking phishing websites. For this work, we are indebted to PhishTank, a website where dedicated volunteers submit URLs of suspected phishing websites and vote on whether the submissions are valid. The idea behind PhishTank is to bring together the expertise and enthusiasm of people across the Internet to fight phishing attacks. The more people who participate, the larger the crowd, and the more robust it should be against errors and perhaps even against manipulation by attackers.

Not so fast. We studied the submission and voting records of PhishTank’s users, and our results are published in a paper appearing at Financial Crypto next month. It turns out that participation is very skewed. While PhishTank has several thousand registered users, a small core of around 25 moderators performs the bulk of the work, casting 74% of the votes we observed. The distributions of both submissions and votes follow a power law.
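
For readers who want to see what that skew looks like, here is a minimal sketch (not the analysis code from the paper) of the two usual checks: the share of votes cast by the most active users, and a log-log plot of the complementary CDF, whose roughly straight line is the informal signature of a power law. The file name votes_per_user.txt is simply a placeholder for whatever per-user vote counts you have to hand.

    # A minimal sketch, not the paper's analysis code: given per-user vote
    # counts, measure how skewed participation is.
    import numpy as np
    import matplotlib.pyplot as plt

    votes = np.loadtxt("votes_per_user.txt")   # hypothetical data file
    votes = np.sort(votes)[::-1]               # most active users first

    top25_share = votes[:25].sum() / votes.sum()
    print(f"top 25 users cast {top25_share:.0%} of all votes")

    # Complementary CDF on log-log axes: a roughly straight line suggests
    # a power-law (heavy-tailed) distribution of participation.
    ccdf = np.arange(1, len(votes) + 1) / len(votes)
    plt.loglog(votes, ccdf, ".")
    plt.xlabel("votes cast by a user")
    plt.ylabel("fraction of users casting at least that many votes")
    plt.show()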

This leaves PhishTank more vulnerable to manipulation than would be the case if every member of the crowd participated to the same extent. Why? If a few of the most active users stopped voting, a backlog of unverified phishing sites might build up. It also means an attacker could join the system and vote maliciously on a massive scale. Since 97% of submissions to PhishTank are verified as phishing URLs, it would be easy for an attacker to build up a good reputation by voting “phish” on randomly chosen submissions many times, and then to sprinkle in malicious votes protecting, for example, the attacker’s own phishing sites. And since over half of the phishing sites in PhishTank are duplicate rock-phish domains, a savvy attacker could build reputation by voting for these sites without otherwise contributing to PhishTank.
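
To put a rough number on how cheap that reputation-building would be, here is a back-of-the-envelope simulation. The 97% base rate is the figure quoted above; the notion of scoring a voter by agreement with the final verdict is an invented stand-in, since PhishTank’s actual reputation rules are not modelled here.

    # A back-of-the-envelope sketch: an attacker blindly votes "phish" on
    # randomly chosen submissions, of which ~97% really are phish (figure
    # from the post above). Any scheme that rewards agreement with the
    # final verdict will rate such a voter highly; PhishTank's real
    # reputation rules are not modelled.
    import random

    random.seed(1)
    BASE_RATE = 0.97   # fraction of submissions that are genuine phish
    N_VOTES = 1000     # votes cast by the lazy attacker

    correct = sum(random.random() < BASE_RATE for _ in range(N_VOTES))
    print(f"attacker agrees with the verdict on {correct / N_VOTES:.1%} of votes")
    # Typically around 97% -- indistinguishable, by accuracy alone, from a
    # diligent volunteer, which is what makes the later malicious votes cheap.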

So crowd-sourcing your security decisions can leave you exposed to manipulation. But how does PhishTank compare to the feeds maintained by the specialist website take-down companies hired by the banks? Well, we compared PhishTank’s feed to a feed from one such company, and found the company’s feed to be slightly more complete and significantly faster in confirming phishing websites. This is because a company can afford to pay employees to verify its submissions.

We also found that users who vote less often are more likely to vote incorrectly, and that users who commit many errors tend to have voted on the same URLs.

Despite these problems, we are not arguing against user participation in the design of security mechanisms, nor do we believe that PhishTank should throw in the towel. Some improvements could be made by automating the obvious categorisations so that only the hard decisions are left to PhishTank’s users. In any case, we urge caution before turning a security decision over to a crowd.

Infosecurity Magazine has written a news article describing this work.

Fatal wine waiters

I’ve written before about “made for AdSense” (MFA) websites — those parts of the web that are created solely to host lots of (mainly Google) ads, and thereby make their creators loads of money.

Well, this one, “hallwebhosting.com”, is a little different. I first came across it a few months back when it was clearly still under development, but it seems to have settled down now — so it’s worth looking at exactly what they’re doing.

The problem that such sites have is that they need to create lots of content really quickly, get indexed by Google so that people can find them, and then wait for the clicks (and the money) to roll in. The people behind hallwebhosting have had a cute idea for this — they take existing content from other sites and do word substitutions on sentences to produce what they clearly intend to be identical in meaning (so the site will figure in web search results), but different enough that the indexing spider won’t treat it as identical text.

So, for example, this section from Wikipedia’s page on Windows Server 2003:

Released on April 24, 2003, Windows Server 2003 (which carries the version number 5.2) is the follow-up to Windows 2000 Server, incorporating compatibility and other features from Windows XP. Unlike Windows 2000 Server, Windows Server 2003’s default installation has none of the server components enabled, to reduce the attack surface of new machines. Windows Server 2003 includes compatibility modes to allow older applications to run with greater stability.

becomes:

Released on April 24, 2003, Windows Server 2003 (which carries the form quantity 5.2) is the follow-up to Windows 2000 Server, incorporating compatibility and other skin from Windows XP. Unlike Windows 2000 Server, Windows Server 2003’s evasion installation has none of the attendant workings enabled, to cut the molest outward of new machines. Windows Server 2003 includes compatibility modes to allow big applications to gush with larger stability.
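
Whether this trick fools a search engine depends on how it detects duplicates. One textbook approach is shingling — comparing sets of consecutive word n-grams — and the sketch below (an illustration of that technique, not a claim about what any particular spider actually does) shows how even a handful of substitutions drives the similarity score between the two passages above well down.

    # A minimal sketch of shingle-based near-duplicate detection -- a
    # textbook technique, not a description of any real search engine.
    def shingles(text, n=3):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 1.0

    original = ("Windows Server 2003 includes compatibility modes to allow "
                "older applications to run with greater stability")
    respun = ("Windows Server 2003 includes compatibility modes to allow "
              "big applications to gush with larger stability")

    print(jaccard(shingles(original), shingles(respun)))
    # Swapping just three words breaks most of the 3-word shingles, so the
    # score falls far below a typical near-duplicate threshold.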

I first noticed this site because they rendered a Wikipedia article about my NTP DDoS work, entitled “NTP server misuse and abuse”, into “NTP wine waiter knock about and abuse” … the contents of which almost make sense:

“In October 2002, one of the first known hand baggage of phase wine waiter knock about resulted in troubles for a mess wine waiter at Trinity College, Dublin”

for doubtless a fine old university has wine waiters to spare, and a mess for them to work in.

Opinions around here differ as to whether this is machine translation (as in all those old stories about “Out of sight, out of mind” being translated to Russian and then back as “Invisible idiot”) or imaginative use of a thesaurus where “wine waiter” is a hyponym of “server”.
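
If it is the thesaurus explanation, the mechanism need be no cleverer than replacing each word with a related term while ignoring which sense of the word is meant. The sketch below uses a tiny hand-written substitution table purely for illustration; whatever word list and method hallwebhosting.com actually uses is unknown.

    # A minimal sketch of context-free word substitution; the synonym table
    # is invented for illustration and ignores word sense entirely.
    import re

    SYNONYMS = {
        "server":   "wine waiter",   # a valid synonym, but the wrong sense
        "terminal": "fatal",
        "default":  "evasion",
        "features": "skin",
    }

    def respin(text):
        def swap(match):
            word = match.group(0)
            return SYNONYMS.get(word.lower(), word)
        return re.sub(r"[A-Za-z]+", swap, text)

    print(respin("Windows Server 2003 includes a terminal server by default."))
    # -> "Windows wine waiter 2003 includes a fatal wine waiter by evasion."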

So far as I can see, this is all potentially lawful — Wikipedia is licensed under the GNU Free Documentation License, so if there were an acknowledgement of the original article’s authors then all would be fine. But there isn’t — so in fact, all is not fine!

However, even if this oversight (if oversight it is) were corrected, some articles are clearly copyright infringements.

For example, this article from shellaccounts.biz entitled Professional Web Site Hosting Checklist appears to be entirely covered by copyright, yet it has been rendered into this amusement:

In harmony to create sure you get what you’ve been looking for from a qualified confusion put hosting server, here are a few stuff you should take into tally before deciding on a confusion hosting provider.

where you’ll see that “site” has become “put”, “web” has become “confusion” (!) and, later on, “requirements” becomes “food”, which leads to further hilarity.

However, beyond the laughter, this is pretty clearly yet another ham-fisted attempt to clutter up the web with dross in the hope of making money. This time it’s not Google AdWords but banner ads and other franchised links; it’s still essentially “MFA”, though. These types of site will continue to appear until advertisers get more savvy about the websites they don’t wish to be associated with — at which point the flow of money will cease and the sites will disappear.

To finish by being lighthearted again, the funniest page (so far) is the reworking of the Wikipedia article on “Terminal Servers” … since “server” once again becomes “wine waiter”, but “terminal”, naturally enough, becomes “fatal”. The image is clear.

Index on Censorship: Shifting Borders

The latest issue of the journal “Index on Censorship” is dedicated to the topic of Internet censorship and features an article, “Shifting Borders”, by Ross Anderson and me. In it, we argue that it is wrong to claim that the Internet is free from barriers. They exist and, while they often align with national boundaries, they are hopefully lower than their physical counterparts.

However, the changing nature of the end-to-end principle is increasing the significance of barriers that stem from industry structure — which companies host controversial information, where they do business, which markets they compete in and what corporate partnerships are involved. The direction these take will have a significant impact on the scale of Internet censorship.

The rest of the journal is well worth reading, with authors including Xeni Jardin, David Weinberger and Jimmy Wales. I can especially recommend taking a look at Nart Villeneuve’s article, “Evasion Tactics”, which is also published on his blog. Unfortunately, access to the full online version of the journal is restricted to subscribers.

Covert channel vulnerabilities in anonymity systems

My PhD thesis — “Covert channel vulnerabilities in anonymity systems” — has now been published:

The spread of wide-scale Internet surveillance has spurred interest in anonymity systems that protect users’ privacy by restricting unauthorised access to their identity. This requirement can be considered as a flow control policy in the well established field of multilevel secure systems. I apply previous research on covert channels (unintended means to communicate in violation of a security policy) to analyse several anonymity systems in an innovative way.

One application for anonymity systems is to prevent collusion in competitions. I show how covert channels may be exploited to violate these protections and construct defences against such attacks, drawing from previous covert channel research and collusion-resistant voting systems.

In the military context, for which multilevel secure systems were designed, covert channels are increasingly eliminated by physical separation of interconnected single-role computers. Prior work on the remaining network covert channels has been solely based on protocol specifications. I examine some protocol implementations and show how the use of several covert channels can be detected and how channels can be modified to resist detection.

I show how side channels (unintended information leakage) in anonymity networks may reveal the behaviour of users. While drawing on previous research on traffic analysis and covert channels, I avoid the traditional assumption of an omnipotent adversary. Rather, these attacks are feasible for an attacker with limited access to the network. The effectiveness of these techniques is demonstrated by experiments on a deployed anonymity network, Tor.

Finally, I introduce novel covert and side channels which exploit thermal effects. Changes in temperature can be remotely induced through CPU load and measured by their effects on crystal clock skew. Experiments show this to be an effective attack against Tor. This side channel may also be usable for geolocation and, as a covert channel, can cross supposedly infallible air-gap security boundaries.

This thesis demonstrates how theoretical models and generic methodologies relating to covert channels may be applied to find practical solutions to problems in real-world anonymity systems. These findings confirm the existing hypothesis that covert channel analysis, vulnerabilities and defences developed for multilevel secure systems apply equally well to anonymity systems.

Steven J. Murdoch, Covert channel vulnerabilities in anonymity systems, Technical report UCAM-CL-TR-706, University of Cambridge, Computer Laboratory, December 2007.
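
As a rough illustration of the measurement half of the thermal attack described in the abstract above (and not of the thesis’s actual code), clock skew can be estimated by repeatedly sampling a remote machine’s idea of the time alongside the local clock and fitting a straight line to the offset; temperature changes then show up as changes in the fitted slope across successive windows. The get_remote_timestamp function below is a placeholder for whatever timestamp source (TCP timestamps, application-layer timestamps, etc.) is actually available.

    # A minimal sketch of remote clock-skew estimation, assuming some source
    # of remote timestamps is available; an illustration only, not the
    # thesis's measurement code.
    import time
    import numpy as np

    def get_remote_timestamp():
        """Placeholder: return the remote machine's notion of the current time."""
        raise NotImplementedError("supply a real timestamp source here")

    def estimate_skew(samples=100, interval=1.0):
        local, offset = [], []
        for _ in range(samples):
            t_local = time.monotonic()
            offset.append(get_remote_timestamp() - t_local)
            local.append(t_local)
            time.sleep(interval)
        # Least-squares fit of offset against local time: the slope is the
        # relative clock skew (multiply by 1e6 for parts per million).
        skew, _intercept = np.polyfit(local, offset, 1)
        return skew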

Privacy Enhancing Technologies Symposium (PETS 2008)

I am on the program committee for the Privacy Enhancing Technologies Symposium (previously the PET Workshop), which this year will be held in Leuven, Belgium, 23–25 July 2008. PETS is one of the leading venues for research in privacy, so if you have any relevant research, I can thoroughly recommend submitting it here.

In addition to the main paper session, a new feature this year is HotPETS, which gives the opportunity for short presentations on new and exciting ideas that may not yet be mature enough for publication. As usual, proposals for panels are also invited.

The deadline for submissions is 19 February 2008 (except for HotPETS, which is 11 April 2008). More details can be found in the Call For Papers.

A conspicuous contribution!

When people are up for an award at the Oscars or some other prestigious event, they generally know all about it beforehand. So they turn up on the day with an “impromptu” speech tucked away in a pocket, and they’ve a glassy smile to hand when it turns out that they’ve been overlooked for yet another year…

LINX, the London Internet Exchange, doesn’t work that way, so I’d no previous inkling when they recently gave me their 2007 award for a “conspicuous contribution”.

LINX conspicuous contribution award 2007

This award was first given in 2006 to Nigel Titley, who was a LINX council member from its 1994 formation through to 2006, and his contribution is crystal clear to all. My own was perhaps a little less obvious. I have regularly attended LINX general meetings from 1998 onwards — even after I became an academic, because attending LINX meetings is one of the ways that I continue to consult for THUS plc (aka Demon Internet), my previous employer. I’ve often given talks at meetings, or just asked awkward questions of the LINX board from the floor.

But I suspect that the main reason I got the award is my contribution to many of LINX’s Best Current Practice (BCP) documents, on everything from traceability to spam. These documents are hugely influential. They show the industry the best ways to do things — spreading knowledge to all of the companies, not keeping it within the largest and most competent. They show Government and the regulators that the industry is responsible and can explain why it works the way it does. They educate end-users about the best way of doing things and — when there’s a dispute with an abuse@ team — show that other ISPs will take the same dim view of their spamming as their current provider does (which reduces churn and helps everyone to work things out sensibly).

Of course I haven’t worked on these documents in isolation — the whole point is that they’re a distillation of Best Practice from across the whole industry, and so there have been dozens of people from dozens of companies attending meetings, contributing text, reading drafts, and then eventually voting for their adoption at formal LINX meetings.

When you step back and think about it, it’s quite remarkable that so many companies from within a fiercely competitive industry are prepared, like THUS, to put their resources into co-operation in this way. I think it’s partly far-sightedness (a belief that self-regulation is much to be preferred to the imposition of standards from outside), and partly the inherent culture of the Internet, where you cannot stand alone but have to co-operate with other companies so that your customers can interwork.

Anyway, when I was given the award, I should have pulled out a neat little speech along the above lines, and said thank you to the whole industry, thank you to THUS, thank you to colleagues, and particularly thank you to Phil Male, who had faith that my consultancy would be of ongoing value… but it was all a surprise and I stammered out something far less eloquent. I’m really pleased to try and fix that now.