Action Replay Justice

It is a sad fact that cheating and rule-breaking in sport give rise to a lot of bile amongst both competitors and supporters. Think of the furore when a top athlete fails a drugs test, or when the result of a championship final comes down to a judgement call about offside. Multiplayer computer games are no different, and while there may be some rough team sports out there, in no other setting are team players so overtly trying to beat the crap out of each other as in an online first-person shooter. Throw in a bit of teenage angst in a third of the player base and you have a massive “bile bomb” primed to explode at any moment.

For this reason, cheating, and the perception of cheating, is a really big deal in the design of online shooters. In Boom! Headshot! I set out some theories of mine on how much of the perception of cheating in computer games may be explained by skilled players inadvertently exploiting the game mechanics, but I have recently seen a shining example, in the form of Call of Duty 4: Modern Warfare (COD4), of how to address and mitigate the perception of cheating.

First let’s review two sorts of cheating that have really captured the imagination of the popular player base: wall hacks and aimbots. With a wall hack, the cheat can see his target even when the target is concealed behind an object, because the cheat has modified the graphics drivers to display walls as translucent rather than opaque (a slight simplification). Aimbots identify enemy players and assist the cheat in bringing his rifle to bear on the body of the enemy, usually the head. Many players who meet their death in situations where they cannot see how the killer managed to hit them (because they have been hiding, have been moving evasively, or are at great distance) get frustrated and let rip with accusations of cheating. Ironically, this sort of cheating is pretty rare, because widespread adoption can be effectively countered by cheat-detection software such as PunkBuster. There will always be one or two cheats with their own custom software, but the masses simply cannot cheat.

But the trick the Call of Duty 4 developers have used is the action replay. This has been done before in games for dramatic effect, but crucially COD4 shows the replay from the first-person view of the enemy who made the kill, and winds back a full 5 or 6 seconds before the kill. If you are not interested in seeing the replay, you can of course skip it. The embedded YouTube video shows multiplayer gameplay, with an action replay occurring about 40 seconds in. Now, read on to consider the effect of this…


http://www.youtube.com/watch?v=jOMik2TXLec


WordPress cookie authentication vulnerability

In my previous post, I discussed how I analyzed the recent attack on Light Blue Touchpaper. What I did not disclose was how the attacker gained access in the first place. It turned out to incorporate a zero-day exploit, which is why I haven’t mentioned it until now.

As a first step, the attacker exploited an SQL injection vulnerability. When I noticed the intrusion, I upgraded WordPress then restored the database and files from off-server backups. WordPress 2.3.1 was released less than a day before my upgrade, and was supposed to fix this vulnerability, so I presumed I would be safe.

I was therefore surprised when the attacker broke in again the following day (and created himself an administrator account). After further investigation, I discovered that he had logged into the “admin” account — nobody knows the password for this, because I set it to a long random string. Neither I nor the other administrators ever used that account, so it couldn’t have been XSS or another cookie-stealing attack. How was this possible?

From examining the WordPress authentication code I discovered that the password hashing was backwards! While the attacker couldn’t have obtained the password from the hash stored in the database, by simply hashing the entry a second time, he generated a valid admin cookie. On Monday I posted a vulnerability disclosure (assigned CVE-2007-6013) to the BugTraq and Full-Disclosure mailing lists, describing the problem in more detail.
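
In outline, the flaw looks something like the following sketch (simplified and illustrative only, not WordPress’s actual code):

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# The database stores an unsalted hash of the password ...
db_entry = md5_hex("admin's long random password")

# ... and the cookie credential is, in effect, just that entry hashed again.
valid_cookie = md5_hex(db_entry)

# So an attacker who reads the hash via SQL injection can mint a valid
# cookie without ever recovering the password itself.
forged_cookie = md5_hex(db_entry)
assert forged_cookie == valid_cookie
```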

It is disappointing to see that people are still getting this type of thing wrong. In their 1978 paper, Morris and Thompson describe the importance of one-way hashing and password salting (neither of which WordPress does properly). The issue is currently being discussed on LWN.net and the wp-hackers mailing list. Hopefully some progress will be made at getting it right this time around.
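
For contrast, here is a minimal sketch of salted, one-way password storage (illustrative only: a real system should use a deliberately slow, iterated password-hashing scheme rather than a single fast hash):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> str:
    # A fresh random salt per user defeats precomputed tables and stops one
    # guess being checked against every account at once.
    salt = os.urandom(16).hex()
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return f"{salt}${digest}"

def check_password(password: str, stored: str) -> bool:
    salt, digest = stored.split("$", 1)
    candidate = hashlib.sha256((salt + password).encode()).hexdigest()
    return hmac.compare_digest(candidate, digest)
```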

Government security failure

In breaking news, the Chancellor of the Exchequer will announce at 1530 that HM Revenue and Customs has lost the data of 15 million child benefit recipients, and that the head of HMRC has resigned.

FIPR has been saying since last November’s publication of our report on Children’s Databases for the Information Commissioner that the proposed centralisation of public-sector data on the nation’s children was not only unsafe but illegal.

But that isn’t all. The Health Select Committee recently made a number of recommendations to improve safety and privacy of electronic medical records, and to give patients more rights to opt out. Ministers dismissed these recommendations, and a poll today shows doctors are so worried about confidentiality that many will opt out of using the new shared care record system.

The report of the Lords Science and Technology Committee into Personal Internet Security also pointed out a lot of government failings in preventing electronic crime – which ministers contemptuously dismissed. It’s surely clear by now that the whole public-sector computer-security establishment is no longer fit for purpose. The next government should replace CESG with a civilian agency staffed by competent people. Ministers need much better advice than they’re currently getting.

Developing …

(added later: coverage from the BBC, the Guardian, Channel 4, the Times, Computer Weekly and e-Health Insider; and here’s the ORG Blog)

Happy Birthday ORG!

The Open Rights Group (ORG) has, today, published a report about their first two years of operation.

ORG’s origins lie in an online pledge, which got a thousand people agreeing to pay a fiver a month to fund a campaigning organisation for digital rights. This mass membership gives it credibility, and it’s used that credibility on campaigns on Digital Rights Management, Copyright Term Extension (“release the music“), Software Patents, Crown Copyright and E-Voting (for one small part of which Steven Murdoch and I stayed up into the small hours to chronicle the debacle in Bedford).

ORG is now lobbying in the highest of circles (though, like everyone else who gives the Government good advice, they aren’t always listened to), and they are getting extensively quoted in the press as journalists discover their expertise and their unique constituency.

Naturally ORG needs even more members, to become even more effective, and to be able to afford to campaign on even more issues in the future. So whilst you look at their annual report, do think about whether you can really afford not to support them!

ObDisclaimer: I’m one of ORG’s advisory council members. I’m happy to advise them to keep it up!

Google as a password cracker

One of the steps used by the attacker who compromised Light Blue Touchpaper a few weeks ago was to create an account (which he promoted to administrator; more on that in a future post). I quickly disabled the account, but while doing forensics, I thought it would be interesting to find out the account password. WordPress stores raw MD5 hashes in the user database (despite my recommendation to use salting). As with any respectable hash function, it is believed to be computationally infeasible to discover the input of MD5 from an output. Instead, someone would have to try out all possible inputs until the correct output is discovered.

So, I wrote a trivial Python script which hashed all dictionary words, but that didn’t find the target (I also tried adding numbers to the end). Then, I switched to a Russian dictionary (because the comments in the shell code installed were in Russian) but that didn’t work either. I could have found or written a better password cracker, which varies the case of letters, and does common substitutions (e.g. o → 0, a → 4) but that would have taken more time than I wanted to spend. I could also improve efficiency with a rainbow table, but this needs a large database which I didn’t have.
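
The script itself is not reproduced here, but something along these lines is all it takes (the wordlist path and the appended digits are illustrative assumptions):

```python
import hashlib

def crack_md5(target_hash, wordlist_path="/usr/share/dict/words"):
    """Try each dictionary word, plus word+digit variants, against an MD5 hash."""
    with open(wordlist_path, encoding="utf-8", errors="ignore") as wordlist:
        for line in wordlist:
            word = line.strip()
            for candidate in [word] + [word + str(d) for d in range(10)]:
                if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
                    return candidate
    return None  # not in the dictionary, which is what happened here
```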

Instead, I asked Google. I found, for example, a genealogy page listing people with the surname “Anthony”, and an advert for a house, signing off “Please Call for showing. Thank you, Anthony”. And indeed, the MD5 hash of “Anthony” was the database entry for the attacker. I had discovered his password.

In both webpages, the target hash appeared in a URL. This makes a lot of sense — I’ve even written code which does the same. When I needed to store a file indexed by a key, a simple option was to make the filename the key’s MD5 hash. This avoids the need to escape any potentially dangerous user input and is very resistant to accidental collisions. If there are too many entries to store in a single directory, creating a directory for each hash prefix gives an even distribution of files. MD5 is quite fast, and while it’s unlikely to be the best option in all cases, it is an easy solution which works pretty well.
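
A minimal sketch of that pattern (the two-character prefix length is an illustrative choice):

```python
import hashlib
from pathlib import Path

def path_for_key(key: str, root: str = "store") -> Path:
    # Hashing the key avoids escaping awkward user input in filenames,
    # and the prefix directory spreads entries evenly on disk.
    digest = hashlib.md5(key.encode()).hexdigest()
    return Path(root) / digest[:2] / digest

def store(key: str, data: bytes) -> None:
    path = path_for_key(key)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(data)
```

Expose such a path in a URL and a search engine will happily index the hash alongside the page’s plaintext, which is exactly what made the Google lookup work.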

Because of this technique, Google is acting as a hash pre-image finder, and more importantly is finding hashes of things that people have hashed before. Google is doing what it does best — storing large databases and searching them. I doubt, however, that they envisaged this use. 🙂

Government ignores Personal Medical Security

The Government has just published their response to the Health Committee’s report on The Electronic Patient Record. This response is shocking but not surprising.

For example, on pages 6-7 the Department rejects the committee’s recommendation that sealed-envelope data should be kept out of the secondary uses service (SUS). Sealed-envelope data is the stuff you don’t want shared, and SUS is the database that gives civil servants, medical researchers and others access to masses of health data. The Department’s justification (para 4, page 6) is not just an evasion but is simply untruthful: they claim that the design of SUS `ensures that patient confidentiality is protected’ when in fact it doesn’t. The data there are not pseudonymised (though the government says it’s setting up a research programme to look at this – report p 23). Already many organisations have access.

The Department also refuses to publish information about security evaluations, test results and breaches (p9) and reliability failures (p19). Their faith in security-by-obscurity is touching.

The biggest existing security problem in the NHS – that many staff carelessly give out data on the phone to anyone who asks for it – will be subject to `assessment’, which `will feed into the further implementation’. Yeah, I’m sure. But as for the recommendation that the NHS provide a substantial audit resource – as there is to detect careless and abusive disclosure from the police national computer – we just get a long-winded evasion (pp 10-11).

Finally, the fundamental changes to the NPfIT business process that would be needed to make the project work are rejected (pp 14-15): Sir Humphrey will maintain central control of IT and there will be no `catalogue’ of approved systems from which trusts can choose. And the proposals that the UK participate in open standards, along the lines of the more successful Swedish or Dutch models, draw just another long evasion (p16). I fear the whole project will just continue on its slow slide towards becoming the biggest IT disaster ever.

Government ignores Personal Internet Security

At the end of last week the Government published their response to the House of Lords Science and Technology Committee Report on Personal Internet Security. The original report was published in mid-August and I blogged about it (and my role in assisting the Committee) at that time.

The Government has turned down pretty much every recommendation. The most positive verbs used were “consider” or “working towards setting up”. That’s more than a little surprising, because the report made a great deal of sense, and their lordships aren’t fools. So is the Government ignorant, stupid, or in the thrall of some special interest group?

On balance I think it starts from ignorance.

Some of the most compelling evidence that the Committee heard was at private meetings in the USA from companies such as Microsoft, Cisco, Verisign, and in particular from Team Cymru, who monitor the “underground economy”. I don’t think that the Whitehall mandarins have heard these briefings, or have bothered to read the handful of published articles such as this one in ;login, or this more recent analysis that will appear at CCS next week. If the Government were up to speed on what researchers are documenting, they wouldn’t be arguing that there is more crime solely because there are more users — and they could not possibly say that they “refute the suggestion […] that lawlessness is rife”.

However, we cannot rule out stupidity.

Some of the Select Committee recommendations were intended to address the lack of authoritative data — and these were rejected as well. The Government doesn’t think it’s urgently necessary to capture more information about the prevalence of eCrime; they don’t think that having the banks collate crime reports gets all the incentives wrong; and they “do not accept that the incidence of loss of personal data by companies is on an upward path” (despite there being no figures in the UK to support or refute that notion, and considerable evidence of regular data loss in the United States).

The bottom line is that the Select Committee did some “out-of-the-box thinking” and came up with a number of proposals for measurement, for incentive alignment, and for bolstering law enforcement’s response to eCrime. The Government have settled for complacency, quibbling about the wording of the recommendations and picking out a handful of the more minor recommendations to “note”, to “consider” and to “keep under review”.

A whole series of missed opportunities.

Upgrade and new theme

Regular readers may have noticed that Light Blue Touchpaper was down most of today. This was due to the blog being compromised through several WordPress vulnerabilities. I’ve now cleaned this up, restored from last night’s backups and upgraded WordPress. A downside is that our various customizations need substantial modification before they will work again, most notably the theme, which is based on Blix and has not been updated since WordPress 1.5. Email also will not work, due to this bug. I am working on a fix to these and other problems, so please accept my apologies in the meantime.

Phishing take-down paper wins 'Best Paper Award' at APWG eCrime Researcher's Summit

Richard Clayton and I have been tracking phishing sites for some time. Back in May, we reported on how quickly phishing websites are removed. Subsequently, we have also compared the performance of banks in removing websites and found evidence that ISPs and registrars are initially slow to remove malicious websites.

We have published our updated results at eCrime 2007, sponsored by the Anti-Phishing Working Group. The paper, ‘Examining the Impact of Website Take-down on Phishing’ (slides here), was selected for the ‘Best Paper Award’.

A high-level abridged description of this work also appeared in the September issue of Infosecurity Magazine.

Counters, Freshness, and Implementation

When we want to check the freshness of cryptographically secured messages, we have to use monotonic counters, timestamps or random nonces. Each of these mechanisms increases the complexity of a given system in a different way. Freshness based on counters seems to be the easiest to implement in the context of ad-hoc mesh wireless networks: one does not need to spend power on an extra challenge message (containing a new random number), nor is there a need for precise time synchronisation. It sounds easy, but people in the real world are … creative. We have been working with TinyOS, an operating system designed for constrained hardware. TinyOS is quite a modular platform; even mesh networking is not part of the system’s core but is just one of the modules that can easily be replaced or not used at all.
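
As a rough illustration of counter-based freshness (a sketch of the general idea, not TinySec’s actual frame handling), the receiver only has to remember the highest counter it has accepted from each sender:

```python
import hashlib
import hmac

def protect(key: bytes, counter: int, payload: bytes):
    # The counter is authenticated together with the payload, so it cannot
    # be altered or transplanted onto a replayed frame.
    tag = hmac.new(key, counter.to_bytes(4, "big") + payload, hashlib.sha256).digest()
    return counter, payload, tag

def accept(key: bytes, last_counter: int, counter: int, payload: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, counter.to_bytes(4, "big") + payload, hashlib.sha256).digest()
    # Reject forged frames and anything whose counter does not move strictly forward.
    return hmac.compare_digest(tag, expected) and counter > last_counter
```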

Frame structures for TinyOS and TinySec on top of 802.15.4
Fig.: Structures of TinyOS and TinySec frames, with all the counters. TinySec increases the length of the “data” field to store the initialisation vector.