Monthly Archives: November 2007

Hackers get busted

There is an article on BBC News about how yet another hacker running a botnet got busted. When I read the sentence “…he is said to be very bright and very skilled…”, I started thinking. How did they find him? He clearly must have made some serious mistakes, but what sort of mistakes? How does isolation influence someone’s behaviour, and how important are external opinions for staying objective?

When we write a paper, we very much appreciate it when someone is willing to read it and give some feedback. It allows us to identify loopholes in thinking, flaws in descriptions, and so forth. The feedback does not necessarily imply large changes to the text, but it very often clarifies it and makes it much more readable.

Hackers use various tools, either publicly available or written by the hacker themselves. There may be errors in the tools, but they will probably be fixed very quickly, especially if the tools are popular. Hackers often let others use their tools, whether for testing or for fame. But hacking for profit is quite a creative job, and plenty of tasks remain that cannot be automated.

So what is the danger of these manual tasks? Do hackers write down descriptions of all the procedures, with checklists, and stick to them, or do they work intuitively and become careless after a few months or years? Clearly, the first option is how intelligence agencies would deal with the problem, because they know that the human is the weakest link. But what about hackers? “…very bright and very skilled…”, but isolated from the rest of the world?

So I keep thinking, is it worth trying to reconstruct “operational procedures” for running a botnet, analyse them, identify the mistakes most likely to happen, and use such knowledge against the “cyber-crime groups”?

Theme is back

Dan Cvrček has very kindly ported over the old Blix-based theme to be compatible with WordPress 2.3 (and also hopefully more maintainable). There are a few bugs to be ironed out, for example the Authors and About pages don’t work yet, but these are being worked on. If you spot any other problems, please leave a comment on this post, or email lbt-admin

Update 2007-11-28: Authors and About should now work.

A cryptographic hash function reading guide

After a few years of spectacular advances in breaking cryptographic hash functions, NIST has announced a competition to determine the next Secure Hash Algorithm, SHA-3. SHA-0 is considered broken, SHA-1 is still secure but no one knows for how long, and the SHA-2 family is desperately slow. (Do not even think about using MD5, or MD4, for which Prof. Wang can find collisions by hand; RIPEMD-160 still stands.) Cryptographers are ecstatic about this development: they had been a bit bored since the last NIST competition, for AES, and depressed by the prospect of not having another significant primitive to design for the next few years.

The rest of us should expect the next four years to be filled with news, first about advances in design, then about attacks on the candidate hash functions, as teams bitterly try to find flaws in each other’s proposals to ensure that their own function becomes SHA-3. To fully appreciate the details of this competition, some of us may want a quick refresher on how to build a secure hash function.

Here is a list of on-line resources for catching up with the state of the art:

  1. A very quick overview of hash functions and their applications is provided by Ilya Mironov. This is very introductory material, and does not go into the deeper details of what makes these functions secure, or how to break them.
  2. Chapter 9, on Hash Functions and Data Integrity, of the Handbook of Applied Cryptography (Alfred J. Menezes, Paul C. van Oorschot and Scott A. Vanstone) provides a very good first overview of the properties expected from collision-resistant hash functions. It also presents the basic constructions for such functions from block ciphers (too slow for SHA-3), as well as from dedicated compression functions. Chapter 3 also quickly presents Floyd’s cycle-finding algorithm, which finds collisions with negligible storage requirements.
  3. If your curiosity has not been satisfied, the second stop is Prof. Bart Preneel’s thesis, entitled “Analysis and Design of Cryptographic Hash Functions”. This work provides a very good overview of the state of the art in hash function design up to the middle of the nineties (before SHA-1 was commissioned). The back-to-basics approach is very instructive, and frankly the thesis could be entitled “everything you wanted to know about hash functions and never dared ask”. Bart is one of the authors of RIPEMD-160, which is still considered secure and is an algorithm worth studying.
  4. Hash functions do look like block ciphers under the hood, and an obvious idea might be to adapt aspects of AES and turn it into such a function. Whirlpool does exactly this, and is worth reading about. One of its authors, Paulo Barreto, also maintains a very thorough bibliography of hash function proposals along with all known cryptanalytic results against them (and a cute health status indicating their security.)
  5. Prof. Wang’s attacks that forced NIST to look for better functions are a must-read, even though they get very technical very soon. A gentler introduction to these attacks is provided in Martin Schlaffer’s Master’s thesis describing how the attacks are applied to MD4.
  6. Finally it is no fun observing a game without knowing the rules: the NIST SHA-3 requirements provide detailed descriptions of what the algorithm should look like, as well as the families of attacks it should resist. After reading it you might even be tempted to submit your own candidate!
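Floyd’s cycle-finding algorithm mentioned above is worth seeing in action. Here is a minimal Python sketch, not taken from any of the resources listed; the 24-bit truncation, the seed value, and the function names are illustrative choices. It finds a collision in a deliberately weakened MD5 using only constant memory:

```python
import hashlib

def h(x: bytes) -> bytes:
    """A toy hash: MD5 truncated to 3 bytes, so collisions are feasible."""
    return hashlib.md5(x).digest()[:3]

def floyd_collision(seed: bytes = b"seed"):
    """Find two distinct inputs with the same toy hash, using O(1) memory."""
    # Phase 1: advance a tortoise one step and a hare two steps per
    # iteration until they meet somewhere inside the cycle of the
    # sequence seed, h(seed), h(h(seed)), ...
    tortoise, hare = h(seed), h(h(seed))
    while tortoise != hare:
        tortoise = h(tortoise)
        hare = h(h(hare))
    # Phase 2: restart the tortoise from the seed and walk both in
    # lockstep; the step just before they coincide yields two distinct
    # inputs that hash to the same value.
    tortoise = seed
    while h(tortoise) != h(hare):
        tortoise, hare = h(tortoise), h(hare)
    return tortoise, hare

a, b = floyd_collision()
```

Against the full 128-bit MD5 this generic walk would need around 2^64 steps; the attacks cited above succeed faster only by exploiting the internal structure of the compression function.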

Action Replay Justice

It is a sad fact that cheating and rule-breaking in sport gives rise to a lot of bile amongst both competitors and supporters. Think of the furore when a top athlete fails a drugs test, or when the result of a championship final comes down to a judgement call about offside. Multiplayer computer games are no different, and while there may be some rough team sports out there, in no other setting are team players so overtly trying to beat the crap out of each other as in an online first-person shooter. Throw in a bit of teenage angst in 1/3rd of the player base and you have a massive “bile bomb” primed to explode at any moment.

For this reason, cheating and the perception of cheating are a really big deal in the design of online shooters. In Boom! Headshot! I voiced some theories of mine on how a lot of the perceived cheating in computer games may be explained by skilled players inadvertently exploiting the game mechanics, but I have recently seen, in the game Call of Duty 4: Modern Warfare (COD4), a shining example of how to address and mitigate the perception of cheating.

First, let’s review two sorts of cheating that have really captured the imagination of the player base: wall hacks and aimbots. With a wall hack, the cheat can see his target even though the target is concealed behind an object, because the cheat has modified the graphics drivers to display walls as translucent rather than opaque (a slight simplification). Aimbots identify enemy players and assist a cheat in bringing his rifle to bear on the body of the enemy, usually the head. Many players who meet their death in situations where they cannot see how their opponent managed to hit them (because they have been hiding, have been moving evasively, or are at great distance) get frustrated and let rip with accusations of cheating. Ironically, this sort of cheating is pretty rare, because widespread adoption can be effectively countered by cheat-detection software such as PunkBuster. There will always be one or two cheats with their own custom software, but the masses simply cannot cheat.

But the trick the Call of Duty 4 developers have used is the action replay. This has been done before in games for dramatic effect, but crucially COD4 shows the replay from the first-person view of the enemy who made the kill, and winds back a full five or six seconds before it. If you would rather not watch the replay, you can of course skip it. The embedded YouTube video shows multiplayer gameplay, with an action replay occurring about 40 seconds in. Now, read on to consider the effect of this…

Continue reading Action Replay Justice

WordPress cookie authentication vulnerability

In my previous post, I discussed how I analyzed the recent attack on Light Blue Touchpaper. What I did not disclose was how the attacker gained access in the first place. It turned out to incorporate a zero-day exploit, which is why I haven’t mentioned it until now.

As a first step, the attacker exploited an SQL injection vulnerability. When I noticed the intrusion, I upgraded WordPress then restored the database and files from off-server backups. WordPress 2.3.1 was released less than a day before my upgrade, and was supposed to fix this vulnerability, so I presumed I would be safe.

I was therefore surprised when the attacker broke in again the following day (and created himself an administrator account). After further investigation, I discovered that he had logged into the “admin” account — nobody knows the password for this, because I set it to a long random string. Neither I nor the other administrators ever used that account, so it couldn’t have been XSS or another cookie-stealing attack. How was this possible?

From examining the WordPress authentication code, I discovered that the password hashing was backwards: the authentication cookie is derived from the password hash stored in the database. While the attacker couldn’t have obtained the password from that hash, by simply hashing the database entry a second time he could generate a valid admin cookie. On Monday I posted a vulnerability disclosure (assigned CVE-2007-6013) to the BugTraq and Full-Disclosure mailing lists, describing the problem in more detail.
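The flawed scheme can be sketched in a few lines of Python. This is a simplification (the real cookie also carried other fields, such as the username), but it captures the core mistake: the cookie value is derivable from the stored hash alone.

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# The database stores an unsalted MD5 of the password.
stored_hash = md5_hex("a long random admin password")

# The flawed scheme: the auth cookie is just the hash of that stored hash.
def make_cookie(password: str) -> str:
    return md5_hex(md5_hex(password))

# An attacker who can read the database never needs the password itself:
forged_cookie = md5_hex(stored_hash)
```

A sounder design derives the cookie from a secret that a database dump does not reveal, for example an HMAC over the username and an expiry time, keyed with a server-side secret.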

It is disappointing to see that people are still getting this type of thing wrong. In their 1979 paper, Morris and Thompson describe the importance of one-way hashing and password salting (neither of which WordPress does properly). The issue is currently being discussed on the wp-hackers mailing list. Hopefully some progress will be made at getting it right this time around.

Government security failure

In breaking news, the Chancellor of the Exchequer will announce at 1530 that HM Revenue and Customs has lost the data of 15 million child benefit recipients, and that the head of HMRC has resigned.

FIPR has been saying since last November’s publication of our report on Children’s Databases for the Information Commissioner that the proposed centralisation of public-sector data on the nation’s children was not only unsafe but illegal.

But that isn’t all. The Health Select Committee recently made a number of recommendations to improve safety and privacy of electronic medical records, and to give patients more rights to opt out. Ministers dismissed these recommendations, and a poll today shows doctors are so worried about confidentiality that many will opt out of using the new shared care record system.

The report of the Lords Science and Technology Committee into Personal Internet Security also pointed out a lot of government failings in preventing electronic crime – which ministers contemptuously dismissed. It’s surely clear by now that the whole public-sector computer-security establishment is no longer fit for purpose. The next government should replace CESG with a civilian agency staffed by competent people. Ministers need much better advice than they’re currently getting.

Developing …

(added later: coverage from the BBC, the Guardian, Channel 4, the Times, Computer Weekly and e-Health Insider; and here’s the ORG Blog)

Happy Birthday ORG!

The Open Rights Group (ORG) has, today, published a report about their first two years of operation.

ORG’s origins lie in an online pledge, in which a thousand people agreed to pay a fiver a month to fund a campaigning organisation for digital rights. This mass membership gives it credibility, and it has used that credibility in campaigns on Digital Rights Management, Copyright Term Extension (“release the music“), Software Patents, Crown Copyright and E-Voting (for one small part of which Steven Murdoch and I stayed up into the small hours to chronicle the debacle in Bedford).

ORG is now lobbying in the highest of circles (though, like everyone else who gives the Government good advice, they aren’t always listened to), and they are getting extensively quoted in the press, as journalists discover their expertise and their unique constituency.

Naturally ORG needs even more members, to become even more effective, and to be able to afford to campaign on even more issues in the future. So whilst you look at their annual report, do think about whether you can really afford not to support them!

ObDisclaimer: I’m one of ORG’s advisory council members. I’m happy to advise them to keep it up!

Google as a password cracker

One of the steps used by the attacker who compromised Light Blue Touchpaper a few weeks ago was to create an account (which he promoted to administrator; more on that in a future post). I quickly disabled the account, but while doing forensics, I thought it would be interesting to find out the account password. WordPress stores raw MD5 hashes in the user database (despite my recommendation to use salting). As with any respectable hash function, it is believed to be computationally infeasible to discover the input of MD5 from an output. Instead, someone would have to try out all possible inputs until the correct output is discovered.

So, I wrote a trivial Python script which hashed all dictionary words, but that didn’t find the target (I also tried adding numbers to the end). Then I switched to a Russian dictionary (because the comments in the installed shell code were in Russian), but that didn’t work either. I could have found or written a better password cracker, which varies the case of letters and makes common substitutions (e.g. o → 0, a → 4), but that would have taken more time than I wanted to spend. I could also have improved efficiency with a rainbow table, but this needs a large database which I didn’t have.
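The trivial script was something along these lines (a reconstruction, not the original; the digit-suffix rule and the wordlist handling are illustrative):

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def crack(target_hex, words):
    """Hash each word, and each word with a single digit appended,
    until one matches the target MD5 hex digest."""
    for word in words:
        for candidate in [word] + [word + str(d) for d in range(10)]:
            if md5_hex(candidate) == target_hex:
                return candidate
    return None

# e.g. crack(target, open("/usr/share/dict/words").read().split())
```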

Instead, I asked Google. I found, for example, a genealogy page listing people with the surname “Anthony”, and an advert for a house, signing off “Please Call for showing. Thank you, Anthony”. And indeed, the MD5 hash of “Anthony” was the database entry for the attacker. I had discovered his password.

In both webpages, the target hash appeared in a URL. This makes a lot of sense — I’ve even written code which does the same. When I needed to store a file, indexed by a key, a simple option was to make the filename the key’s MD5 hash. This avoids the need to escape any potentially dangerous user input, and is very resistant to accidental collisions. If there are too many entries to store in a single directory, creating a subdirectory for each hash prefix spreads the files evenly. MD5 is quite fast, and while it’s unlikely to be the best option in all cases, it is an easy solution which works pretty well.

Through this technique, Google is acting as a hash pre-image finder, and more importantly a finder of hashes of things that people have hashed before. Google is doing what it does best: storing large databases and searching them. I doubt, however, that they envisaged this use. 🙂

Government ignores Personal Medical Security

The Government has just published their response to the Health Committee’s report on The Electronic Patient Record. This response is shocking but not surprising.

For example, on pages 6–7 the Department reject the committee’s recommendation that sealed-envelope data should be kept out of the Secondary Uses Service (SUS). Sealed-envelope data is the stuff you don’t want shared, and SUS is the database that gives civil servants, medical researchers and others access to masses of health data. The Department’s justification (para 4, page 6) is not just an evasion but simply untruthful: they claim that the design of SUS `ensures that patient confidentiality is protected’ when in fact it doesn’t. The data there are not pseudonymised (though the government says it’s setting up a research programme to look at this – report p 23), and already many organisations have access.

The Department also refuses to publish information about security evaluations, test results and breaches (p9) and reliability failures (p19). Their faith in security-by-obscurity is touching.

The biggest existing security problem in the NHS – that many staff carelessly give out data on the phone to anyone who asks for it – will be subject to `assessment’, which `will feed into the further implementation’. Yeah, I’m sure. But as for the recommendation that the NHS provide a substantial audit resource – as there is to detect careless and abusive disclosure from the police national computer – we just get a long-winded evasion (pp 10-11).

Finally, the fundamental changes to the NPfIT business process that would be needed to make the project work are rejected (pp 14–15): Sir Humphrey will maintain central control of IT, and there will be no `catalogue’ of approved systems from which trusts can choose. And the proposals that the UK participate in open standards, along the lines of the more successful Swedish or Dutch models, draw just a long evasion (p 16). I fear the whole project will just continue its slow slide towards becoming the biggest IT disaster ever.