When Layers of Abstraction Don’t Get Along: The Difficulty of Fixing Cache Side-Channel Vulnerabilities

(co-authored with Robert Watson)

Recently, our group was treated to a presentation by Ruby Lee of Princeton University, who discussed novel cache architectures that can prevent some cache-based side-channel attacks against AES and RSA. The new architecture was fascinating, in particular because it may actually increase cache performance (though this point was spiritedly debated by several systems researchers in attendance). For the security group, though, it raised two interesting and troubling questions. What is the proper defence against side channels introduced by the processor cache? And why hasn’t one been implemented, despite these attacks having been known for years?

Continue reading When Layers of Abstraction Don’t Get Along: The Difficulty of Fixing Cache Side-Channel Vulnerabilities

Technical aspects of the censoring of archive.org

Back in December I wrote an article here on the “Technical aspects of the censoring of Wikipedia” in the wake of the Internet Watch Foundation’s decision to add two Wikipedia pages to their list of URLs where child sexual abuse images are to be found. This list is used by most UK ISPs (and blocking systems in other countries) in the filtering systems they deploy that attempt to prevent access to this material.

A further interesting censoring issue was in the news last month, and this article (a little belatedly) explains the technical issues that arose from it.

For some time, the IWF have been adding URLs from The Internet Archive (widely known as “the wayback machine”) to their list. I don’t have access to the list and so I am unable to say how many URLs have been involved, but for several months this blocking also caused some technical problems.
Continue reading Technical aspects of the censoring of archive.org

Missing the Wood for the Trees

I’ve just submitted a (rather critical) public response to an ICANN working group report on fast-flux hosting (read the whole thing here).

Many phishing websites (and other types of wickedness) are hosted on botnets, with the hostname resolving to different machines every few minutes or hours (hence the “fast” in fast-flux). This means that in order to remove the phishing website you either have to shut down the botnet — which could take months — or you must get the domain name suspended.
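To make the fast-flux behaviour concrete, here is a minimal sketch (in Python, against a hypothetical hostname of my own invention) of what an investigator watching such a domain sees: the set of A records churns continually as the botnet rotates compromised hosts.

```python
# Minimal fast-flux observer: resolve a hostname repeatedly and log
# any A records we have not seen before. For a fast-flux domain the
# set of addresses keeps growing as the botnet rotates hosts.
# "phish.example.com" is a hypothetical placeholder, not a real target.
import socket
import time

HOSTNAME = "phish.example.com"

seen = set()
for _ in range(10):
    try:
        _, _, addresses = socket.gethostbyname_ex(HOSTNAME)
    except socket.gaierror:
        break  # name suspended, or simply not resolving
    new = set(addresses) - seen
    if new:
        print(f"new hosts behind {HOSTNAME}: {sorted(new)}")
        seen.update(new)
    time.sleep(60)  # fast-flux TTLs are often only a few minutes
```

For an ordinary website the printed set stabilises after one or two iterations; for a fast-flux domain it never does, which is exactly why suspending the name, rather than chasing the hosts, is the effective remedy.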

ICANN’s report goes into lots of detail about how fast-flux hosting has been operated up to now, and sets out all sorts of characteristics that it currently displays (but of course the criminals could do something different tomorrow). It then makes some rather ill-considered suggestions about how to tackle some of these symptoms — without really understanding how that behaviour might be being used by legitimate companies and individuals.

In all this concentration on the mechanics they’ve lost sight of the key issue, which is that the domain name must be removed — and this is an area where ICANN (who look after domain names) might have something to contribute. However, their report doesn’t even tackle the different roles that registries (eg Nominet who look after the .UK infrastructure) and registrars (eg Gradwell who sell .UK domain names) might have.

From my conclusion:

The bottom line on fast-flux today is that it is almost entirely associated with a handful of particular botnets, and a small number of criminal gangs. Law enforcement action to tackle these would avoid a further need for ICANN consideration, and it would be perfectly rational to treat the whole topic as of minor importance compared with other threats to the Internet.

If ICANN are determined to deal with this issue, then they should leave the technical issues almost entirely alone. There is little evidence that the working group has the competence for considering these. Attention should be paid instead to the process issues involved, and the minimal standards of behaviour to be expected of registries, registrars, and those investigators who are seeking to have domain names suspended.

I strongly recommend adopting my overall approach of an abstract definition of the problem: The specific distinguisher of a fast-flux attack is that the dynamic nature of the DNS is exploited so that if a website is to be suppressed then it is essential to prevent the hostname resolving, rather than attempting to stop the website being hosted. The working group should consider the policy and practice issues that flow from considering how to prevent domain name resolution; rather than worrying about the detail of current attacks.

New Facebook Photo Hacks

Last March, Facebook caught some flak when hacks circulated showing how to access the private photos of any user. These were enabled by egregiously lazy design: viewing somebody’s private photos simply required determining their user ID (which shows up in search results) and then manually fetching a URL of the form:
www.facebook.com/photo.php?pid=1&view=all&subj=[uid]&id=[uid]
This hack was live for a few weeks in February, exposing some photos of Facebook CEO Mark Zuckerberg and (reportedly) Paris Hilton, before the media picked it up in March and Facebook upgraded the site.
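As a sketch of what that meant in practice (the helper below is my own reconstruction from the URL pattern above, and the hole has long since been closed), an attacker needed nothing more than the victim’s numeric uid and a loop over pid values:

```python
# Hypothetical reconstruction of the pre-fix photo URL pattern
# described above. Facebook closed this hole in March 2008, so
# these URLs no longer expose anything.
def photo_url(uid: int, pid: int = 1) -> str:
    return (
        "http://www.facebook.com/photo.php"
        f"?pid={pid}&view=all&subj={uid}&id={uid}"
    )

# An attacker who learned a victim's uid from search results could
# simply iterate pid to walk through that user's photos:
for pid in range(1, 6):
    print(photo_url(4, pid))  # uid 4 is Mark Zuckerberg's well-known ID
```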

Instead of using properly formatted PHP queries as capabilities to view photos, Facebook now verifies the requesting user against the ACL for each photo request. What could possibly go wrong? Well, as I discovered this week, the photos themselves are served from a separate content-delivery domain, leading to some problems which highlight the difficulty of building access control into an enormous, globally distributed website like Facebook.

Continue reading New Facebook Photo Hacks

Variable Length Fields in Cryptographic Protocols

Many crypto protocols contain variable length fields: the names of the participants, different sizes of public key, and so on.

In my previous post, I mentioned Liqun Chen’s (re)discovery that many protocols are broken if you don’t include the field lengths in MAC or signature computations (and, more to the point, that a bunch of ISO standards fail to warn the implementor about this issue).

The problem applies to confidentiality, as well as integrity.

Many protocol verification tools (ProVerif, for example) will assume that the attacker is unable to distinguish enc(m1, k, iv) from enc(m2, k, iv) if they don’t know k.

If m1 and m2 are of different lengths, this may not be true: the length of the ciphertext leaks information about the length of the plaintext. With Cipher Block Chaining, you can tell the length of the plaintext to the nearest block, and with stream ciphers you can tell the exact length. So you can have protocols that are “proved” correct but are still broken, because the idealized protocol doesn’t properly represent what the implementation is really doing.
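To make the leak concrete, here is a minimal sketch using the Python cryptography package, with AES-CTR standing in for a stream cipher (the choice of library and cipher modes is mine, not part of any protocol discussed here): the CBC ciphertext reveals the plaintext length only to the nearest 16-byte block, while the CTR ciphertext reveals it exactly.

```python
# Demonstrates that ciphertext length leaks plaintext length.
# Requires the 'cryptography' package. AES-CTR stands in for a
# stream cipher: it produces ciphertext of exactly the plaintext length.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(16)

def cbc_encrypt(plaintext: bytes) -> bytes:
    # PKCS#7-pad to the 16-byte AES block size, then encrypt in CBC mode
    pad = 16 - len(plaintext) % 16
    padded = plaintext + bytes([pad]) * pad
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(padded) + enc.finalize()

def ctr_encrypt(plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return enc.update(plaintext) + enc.finalize()

for m in (b"Bob", b"a participant with a long name"):
    print(len(m), len(cbc_encrypt(m)), len(ctr_encrypt(m)))
# Prints "3 16 3" then "30 32 30": CBC rounds the length up to a
# whole block, CTR leaks it exactly.
```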

If you want different plaintexts to be observationally equivalent to the attacker, you can pad the variable-length fields to a fixed length before encrypting. But if there is a great deal of variation in length, this may be a very wasteful thing to do.

The alternative approach is to change your idealization of the protocol to reflect the reality of your encryption primitive. If your implementation sends m encrypted under a stream cipher, you can idealize it as sending an encrypted version of m together with L_m (the length of m) in the clear.

Hidden Assumptions in Cryptographic Protocols

At the end of last week, Microsoft Research hosted a meeting of “Cryptoforma”, a proposed new project (a so-called “network of excellence”) to bring together researchers working on applying formal methods to security. They don’t yet know whether or not this project will get funding from the EPSRC, but I wish them good luck.

There were several very interesting papers presented at the meeting, but today I want to talk about the one by Liqun Chen, “Parsing ambiguities in authentication and key establishment protocols”.

Some of the protocol specifications published by ISO specify how the protocol should be encoded on the wire, in sufficient detail to enable different implementations to interoperate. An example of a standard of this type is the one for the public key certificates that are used in SSL authentication of web sites (and many other applications).

The security standards produced by one group within ISO (SC27) aren’t like that. They specify the abstract protocols, but give the implementor considerable leeway in how they are encoded. This means that you can have different implementations that don’t interoperate. If these implementations are in different application domains, the lack of interoperability doesn’t matter. For example, Tuomas Aura and I recently wrote a paper in which we presented a protocol for privacy-preserving wireless LAN authentication, which we rather boldly claim to be based on the abstract protocol from ISO 9798-4.

You could think of these standards as separating concerns: the SC27 folks get the abstract crypto protocol correct, and then someone else standardises how to encode it in a particular application. But does the choice of concrete encoding affect the protocol’s correctness?

Liqun Chen points out one case where it clearly does. In the abstract protocols in ISO 9798-4 and others, data fields are joined by a double vertical bar operator. If you want to find out what that double vertical bar really means, you have to spend another 66 Swiss Francs and get a copy of ISO 9798-1, which tells you that Y || Z means “the result of the concatenation of the data items Y and Z in that order”.

Oops.

When we specify abstract protocols, it’s generally understood that the concrete encoding that gets signed or MAC’d contains enough information to unambiguously identify the field boundaries: it contains length fields, a closing XML tag, or whatever. A signed message {Payee, Amount} K_A should not allow a payment of $3 to Bob12 to be mutated by the attacker into a payment of $23 to Bob1. But ISO 9798 (and a bunch of others) don’t say that. There’s nothing that says a conforming implementation can’t send the length field without authentication.
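Here is a minimal sketch of that attack (in Python, with hypothetical field values; the four-byte length prefix in the fix is my own choice of encoding, not anything mandated by the standards): under naive concatenation the two payment messages are byte-for-byte identical, so their MACs collide.

```python
import hashlib
import hmac
import os

key = os.urandom(32)

def sign(payee: bytes, amount: bytes) -> bytes:
    # Naive encoding: plain concatenation, no field boundaries
    return hmac.new(key, payee + amount, hashlib.sha256).digest()

tag_a = sign(b"Bob12", b"3")   # pay $3 to Bob12
tag_b = sign(b"Bob1", b"23")   # pay $23 to Bob1
assert tag_a == tag_b  # same bytes on the wire, so the same MAC

def sign_safe(payee: bytes, amount: bytes) -> bytes:
    # Length-prefix each field so the boundaries are unambiguous
    msg = (len(payee).to_bytes(4, "big") + payee
           + len(amount).to_bytes(4, "big") + amount)
    return hmac.new(key, msg, hashlib.sha256).digest()

assert sign_safe(b"Bob12", b"3") != sign_safe(b"Bob1", b"23")
```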

Now of course, an implementor probably wouldn’t do that. But they might.

More generally: do these abstract protocols make a bunch of implicit, undocumented assumptions about the underlying crypto primitives and encodings that might turn out not to be true?

See also: Boyd, C., “Hidden assumptions in cryptographic protocols”, IEE Proceedings E (Computers and Digital Techniques), volume 137, issue 6, November 1990.

Security issues in ubiquitous computing

I have written the security chapter for a multi-author volume on ubiquitous computing that will be published by Springer later this year. For me it was an opportunity to pull together some of the material I have been collecting for a possible second edition of my 2002 book on Security for Ubiquitous Computing—but of course a 30-page chapter can be nothing more than a brief introduction.

Anyway, here is a “release candidate” copy of the chapter, which will ship to the book editors in a couple of weeks. Comments are welcome, either on the chapter itself or, based on this preview, on what you’d like me to discuss in my own full-length book when I yield to the repeated pleas of John Wiley & Sons and sit down to write a new edition.

Marksmen, on your marks!

The beginning of a Call of Duty 4 Search and Destroy game is essentially a race. When the game starts, experienced players all make a mad dash from the starting post, heading for their preferred defensive or offensive positions to dig in before the enemy can bring their guns to bear. From these choice spots they engage the enemy within seconds, and despite moderately large maps a few hundred metres across, up to a third of the kills in a 3–5 minute game take place in the first 15 seconds. Of course there is skill in figuring out what to do next (the top 1% of players distinguish themselves through adaptability and quick thinking), but the fact remains that the opening of an S&D match is critically important.

I have previously posted about “Neo-Tactics” – unintended side-effects of low-level game algorithms which create competitive advantage. When a player seems to win without visible justification, this sort of effect causes a problem: it creates the perception of cheating. At a second level, actual cheats might deliberately manipulate their network infrastructure or game client to take advantage of the effect. Well, I think I might have found a new one…

The screenshots below give a flavour of the sort of sneaky position that players might hope to be first to reach, affording a narrow but useful line of sight through multiple windows and doorways, crossing most of the map. NB: Seasoned COD4 players will laugh at my choice of so-called sneaky position, but I am a novice and I cannot hope to reach the ingenious hideouts they have discovered after years of play-testing.


Continue reading Marksmen, on your marks!

Andy Burnham and the decline of standards

There’s a short story by (I think) Stephen Leacock which tells of declining standards: how an undergraduate, newly arrived at university, lived in awe of the sagacity of the professors, of the intelligence of the grad students, and the learning of those about to receive their degrees. By the time he was receiving his first degree, he and his class were merely of average competence. By the time his PhD was awarded there were few of his cohort with any real learning; and standards had slipped so much over time that when they made him a Professor he and his colleagues hardly knew anything at all!

Having now reached the point in my life when I’m older than half the British Cabinet, it’s perhaps no surprise to read that UK cabinet minister Andy Burnham (born when I was in the Lower Sixth), has come up with some ideas about regulating the Internet that I am deeply unimpressed with.

In a Telegraph interview he proposes that ISPs should be forced to provide censored access to the Internet with only child-friendly sites visible; that the industry should have new “take-down” targets for bad material (presumably shorter ones); that it should be easier to sue for defamation online; and that the web should be labelled with age-ratings the way that video games and films are. Of course he realises he can’t do this alone, so he’s going to ask President Obama to help out!

Unfortunately, Mr Burnham doesn’t know anything about the Internet and seems to be arguing by analogy, and with a childlike hope that merely wishing for something will make it come true.
Continue reading Andy Burnham and the decline of standards

Card fraud — what can one do?

People often ask me what they can do to avoid falling victim to card fraud when they pay with their cards at shops or use them in ATMs (for on-line card fraud tips see e-victims.org, for example). My short answer is usually “not much, except checking your statements and reporting anomalies to the bank”. This post is the longer answer — little practical things, some a bit over the top, I admit — that cardholders can do to decrease the risk of falling victim to card fraud. (Some of these will only apply to UK-issued cards, some to all smartcards, and the rest to all types of cards.)

Practical:

1. If you have a UK EMV card, ask the bank to send you a new card if it was issued before the first quarter of 2008. APACS has said that cards issued from January 2008 have an iCVV (‘integrated circuit card verification value’) in the chip that isn’t the same as the one on the magnetic stripe (CVV1). This means that if the magstripe data was read off the chip (it’s there for fallback) and written onto a blank magstripe card, it shouldn’t — if iCVVs are indeed checked — work at ATMs anywhere. The bad news is that in February 2008 only two out of four newly minted cards that we tested had iCVV, though today your chances may be better.

A PIN entry device taped together

2. In places where you are able to pick up the PIN entry device (PED), do so (Sainsbury’s actually encourages this). Firstly, it may allow you to hide your PIN from the people behind you in the queue. Secondly, it allows you to give the device a cursory inspection: if there is more than one wire coming out of the back, or the thing falls apart, you shouldn’t use it. (In the picture on the right you can see a mounted PED at a high-street shop that is crudely taped together.) In addition, be suspicious of PEDs that are mounted in an irregular way such that you can’t move or comfortably use them; this may indicate that the merchant has a very good camera angle on the keypad, and if you move the PED it may go out of focus. Of course, some stores mount their PEDs so that they can’t be moved, so you’ll have to use your judgment.

Continue reading Card fraud — what can one do?