Category Archives: Cryptology

Cryptographic primitives, cryptanalysis

Light Blue Touchpaper now on HTTPS

Light Blue Touchpaper now supports TLS, so as to protect passwords and authentication cookies from eavesdropping. TLS support is provided by the Pound load-balancer, because Varnish (our reverse-proxy cache) does not support TLS.

The configuration is intended to be a reasonable trade-off between security and usability, and gets an A– on the Qualys SSL report. The cipher suite list is based on the very helpful Qualys Security Labs recommendations and Apache header re-writing sets the HttpOnly and Secure cookie flags to resist cookie hijacking.
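
For those setting up something similar, the snippet below is a sketch of the kind of Apache header re-writing involved (the exact directive on our server may differ): mod_headers appends the HttpOnly and Secure flags to outgoing Set-Cookie headers.

    # Sketch only: requires mod_headers; appends the flags to every Set-Cookie header
    Header edit Set-Cookie ^(.*)$ "$1; HttpOnly; Secure"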

As always, we might have missed something, so if you notice problems such as incompatibilities with certain browsers, please let us know at lbt-admin@cl.cam.ac.uk or in the comments.

The pre-play vulnerability in Chip and PIN

Today we have published a new paper: “Chip and Skim: cloning EMV cards with the pre-play attack”, presented at the 2014 IEEE Symposium on Security and Privacy. The paper analyses the EMV protocol, the leading smart card payment system with 1.62 billion cards in circulation, and known as “Chip and PIN” in English-speaking countries. As a result of the Target data breach, banks in the US (which have lagged behind in Chip and PIN deployment compared to the rest of the world) have accelerated their efforts to roll out Chip and PIN capable cards to their customers.

However, our paper shows that Chip and PIN, as currently implemented, still has serious vulnerabilities, which might leave customers at risk of fraud. Previously we have shown how cards can be used without knowing the correct PIN, and that card details can be intercepted as a result of flawed tamper-protection. Our new paper shows that it is possible to create clone chip cards which normal bank procedures will not be able to distinguish from the real card.

When a Chip and PIN transaction is performed, the terminal requests that the card produce an authentication code for the transaction. Part of this transaction is a number that is supposed to be random, so as to stop an authentication code being generated in advance. However, there are two ways in which this protection can be bypassed: the first requires that the Chip and PIN terminal has a poorly designed random number generator (which we have observed in the wild); the second requires that the Chip and PIN terminal or its communications back to the bank can be tampered with (which, again, we have observed in the wild).

To carry out the attack, the criminal arranges that the targeted terminal will generate a particular “random” number in the future (either by predicting which number will be generated by a poorly designed random number generator, by tampering with the random number generator, or by tampering with the random number sent to the bank). Then the criminal gains temporary access to the card (for example by tampering with a Chip and PIN terminal) and requests authentication codes corresponding to the “random” number(s) that will later occur. Finally, the attacker loads the authentication codes on to the clone card, and uses this card in the targeted terminal. Because the authentication codes that the clone card provides match those which the real card would have provided, the bank cannot distinguish between the clone card and the real one.
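
To make the sequence concrete, here is a toy Python sketch of the pre-play idea; the function names, the HMAC-based code and the counter-based terminal are all illustrative stand-ins, not the real EMV message formats.

    import hmac, hashlib

    def authentication_code(card_key: bytes, amount: int, unpredictable_number: int) -> bytes:
        # Toy stand-in for the card's MAC over transaction data (not the real EMV format)
        msg = amount.to_bytes(6, "big") + unpredictable_number.to_bytes(4, "big")
        return hmac.new(card_key, msg, hashlib.sha256).digest()[:8]

    class WeakTerminal:
        # A badly designed terminal whose "unpredictable number" is just a counter
        def __init__(self):
            self.counter = 0x1000
        def next_unpredictable_number(self) -> int:
            self.counter += 1
            return self.counter

    card_key = b"\x01" * 16              # known only to the real card and its issuer
    terminal = WeakTerminal()

    # 1. The attacker works out which "unpredictable numbers" the terminal will use later
    predicted = [0x1001, 0x1002, 0x1003]

    # 2. With brief access to the victim's card, harvest codes for those numbers
    #    (calling authentication_code here stands in for querying the real card)
    harvested = {un: authentication_code(card_key, 4200, un) for un in predicted}

    # 3. Later, the clone card replays a harvested code for the number the terminal issues
    un = terminal.next_unpredictable_number()
    assert harvested[un] == authentication_code(card_key, 4200, un)

The point is simply that once the “unpredictable number” is predictable, the authentication code can be obtained from the real card before the transaction ever happens, and the bank sees nothing amiss.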

Because the transactions look legitimate, banks may refuse to refund victims of fraud. So in the paper we discuss how bank procedures could be improved to detect whether this attack has occurred. We also describe how the Chip and PIN system could be improved. As a result of our research, work has started on mitigating one of the vulnerabilities we identified; the certification requirements for random number generators in Chip and PIN terminals have been improved, though old terminals may still be vulnerable. Attacks making use of tampered random number generators or communications are more challenging to prevent and have yet to be addressed.

Update (2014-05-20): There is press coverage of this paper in The Register, SC Magazine UK and Schneier on Security.
Update (2014-05-21): Also now covered in The Hacker News.

Current state of anonymous email usability

As part of another project, I needed to demonstrate how the various user-interface options for sending anonymous email through Mixmaster appeared to the email sender. This is very difficult to explain in words, so I recorded some screencasts. The tools I used were the Mixmaster command line tool, the Mutt email client with Mixmaster plugin, QuickSilver Lite, and finally a web-based interface.

The project is now over, but in case these screencasts are of wider interest, I’ve put them on YouTube.

Overall, the usability of Mixmaster is not great. All of the secure options are difficult to configure and use (QuickSilver Lite is probably the best), emails take a long time to be sent, recipients of anonymous email can’t send replies, and there is a high chance that the email will be dropped en route.


Financial cryptography 2014

I will be trying to liveblog Financial Cryptography 2014. I just gave a keynote talk entitled “EMV – Why Payment Systems Fail” summarising our last decade’s research on what goes wrong with Chip and PIN. There will be a paper on this out in a few months; meanwhile here’s the slides and here’s our page of papers on bank security.

The sessions of refereed papers will be blogged in comments to this post.

Why bouncing droplets are a pretty good model of quantum mechanics

Today Robert Brady and I publish a paper that solves an outstanding problem in physics. We explain the beautiful bouncing droplet experiments of Yves Couder, Emmanuel Fort and their colleagues.

For years now, people interested in the foundations of physics have been intrigued by the fact that droplets bouncing on a vibrating tray of fluid can behave in many ways like quantum mechanical particles, with single-slit and double-slit diffraction, tunneling, Anderson localisation and quantised orbits.

In our new paper, Robert Brady and I explain why. The wave field surrounding the droplet is, to a good approximation, Lorentz covariant with the constant c being the speed of surface waves. This plus the inverse square force between bouncing droplets (which acts like the Coulomb force) gives rise to an analogue of the magnetic force, which can be observed clearly in the droplet data. There is also an analogue of the Schrödinger equation, and even of the Pauli exclusion principle.

These results not only solve a fascinating puzzle, but might perhaps nudge more people to think about novel models of quantum foundations, about which we’ve written three previous papers.

How Certification Systems Fail: Lessons from the Ware Report

Research in the Security Group has uncovered various flaws in systems, despite their being certified as secure. Sometimes the certification criteria have been inadequate and sometimes the certification process has been subverted. Not only do these failures affect the owners of the system, but when evidence of certification comes up in court, the impact can be much wider.

There’s a variety of approaches to certification, ranging from the extremely generic (such as Common Criteria) to the highly specific (such as EMV), but all are (at least partially) descendants of a report by Willis H. Ware – “Security Controls for Computer Systems”. There’s much that can be learned from this report, particularly the rationale for why certification systems are set up the way they are. The differences between how Ware envisaged certification and how certification is now performed are also informative, whether those differences are for good or for ill.

Along with Mike Bond and Ross Anderson, I have written an article for the “Lost Treasures” edition of IEEE Security & Privacy where we discuss what can be learned, about how today’s certifications work and should work, from the Ware report. In particular, we explore how the failure to follow the recommendations in the Ware report can explain why flaws in certified banking systems were not detected earlier. Our article, “How Certification Systems Fail: Lessons from the Ware Report” is available open-access in the version submitted to the IEEE. The edited version, as appearing in the print edition (IEEE Security & Privacy, volume 10, issue 6, pages 40–44, Nov‐Dec 2012. DOI:10.1109/MSP.2012.89) is only available to IEEE subscribers.

Yet more banking industry censorship

Yesterday, banking security vendor Thales sent this DMCA takedown request to John Young who runs the excellent Cryptome archive. Thales want him to remove an equipment manual that has been online since 2003 and which was valuable raw material in research we did on API security.

Banks use hardware security modules (HSMs) to manage the cryptographic keys and PINs used to authenticate bank card transactions. These used to be thought to be secure. But their application programming interfaces (APIs) had become unmanageably complex, and in the early 2000s Mike Bond, Jolyon Clulow and I found that by sending sequences of commands to the machine that its designers hadn’t anticipated, it was often possible to break the device spectacularly. This became a thriving field of security research.
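
To give a flavour of the problem, here is a deliberately over-simplified Python sketch; the two commands and the XOR “wrapping” are invented for illustration and are far cruder than any real HSM API, but they show how commands that are individually reasonable can be chained to extract a key in the clear.

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    class ToyHSM:
        """A mock HSM: keys only leave the device 'wrapped' (encrypted) under other keys."""
        def __init__(self):
            self.master = os.urandom(16)

        def import_key(self, clear_key: bytes) -> bytes:
            # Command 1: load a key, returning a token (the key wrapped under the master key)
            return xor(self.master, clear_key)

        def export_key(self, key_token: bytes, kek_token: bytes) -> bytes:
            # Command 2: wrap one stored key under another, e.g. to share it with a second HSM
            key = xor(self.master, key_token)
            kek = xor(self.master, kek_token)
            return xor(kek, key)

    hsm = ToyHSM()
    bank_pin_key = os.urandom(16)                 # should never exist in the clear outside the HSM
    pin_key_token = hsm.import_key(bank_pin_key)

    # The attack: import a key-encrypting key the attacker already knows...
    known_kek = b"\xAA" * 16
    kek_token = hsm.import_key(known_kek)
    # ...then ask the HSM to export the PIN key under it, and unwrap it outside the device.
    leaked = xor(known_kek, hsm.export_key(pin_key_token, kek_token))
    assert leaked == bank_pin_key                 # the PIN key has been recovered in the clear

The flaw here is the missing policy check that an export key must itself be trusted; the real attacks we published exploited analogous gaps between commands, just in much richer APIs.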

But while API security has been a goldmine for security researchers, it’s been an embarrassment for the industry, in which Thales is one of two dominant players. Hence the attempt to close down our mine. As you’d expect, the smaller firms in the industry, such as Utimaco, would prefer HSM APIs to be open (indeed, Utimaco sent two senior people to a Dagstuhl workshop on APIs that we held a couple of months ago). Even more ironically, Thales’s HSM business used to be the Cambridge startup nCipher, which helped our research by giving us samples of their competitors’ products to break.

If this case ever comes to court, the judge might perhaps consider the Lexmark case. Lexmark sued Static Control Components (SCC) for DMCA infringement in order to curtail competition. The court found this abusive and threw out the case. I am not a lawyer, and John Young must clearly take advice. However this particular case of internet censorship serves no public interest (as with previous attempts by the banking industry to censor security research).

Analysis of FileVault 2 (Apple's full disk encryption)

With the launch of Mac OS X 10.7 (Lion), Apple has introduced a volume encryption mechanism known as FileVault 2.

During the past year Joachim Metz, Felix Grobert and I have been analysing this encryption mechanism. We have identified most of the components in FileVault 2’s architecture and we have also built an open source tool that can read volumes encrypted with FileVault 2. This tool can be useful to forensic investigators (who know the encryption password or recovery token) who need to recover some files from an encrypted volume but cannot trust or load the Mac OS that was used to encrypt the data. We have also made an analysis of the security of FileVault 2.

A few weeks ago we made public this paper on ePrint describing our work. The tool to recover data from encrypted volumes is available here.

Three paper Thursday: Shamir x3 at Eurocrypt

For the past 4 days Cambridge has been hosting Eurocrypt 2012.

There were many talks, most of them probably interesting, but I will only comment on three given by Adi Shamir: one during the official conference and two during the rump session.
Among the other sessions, I will just mention that the best paper award went to this paper by Antoine Joux and Vanessa Vitse, for their enhancement of index calculus to break elliptic curves.

Official Talk: Minimalism in cryptography, the Even-Mansour scheme revisited

In this work, Adi et al. presented an analysis of the Even-Mansour scheme:

E(P) = F(P ⊕ K1) ⊕ K2

This scheme, sometimes referred to as key whitening, is used in the DESX construction and in the AES-XTS mode of operation (to give just a few examples).

Adi et al. presented a new slide attack, called SLIDEX, which they used to prove a tight bound on the security of the Even-Mansour scheme.

Even more, they showed that using K1 = K2 achieves the same security.
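
For readers unfamiliar with the construction, here is a minimal Python sketch of Even-Mansour on 64-bit blocks; the public permutation F below is a toy stand-in chosen only so that the code runs.

    MASK = (1 << 64) - 1

    def F(x: int) -> int:
        # Toy public permutation on 64-bit words (an ARX-style round, illustration only)
        for _ in range(4):
            x = ((x << 13) | (x >> 51)) & MASK   # rotate left by 13
            x = (x + 0x9E3779B97F4A7C15) & MASK  # add an odd constant
            x ^= x >> 29                         # xor-shift mixing
        return x

    def even_mansour_encrypt(p: int, k1: int, k2: int) -> int:
        # E(P) = F(P xor K1) xor K2: the whole cipher is two key XORs around a public permutation
        return F(p ^ k1) ^ k2

    c = even_mansour_encrypt(0x0123456789ABCDEF, k1=0x1111, k2=0x2222)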

Rump talk 1: security of multiple key encryption

Here Adi considered the case of encrypting data multiple times with multiple keys, as in 3DES:
data -> c1 = E_k1(data) -> c2 = E_k2(c1) -> c3 = E_k3(c2) -> c4 = E_k4(c3) … and so on.

The general approach to breaking a scheme where a key is used 2 or 3 times (e.g. 2DES, 3DES) is the meet-in-the-middle attack, where you encrypt from one side and decrypt from the other, and by storing a table the size of the key space (say n bits) you can eventually find the keys used, given only a few plaintext/ciphertext pairs. For 2 keys such an attack requires 2^{n} time, and for 3 keys 2^{2n}. Therefore some people may assume that increasing the number of keys by 1 (i.e. to use 4 keys) increases the security of the scheme. This is in fact not true.

Adi showed that once we go beyond 3 keys (e.g. 4, 5, 6, etc.) the security only increases once every few keys. If you think about it, with 4 keys you can just apply the meet-in-the-middle attack in 2^{2n} time to the left 2 encryptions and, also in 2^{2n} time, to the right 2 decryptions. After this, he showed how to use the meet-in-the-middle attack to solve the knapsack problem and proposed the idea of using such an algorithm to solve other problems as well.
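
Here is a toy Python sketch of the meet-in-the-middle attack on double encryption, using an invented 16-bit block cipher so the full key space can be enumerated in a moment: encrypt the known plaintext forwards under every first key, then decrypt the known ciphertext backwards under every second key and look for matches, for roughly 2 x 2^{16} cipher operations instead of 2^{32}.

    def toy_encrypt(key: int, block: int) -> int:
        # A 16-bit toy block cipher: keyed and invertible, not remotely secure
        x = block
        for _ in range(4):
            x = ((x * 0x6255) + key) & 0xFFFF
            x = ((x << 7) | (x >> 9)) & 0xFFFF   # rotate left 7
        return x

    def toy_decrypt(key: int, block: int) -> int:
        inv = pow(0x6255, -1, 0x10000)           # multiplicative inverse mod 2^16
        x = block
        for _ in range(4):
            x = ((x >> 7) | (x << 9)) & 0xFFFF   # rotate right 7
            x = ((x - key) * inv) & 0xFFFF
        return x

    # Double encryption with two independent 16-bit keys, and one known plaintext/ciphertext pair
    k1, k2 = 0x1234, 0xBEEF
    p = 0x0042
    c = toy_encrypt(k2, toy_encrypt(k1, p))

    # Meet in the middle: build a table going forwards, then match going backwards
    middle = {}
    for ka in range(1 << 16):
        middle.setdefault(toy_encrypt(ka, p), []).append(ka)

    candidates = [(ka, kb) for kb in range(1 << 16)
                  for ka in middle.get(toy_decrypt(kb, c), [])]
    assert (k1, k2) in candidates                 # a second known pair would weed out false matches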

Rump talk 2: the cryptography of John Nash

Apparently John Nash, who was at MIT during the 1950s, wrote some letters to the NSA in 1955 explaining the implications of computational complexity for security (ideas that were not known at the time).

Nash also sent a proposal for an encryption scheme that is similar to today’s stream ciphers. However, the NSA replied saying that the scheme didn’t meet the security requirements of the US.
Adi Shamir and Ron Rivest then analysed the scheme and found that, in the known plaintext model, it would require something like 2^{sqrt(n)} time to break (which Nash considered not to be polynomial time, and therefore assumed would make it secure).

The letters are now declassified. This blog also comments on the story.

Three Paper Thursday: full disk encryption

Information is often an important asset, and today’s information is commonly stored as digital data (bytes). We store this data on the local hard disks of our computers and laptops. Many organisations wish to keep the data stored on their computers and laptops confidential. Therefore a natural requirement is that a stolen disk or laptop should not be readable by an external person (an attacker, in general terms). For this reason we use encryption.

A hard disk is commonly organised logically into multiple sections, often referred to as partitions or volumes. These volumes can be used for various purposes, and they are often structured according to a file system format (e.g. NTFS, FAT, HFS). It is possible to have a single disk with 3 volumes, where the first volume is formatted with NTFS and contains a Windows operating system, the second volume is formatted with the EXT3 file system and contains an installation of a Linux distribution, while the third volume is formatted with the FAT file system and only contains data (no operating system).

Volume encryption is a mechanism used to encrypt the contents of an entire volume. This is sometimes referred to as “full disk encryption”, which is misleading, since a physical disk can actually contain multiple volumes, each encrypted independently. However, since the term has become very popular, I will continue to refer to this kind of encryption as “full disk encryption”, but the reader should keep the above distinction in mind.

There are several products that offer full disk encryption, e.g. PGP Whole Disk Encryption, TrueCrypt, Sophos SafeGuard, or Check Point FDE. BitLocker is the full disk encryption integrated with the Windows OS, and Apple has recently introduced FileVault 2 as the full disk encryption of Mac OS X 10.7.

There are several limitations that affect the encryption of an entire disk. These have to do with 3 important aspects, among others: a) encryption must be fast (a user should not notice any extra latency); b) the operating system is encrypted as well (so there must be some way of bootstrapping the decryption process when the computer boots); c) the encryption mechanism should not reduce the available storage space noticeably (that is, we cannot use an extra block of data for every few encrypted blocks).
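
As an illustration of points (a) and (c), here is a hedged Python sketch using the pyca/cryptography library’s AES-XTS mode, the kind of length-preserving, per-sector tweakable encryption that several of these products use; the 512-byte sector size and the keys are made up for the example.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    SECTOR_SIZE = 512                       # a typical disk sector size, chosen for the example
    key = os.urandom(64)                    # AES-256-XTS uses two 256-bit keys concatenated

    def encrypt_sector(sector_number: int, plaintext: bytes) -> bytes:
        # The tweak is derived from the sector number, so identical plaintext in different
        # sectors encrypts differently without storing any per-sector IV or MAC on disk.
        tweak = sector_number.to_bytes(16, "little")
        enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
        return enc.update(plaintext) + enc.finalize()

    ciphertext = encrypt_sector(7, os.urandom(SECTOR_SIZE))
    assert len(ciphertext) == SECTOR_SIZE   # length-preserving: no extra blocks consumed

The price of keeping every sector exactly the same size is that there is no room for an integrity tag, which is one of the trade-offs the papers below discuss.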

The following 3 papers explain in detail these limitations. Two of them relate to currently deployed full disk encryption systems.
