Category Archives: Cryptology

Cryptographic primitives, cryptanalysis

How Certification Systems Fail: Lessons from the Ware Report

Research in the Security Group has uncovered various flaws in systems, despite their having been certified as secure. Sometimes the certification criteria have been inadequate and sometimes the certification process has been subverted. Not only do these failures affect the owners of the system; when evidence of certification comes up in court, the impact can be much wider.

There’s a variety of approaches to certification, ranging from extremely generic (such as Common Criteria) to highly specific (such as EMV), but all are (at least partially) descendants of a report by Willis H. Ware – “Security Controls for Computer Systems”. There’s much that can be learned from this report, particularly the rationale for why certification systems are set up the way they are. The differences between how Ware envisaged certification and how certification is now performed are also informative, whether those differences are for good or for ill.

Along with Mike Bond and Ross Anderson, I have written an article for the “Lost Treasures” edition of IEEE Security & Privacy where we discuss what the Ware report can teach us about how today’s certifications work and should work. In particular, we explore how the failure to follow the recommendations in the Ware report can explain why flaws in certified banking systems were not detected earlier. Our article, “How Certification Systems Fail: Lessons from the Ware Report”, is available open-access in the version submitted to the IEEE. The edited version, as it appears in the print edition (IEEE Security & Privacy, volume 10, issue 6, pages 40–44, Nov–Dec 2012, DOI:10.1109/MSP.2012.89), is only available to IEEE subscribers.

Yet more banking industry censorship

Yesterday, banking security vendor Thales sent this DMCA takedown request to John Young who runs the excellent Cryptome archive. Thales want him to remove an equipment manual that has been online since 2003 and which was valuable raw material in research we did on API security.

Banks use hardware security modules (HSMs) to manage the cryptographic keys and PINs used to authenticate bank card transactions. These were once thought to be secure. But their application programming interfaces (APIs) had become unmanageably complex, and in the early 2000s Mike Bond, Jolyon Clulow and I found that by sending sequences of commands that the machine’s designers hadn’t anticipated, it was often possible to break the device spectacularly. This became a thriving field of security research.

But while API security has been a goldmine for security researchers, it’s been an embarrassment for the industry, in which Thales is one of two dominant players. Hence the attempt to close down our mine. As you’d expect, the smaller firms in the industry, such as Utimaco, would prefer HSM APIs to be open (indeed, Utimaco sent two senior people to a Dagstuhl workshop on APIs that we held a couple of months ago). Even more ironically, Thales’s HSM business used to be the Cambridge startup nCipher, which helped our research by giving us samples of their competitors’ products to break.

If this case ever comes to court, the judge might perhaps consider the Lexmark case. Lexmark sued Static Control Components (SCC) for DMCA infringement in order to curtail competition. The court found this abusive and threw out the case. I am not a lawyer, and John Young must clearly take advice. However this particular case of internet censorship serves no public interest (as with previous attempts by the banking industry to censor security research).

Analysis of FileVault 2 (Apple's full disk encryption)

With the launch of Mac OS X 10.7 (Lion), Apple has introduced a volume encryption mechanism known as FileVault 2.

During the past year Joachim Metz, Felix Grobert and I have been analysing this encryption mechanism. We have identified most of the components in FileVault 2’s architecture and we have also built an open source tool that can read volumes encrypted with FileVault 2. This tool can be useful to forensic investigators (who know the encryption password or recovery token) who need to recover files from an encrypted volume but cannot trust or load the Mac OS installation that was used to encrypt the data. We have also analysed the security of FileVault 2.

A few weeks ago we made public this paper on ePrint describing our work. The tool to recover data from encrypted volumes is available here.

Three paper Thursday: Shamir x3 at Eurocrypt

For the past 4 days Cambridge has been hosting Eurocrypt 2012.

There were many interesting talks, but I will only comment on three given by Adi Shamir: one during the official conference and two during the rump session.
Among the other sessions, I will just mention that the best paper award went to this paper by Antoine Joux and Vanessa Vitse, for their enhancement of index calculus to break elliptic curves.

Official Talk: Minimalism in cryptography, the Even-Mansour scheme revisited

In this work, Adi et al. presented an analysis of the Even-Mansour scheme:

E(P) = F(P ⊕ K1) ⊕ K2

This scheme, sometimes referred to as key whitening, is used in the DESX construction and in the AES-XTS mode of operation (to give just a few examples).

Adi et al. presented a new slide attack, called SLIDEX, which they used to prove a tight bound on the security of the Even-Mansour scheme.

Moreover, they showed that using K1 = K2 achieves the same security.
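To make the construction concrete, here is a minimal sketch in Python. The public permutation F is modelled by a single AES block encryption under a fixed, publicly known key – an assumption made purely for illustration (using the cryptography package); any fixed public permutation would do. The only secret material is the whitening keys K1 and K2:

```python
# Minimal sketch of the Even-Mansour construction E(P) = F(P xor K1) xor K2.
# F is modelled here by AES-128 under a fixed, publicly known key; this choice
# is only illustrative -- any public permutation on 16-byte blocks would do.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

PUBLIC_F_KEY = bytes(16)  # fixed and public: F by itself hides nothing

def F(block: bytes) -> bytes:
    """The public permutation: one AES block encryption under a known key."""
    enc = Cipher(algorithms.AES(PUBLIC_F_KEY), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def even_mansour_encrypt(p: bytes, k1: bytes, k2: bytes) -> bytes:
    """E(P) = F(P xor K1) xor K2 -- the two-key Even-Mansour scheme."""
    return xor(F(xor(p, k1)), k2)

k1, k2 = os.urandom(16), os.urandom(16)
c = even_mansour_encrypt(b"sixteen byte msg", k1, k2)
# The single-key variant discussed in the talk simply sets K2 = K1:
c_single = even_mansour_encrypt(b"sixteen byte msg", k1, k1)
```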

Rump talk 1: security of multiple key encryption

Here Adi considered the case of encrypting data multiple times with multiple keys, as in 3DES:
data -> c1 = E_k1(data) -> c2 = E_k2(c1) -> c3 = E_k3(c2) -> c4 = E_k4(c3), and so on.

The general approach to breaking a scheme where a key is used two or three times (e.g. 2DES, 3DES) is the meet-in-the-middle attack: you encrypt from one side and decrypt from the other, and by storing a table the size of the key space (for n-bit keys, 2^n entries) you can eventually recover the keys using only a few plaintext/ciphertext pairs. For 2 keys such an attack requires about 2^{n} time, and for 3 keys about 2^{2n}. One might therefore assume that adding one more key (i.e. using 4 keys) increases the security of the scheme. This is in fact not true.

Adi showed that once we go beyond 3 keys (4, 5, 6, etc.) the security only increases once every few additional keys. If you think about it, with 4 keys you can simply apply the meet-in-the-middle attack in 2^{2n} time to the left two encryptions and, also in 2^{2n} time, to the right two decryptions. He then showed how to use the meet-in-the-middle attack to solve the knapsack problem, and proposed using such an algorithm to solve other problems as well.
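To illustrate the meet-in-the-middle idea, here is a toy sketch in Python. The 16-bit “block cipher” below is invented purely so that its key space can be enumerated in a fraction of a second; the structure of the attack on the resulting double encryption is the same as against 2DES:

```python
# Toy meet-in-the-middle attack on double encryption. The cipher is a
# deliberately weak 16-bit toy (not a real cipher), chosen only so its 2^16
# key space can be enumerated instantly.

def toy_encrypt(block: int, key: int) -> int:
    """A toy 16-bit 'block cipher': a few rounds of add/rotate/xor."""
    x = block
    for r in range(4):
        x = (x + key) & 0xFFFF
        x = ((x << 3) | (x >> 13)) & 0xFFFF
        x ^= (key * (r + 1)) & 0xFFFF
    return x

def toy_decrypt(block: int, key: int) -> int:
    x = block
    for r in reversed(range(4)):
        x ^= (key * (r + 1)) & 0xFFFF
        x = ((x >> 3) | (x << 13)) & 0xFFFF
        x = (x - key) & 0xFFFF
    return x

def meet_in_the_middle(plaintext: int, ciphertext: int):
    """Recover (k1, k2) with c = E_k2(E_k1(p)) in ~2*2^16 work, not 2^32."""
    # Forward table: middle value -> candidate k1 values.
    table = {}
    for k1 in range(1 << 16):
        table.setdefault(toy_encrypt(plaintext, k1), []).append(k1)
    # Decrypt backwards under every k2 and look for a matching middle value.
    matches = []
    for k2 in range(1 << 16):
        mid = toy_decrypt(ciphertext, k2)
        for k1 in table.get(mid, []):
            matches.append((k1, k2))
    return matches  # filter further with a second plaintext/ciphertext pair

p, (k1, k2) = 0x1234, (0xBEEF, 0xCAFE)
c = toy_encrypt(toy_encrypt(p, k1), k2)
print((k1, k2) in meet_in_the_middle(p, c))  # True
```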

Rump talk 2: the cryptography of John Nash

Apparently John Nash, who was at MIT during the 1950s, wrote some letters to the NSA in 1955 explaining the implications of computational complexity for security (ideas that were not publicly known at the time).

John Nash also sent a proposal for an encryption scheme similar to today’s stream ciphers. However, the NSA replied saying that the scheme didn’t meet the security requirements of the US.
Adi Shamir and Ron Rivest then analysed the scheme and found that, in the known-plaintext model, it would take something like 2^{sqrt(n)} time to break (which Nash considered not to be polynomial time, and therefore assumed would make the scheme secure).

The letters are now declassified. This blog also comments on the story.

Three Paper Thursday: full disk encryption

Information is often an important asset, and today’s information is commonly stored as digital data (bytes). We store this data on our computers’ local hard disks and on our laptops’ disks. Many organisations wish to keep the data stored in their computers and laptops confidential. Therefore a natural requirement is that a stolen disk or laptop should not be readable by an external person (an attacker in general terms). For this reason we use encryption.

A hard disk is commonly logically organised in multiple sections, often referred to as either partitions or volumes. These volumes can be used for various purposes, and they are often structured according to a file system format (e.g. NTFS, FAT or HFS). It is possible to have a single disk with three volumes, where the first volume is formatted with NTFS and contains a Windows operating system, the second is formatted with the ext3 file system and contains a Linux distribution, while the third is formatted with a FAT file system and contains only data (no operating system).

Volume encryption is a mechanism used to encrypt the contents of an entire volume. This is sometimes referred to as “full disk encryption”, which is misleading, since a physical disk can actually contain multiple volumes, each encrypted independently. However, since the term has become very popular, I will continue to refer to this kind of encryption as “full disk encryption”, but the reader should keep the above distinction in mind.

There are several products that offer full disk encryption, e.g. PGP Whole Disk Encryption, TrueCrypt, Sophos SafeGuard, or Check Point FDE. BitLocker is the full disk encryption integrated with Windows, and Apple has recently introduced FileVault 2 as the full disk encryption in Mac OS X 10.7.

There are several limitations that affect the encryption of an entire disk. These have to do with three important aspects, among others: a) encryption must be fast (a user should not notice any extra latency); b) the operating system is encrypted as well, so there must be some way of bootstrapping the decryption process when the computer boots; c) the encryption mechanism should not noticeably reduce the available storage space (that is, we cannot use an extra block of data for every few encrypted blocks).
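As a small illustration of constraint (c), here is a sketch of length-preserving, per-sector encryption in the style most volume encryption products use. It assumes the Python cryptography package, AES in XTS mode and a 512-byte sector size, with the tweak derived from the sector number, so the ciphertext occupies exactly as much space as the plaintext:

```python
# Sketch of length-preserving per-sector encryption: AES-XTS with the tweak
# derived from the sector number. Sector size and key handling are simplified
# assumptions for illustration.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512  # bytes; a simplifying assumption

def encrypt_sector(key: bytes, sector_number: int, plaintext: bytes) -> bytes:
    """Encrypt one sector. The ciphertext is exactly as long as the plaintext,
    so no extra storage is consumed; the per-sector tweak makes identical
    sectors encrypt differently without storing an IV anywhere."""
    assert len(plaintext) == SECTOR_SIZE
    tweak = sector_number.to_bytes(16, "little")
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt_sector(key: bytes, sector_number: int, ciphertext: bytes) -> bytes:
    tweak = sector_number.to_bytes(16, "little")
    dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

key = os.urandom(64)             # XTS uses two AES keys; 64 bytes = AES-256-XTS
sector = os.urandom(SECTOR_SIZE)
assert decrypt_sector(key, 7, encrypt_sector(key, 7, sector)) == sector
```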

The following 3 papers explain in detail these limitations. Two of them relate to currently deployed full disk encryption systems.

Continue reading Three Paper Thursday: full disk encryption

New attacks on HMQV

Many people may still remember the debates a few years ago about the HMQV protocol, a modification of MQV with the primary aim of provable security. Various attacks were later discovered against the original HMQV. In the subsequent submission to the IEEE P1363 standards, the HMQV protocol was revised to address the reported weaknesses.

However, the revised HMQV protocol is still vulnerable. In a paper that I presented at Financial Cryptography ’10, I described two new attacks. The first presents a counterexample that invalidates the basic authentication feature of the protocol. The second applies more generally to other key exchange protocols, even those with formal security proofs.

The first attack is particularly concerning, since the formal security proofs failed to detect this basic flaw. The HMQV protocol explicitly specifies that the Certificate Authority (CA) does not need to validate the public key beyond checking that it is not zero. (This is one reason why HMQV claims to be more efficient than MQV.) So the protocol allows the CA to certify a small subgroup element as the user’s “public key”. Then, anyone who knows this “public key” can successfully pass authentication using HMQV (see the paper for details). Note that in this case a private key doesn’t exist, yet the authentication succeeds. What, then, is “authentication” in HMQV based on?
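To see what the omitted check amounts to, here is a small sketch of standard public key validation in a prime-order subgroup of Z_p* (the parameter names and the tiny toy parameters are generic illustrations, not taken from the HMQV specification). A CA performing this check would reject a small subgroup element, which has no discrete logarithm with respect to the generator and hence no corresponding private key:

```python
# Sketch of the public key validation that the HMQV specification allows a CA
# to skip. Setting: the order-q subgroup of Z_p*, generated by g; parameter
# names are generic and the toy parameters below are purely illustrative.

def validate_public_key(A: int, p: int, q: int) -> bool:
    """Standard checks: A is a non-trivial group element of order q."""
    if not (1 < A < p):          # excludes 0, 1 and out-of-range encodings
        return False
    if pow(A, q, p) != 1:        # A must lie in the prime-order-q subgroup
        return False
    return True

p, q = 1019, 509                 # toy: p = 2q + 1, q prime; real sizes are far larger
g = 4                            # generates the order-q subgroup of Z_p*

small_order_element = p - 1      # has order 2, so it is NOT of the form g^x mod p

# HMQV's CA check (non-zero only) would accept it...
assert small_order_element != 0
# ...whereas standard validation rejects it:
assert validate_public_key(small_order_element, p, q) is False
assert validate_public_key(pow(g, 123, p), p, q) is True
```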

The HMQV author acknowledges this attack, but states that it has no bad effects. I disagree, but will leave that for the reader to decide.

Updates:

  • 2010-03-11: Full version of the paper available here
  • 2010-04-04: My comments on Tang’s paper.

How to vote anonymously under ubiquitous surveillance

In 2006, the Chancellor proposed to invade an enemy planet, but his motion was anonymously vetoed. Three years on, he still cannot find out who did it.

This time, the Chancellor is seeking re-election in the Galactic Senate. Some delegates don’t want to vote for him, but worry about his revenge. How can an election be arranged so that the voters’ privacy is best protected?

The environment is extremely adverse. Surveillance is everywhere. Anything you say will be recorded and traceable to you. All communication is essentially public. In addition, you have no one to trust but yourself.

It may seem mind-boggling that this problem is solvable in the first place. But with cryptography, anything is possible. In a forthcoming paper to be published in IET Information Security, we (in joint work with Peter Ryan and Piotr Zielinski) describe a decentralized voting protocol called the “Open Vote Network”.

In the Open Vote Network protocol, all communication data is open and publicly verifiable. The protocol provides the maximum protection of the voters’ privacy: only a full collusion can break it. In addition, the protocol is exceptionally efficient. It compares favorably with past solutions in terms of round efficiency, computational load and bandwidth usage, and is close to the best possible in each of these respects.

It seems unlikely that a decentralized voting scheme with the same security properties could be significantly more efficient than ours. However, in cryptography nothing is ever provably optimal, so we leave this question open.

A preprint of the paper is available here, and the slides here.

Defending against wedge attacks in Chip & PIN

The EMV standard, which is behind Chip & PIN, is not so much a protocol as a toolkit from which protocols can be built. One component it offers is card authentication, which allows the terminal to discover whether a card is legitimate, without having to go online and contact the bank which issued it. Since the deployment of Chip & PIN, cards issued in the UK only offer SDA (static data authentication) for card authentication. Here, the card contains a digital signature over a selection of data records (e.g. card number, expiry date, etc). This digital signature is generated by the issuing bank, and the bank’s public key is, in turn, signed by the payment scheme (e.g. Visa or MasterCard).
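As a schematic illustration of this chain of signatures, the sketch below uses ordinary PKCS#1 v1.5 RSA signatures from the Python cryptography package in place of the EMV-specific signature format, and reduces certificate handling to verifying raw byte strings; it is not the real EMV verification procedure:

```python
# Schematic verification of the two-level SDA signature chain. Simplified for
# illustration: ordinary PKCS#1 v1.5 signatures stand in for the EMV-specific
# format, and certificates are reduced to "verify a blob".
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_sda(scheme_pub, issuer_pub, issuer_key_sig, records, records_sig):
    # 1. The payment scheme's key certifies the issuing bank's public key.
    issuer_pub_bytes = issuer_pub.public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    scheme_pub.verify(issuer_key_sig, issuer_pub_bytes,
                      padding.PKCS1v15(), hashes.SHA256())
    # 2. The bank's key certifies the static card records (card number,
    #    expiry date, ...). This signature is the same for every transaction,
    #    so it can be read from the card once and replayed by a clone.
    issuer_pub.verify(records_sig, records,
                      padding.PKCS1v15(), hashes.SHA256())
    return True  # verify() raises InvalidSignature if either check fails
```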

The transaction process for an SDA card goes roughly as follows:

Card authentication:
Card → Terminal: records, sigBANK{records}

Cardholder verification:
Terminal → Card: PIN entered
Card → Terminal: PIN OK

Transaction authorization:
Terminal → Card: transaction, nonce
Card → Terminal: MAC{transaction, nonce, PIN OK}

Some things to note here:

  • The card contains a static digital signature, which anyone can read and replay
  • The response to PIN verification is not authenticated

This means that anyone who has brief access to the card can read the digital signature and create a clone which will pass card authentication. Moreover, the fraudster doesn’t need to know the customer’s PIN in order to use the clone, because it can simply return “yes” to any PIN and the terminal will be satisfied. Such clones (so-called “yes cards”) have been produced by security consultants as a demo, and have also been found in the wild.
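The behaviour of such a clone is easy to sketch. The toy simulation below uses entirely hypothetical message formats, purely to illustrate the two weaknesses listed above: the clone replays the copied static signature and answers “PIN OK” to whatever PIN is entered:

```python
# Toy simulation of a "yes card". The message formats are entirely
# hypothetical and serve only to illustrate the two weaknesses listed above.

class YesCard:
    def __init__(self, copied_records: bytes, copied_signature: bytes):
        # Cloned from a genuine card: both values are static, so a brief
        # read of the real card is enough to obtain them.
        self.records = copied_records
        self.signature = copied_signature

    def card_authentication(self):
        # Replaying the static signature passes SDA on any terminal.
        return self.records, self.signature

    def verify_pin(self, pin: str) -> str:
        # The response is not authenticated, so the clone simply says yes.
        return "PIN OK"

    def authorise(self, transaction: bytes, nonce: bytes) -> bytes:
        # The clone cannot compute a valid MAC (it lacks the card's secret
        # key), but the terminal cannot check the MAC either; only the
        # issuer can, and for offline transactions that happens too late.
        return b"\x00" * 8

clone = YesCard(copied_records=b"<card number, expiry date, ...>",
                copied_signature=b"<static signature read from real card>")
records, sig = clone.card_authentication()   # passes SDA
assert clone.verify_pin("0000") == "PIN OK"  # any PIN is accepted
```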

Continue reading Defending against wedge attacks in Chip & PIN

When Layers of Abstraction Don’t Get Along: The Difficulty of Fixing Cache Side-Channel Vulnerabilities

(co-authored with Robert Watson)

Recently, our group was treated to a presentation by Ruby Lee of Princeton University, who discussed novel cache architectures which can prevent some cache-based side channel attacks against AES and RSA. The new architecture was fascinating, in particular because it may actually increase cache performance (though this point was spiritedly debated by several systems researchers in attendance). For the security group, though, it raised two interesting and troubling questions. What is the proper defence against side-channels due to processor cache? And why hasn’t it been implemented despite these attacks being around for years?

Continue reading When Layers of Abstraction Don’t Get Along: The Difficulty of Fixing Cache Side-Channel Vulnerabilities