Category Archives: Cryptology

Cryptographic primitives, cryptanalysis

New attacks on HMQV

Many people may still remember the debates a few years ago about the HMQV protocol, a modification of MQV with the primary aim of provable security. Various attacks were later discovered for the original HMQV. In the subsequent submission to the IEEE P1363 standards, the HMQV protocol has been revised to address the reported weaknesses.

However, the revised HMQV protocol is still vulnerable. In a paper that I presented at Financial Cryptography ’10, I described two new attacks. The first presents a counterexample that invalidates the basic authentication feature of the protocol. The second applies more generally to other key exchange protocols, even though many of them come with formal security proofs.

The first attack is particularly concerning, since the formal security proofs failed to detect this basic flaw. The HMQV protocol explicitly specifies that the Certificate Authority (CA) does not need to validate the public key beyond checking that it is not zero. (This is one reason why HMQV claims to be more efficient than MQV.) So the protocol allows the CA to certify a small subgroup element as the user’s “public key”. Then, anyone who knows this “public key” can successfully pass authentication using HMQV (see the paper for details). Note that in this case a private key doesn’t even exist, yet the authentication succeeds. What, then, is the “authentication” in HMQV based on?
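To make the omission concrete, here is a toy sketch (my own illustration, not code from the paper or from the HMQV submission) of the standard subgroup-membership check that the HMQV specification allows the CA to skip. The group parameters are deliberately tiny demonstration values.

    # Toy public-key validation: the kind of test a CA (or the protocol itself)
    # could apply before certifying a public key. Tiny demo parameters, not a
    # group HMQV would actually use.

    p = 1019          # prime modulus; p - 1 = 2 * 509
    q = 509           # prime order of the intended subgroup
    g = 4             # generator of the order-q subgroup: 2^((p-1)//q) mod p

    def is_valid_public_key(y: int) -> bool:
        """Accept y only if it is a non-identity element of the order-q subgroup."""
        if not (1 < y < p):          # rules out 0, 1 and out-of-range values
            return False
        return pow(y, q, p) == 1     # subgroup-membership test

    honest_key = pow(g, 123, p)      # a genuine public key g^x mod p
    rogue_key = p - 1                # order-2 element: a "public key" with no private key in <g>

    print(is_valid_public_key(honest_key))  # True
    print(is_valid_public_key(rogue_key))   # False: the check the HMQV spec skips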

The HMQV author acknowledges this attack, but states that it has no bad effects. I disagree, but that is for the reader to decide.

Updates:

  • 2010-03-11: Full version of the paper available here
  • 2010-04-04: My comments on Tang’s paper.

How to vote anonymously under ubiquitous surveillance

In 2006, the Chancellor proposed to invade an enemy planet, but his motion was anonymously vetoed. Three years on, he still cannot find out who did it.

This time, the Chancellor is seeking re-election in the Galactic Senate. Some delegates don’t want to vote for him, but worry about his revenge. How to arrange an election such that the voter’s privacy will be best protected?

The environment is extremely adverse. Surveillance is everywhere. Anything you say will be recorded and traceable to you. All communication is essentially public. In addition, you have no one to trust but yourself.

It may seem mind-boggling that this problem is solvable in the first place. With cryptography, anything is possible. In a forthcoming paper to be published by IET Information Security, we (joint work with Peter Ryan and Piotr Zielinski) describe a decentralized voting protocol called the “Open Vote Network”.

In the Open Vote Network protocol, all communication data is open and publicly verifiable. The protocol provides the maximum protection of the voter’s privacy: only a full collusion of all the other voters can break an individual’s privacy. In addition, the protocol is exceptionally efficient. It compares favorably to past solutions in terms of round efficiency, computation load and bandwidth usage, and is close to the best possible in each of these aspects.

With the same security properties, it seems unlikely that a decentralized voting scheme could be significantly more efficient than ours. However, in cryptography nothing is ever proved optimal, so we leave this as an open question.
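For readers who want a feel for how a self-tallying election can work at all, here is a stripped-down sketch of the two-round structure described in the paper. The zero-knowledge proofs that make the real Open Vote Network publicly verifiable are omitted, and the group parameters are toy values, so this illustrates the arithmetic only and is not a usable implementation.

    # Toy self-tallying vote: round structure as in the Open Vote Network paper,
    # but with the zero-knowledge proofs omitted and tiny demo parameters.
    import random

    p, q, g = 1019, 509, 4          # toy group: g generates the order-q subgroup of Z_p*

    votes = [1, 0, 1, 1, 0]         # each voter's secret choice (1 = yes, 0 = no)
    n = len(votes)

    # Round 1: each voter i picks a secret x_i and broadcasts g^x_i
    # (plus a proof of knowledge of x_i in the real protocol).
    x = [random.randrange(1, q) for _ in range(n)]
    gx = [pow(g, xi, p) for xi in x]

    def g_y(i):
        """g^{y_i} = (prod_{j<i} g^{x_j}) / (prod_{j>i} g^{x_j}) mod p."""
        num = 1
        for j in range(i):
            num = num * gx[j] % p
        den = 1
        for j in range(i + 1, n):
            den = den * gx[j] % p
        return num * pow(den, -1, p) % p

    # Round 2: each voter broadcasts g^{x_i y_i} * g^{v_i}
    # (with a proof that v_i is 0 or 1 in the real protocol).
    ballots = [pow(g_y(i), x[i], p) * pow(g, votes[i], p) % p for i in range(n)]

    # Tally: sum_i x_i y_i = 0, so the product of all ballots equals g^{sum v_i}.
    product = 1
    for b in ballots:
        product = product * b % p
    tally = next(t for t in range(n + 1) if pow(g, t, p) == product)
    print(tally)   # 3 yes-votes, recovered without learning any individual vote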

A preprint of the paper is available here, and the slides here.

Defending against wedge attacks in Chip & PIN

The EMV standard, which is behind Chip & PIN, is not so much a protocol as a toolkit from which protocols can be built. One component it offers is card authentication, which allows the terminal to discover whether a card is legitimate, without having to go online and contact the bank which issued it. Since the deployment of Chip & PIN, cards issued in the UK have only offered SDA (static data authentication) for card authentication. Here, the card contains a digital signature over a selection of data records (e.g. card number, expiry date, etc.). This digital signature is generated by the issuing bank, and the bank’s public key is, in turn, signed by the payment scheme (e.g. Visa or MasterCard).

The transaction process for an SDA card goes roughly as follows:

Card auth.         Card → Terminal: records, sig_BANK{records}
Cardholder verif.  Terminal → Card: PIN entered
                   Card → Terminal: PIN OK
Transaction auth.  Terminal → Card: transaction, nonce
                   Card → Terminal: MAC{transaction, nonce, PIN OK}

Some things to note here:

  • The card contains a static digital signature, which anyone can read and replay
  • The response to PIN verification is not authenticated

This means that anyone who has brief access to the card can read the digital signature and create a clone which will pass card authentication. Moreover, the fraudster doesn’t need to know the customer’s PIN in order to use the clone, because the clone can simply return “yes” to any PIN and the terminal will be satisfied. Such clones (so-called “yes cards”) have been produced by security consultants as a demonstration, and have also been found in the wild.
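The following toy simulation (my own illustration; the message formats and the signature primitive are simplified stand-ins, not real EMV) shows why those two observations are enough to build a working clone.

    # Toy illustration of an SDA "yes card": the static signature replays, and the
    # unauthenticated PIN response can simply lie. Not real EMV message formats.
    import hashlib, hmac

    BANK_KEY = b"issuer-signing-key"   # stands in for the issuer's RSA signing key

    def bank_sign(records: bytes) -> bytes:
        # Stand-in for the issuer's digital signature over the card's data records;
        # a real terminal would verify an RSA signature with the scheme/bank public keys.
        return hmac.new(BANK_KEY, records, hashlib.sha256).digest()

    class GenuineCard:
        def __init__(self, records, pin):
            self.records, self.pin = records, pin
            self.signature = bank_sign(records)    # static: anyone can read it
        def card_auth(self):
            return self.records, self.signature
        def verify_pin(self, pin):
            return pin == self.pin                 # response is not authenticated

    class YesCard:
        """Clone built from data read off a genuine card."""
        def __init__(self, records, signature):
            self.records, self.signature = records, signature
        def card_auth(self):
            return self.records, self.signature    # replayed static signature
        def verify_pin(self, pin):
            return True                            # says "yes" to any PIN

    def terminal_offline_checks(card, entered_pin):
        records, sig = card.card_auth()
        return sig == bank_sign(records) and card.verify_pin(entered_pin)

    real = GenuineCard(b"PAN=5301...;EXPIRY=12/27", pin="1234")
    clone = YesCard(*real.card_auth())             # brief access to the card suffices

    print(terminal_offline_checks(real, "1234"))   # True
    print(terminal_offline_checks(clone, "0000"))  # True: the clone is accepted offline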

Continue reading Defending against wedge attacks in Chip & PIN

When Layers of Abstraction Don’t Get Along: The Difficulty of Fixing Cache Side-Channel Vulnerabilities

(co-authored with Robert Watson)

Recently, our group was treated to a presentation by Ruby Lee of Princeton University, who discussed novel cache architectures which can prevent some cache-based side channel attacks against AES and RSA. The new architecture was fascinating, in particular because it may actually increase cache performance (though this point was spiritedly debated by several systems researchers in attendance). For the security group, though, it raised two interesting and troubling questions. What is the proper defence against side-channels due to processor cache? And why hasn’t it been implemented despite these attacks being around for years?

Continue reading When Layers of Abstraction Don’t Get Along: The Difficulty of Fixing Cache Side-Channel Vulnerabilities

Variable Length Fields in Cryptographic Protocols

Many crypto protocols contain variable length fields: the names of the participants, different sizes of public key, and so on.

In my previous post, I mentioned how Liqun Chen has (re)discovered an attack: many protocols are broken if you don’t include the field lengths in MAC or signature computations (and, more to the point, a number of ISO standards fail to warn the implementor about this issue).

The problem applies to confidentiality, as well as integrity.

Many protocol verification tools (ProVerif, for example) will assume that the attacker is unable to distinguish enc(m1, k, iv) from enc(m2, k, iv) if they don’t know k.

If m1 and m2 are of different lengths, this may not be true: the length of the ciphertext leaks information about the length of the plaintext. With Cipher Block Chaining, you can tell the length of the plaintext to the nearest block, and with stream ciphers you can tell the exact length. So you can have protocols that are “proved” correct but are still broken, because the idealized protocol doesn’t properly represent what the implementation is really doing.
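The leak is easy to demonstrate. The sketch below is just an illustration and assumes the pyca/cryptography package is available; it compares ciphertext lengths under AES-CBC (with PKCS7 padding) and the ChaCha20 stream cipher.

    # Ciphertext length vs plaintext length: CBC leaks it to the nearest block,
    # a stream cipher (ChaCha20 here) leaks it exactly.
    # Sketch assuming the pyca/cryptography package.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives import padding

    def cbc_len(plaintext: bytes) -> int:
        key, iv = os.urandom(16), os.urandom(16)
        padder = padding.PKCS7(128).padder()
        padded = padder.update(plaintext) + padder.finalize()
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return len(enc.update(padded) + enc.finalize())

    def stream_len(plaintext: bytes) -> int:
        key, nonce = os.urandom(32), os.urandom(16)
        enc = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
        return len(enc.update(plaintext) + enc.finalize())

    for name in (b"Bob", b"Bob of the planet Tatooine"):
        print(len(name), cbc_len(name), stream_len(name))
    # CBC lengths only distinguish plaintexts that fall in different 16-byte blocks;
    # the stream-cipher ciphertext gives the exact plaintext length away.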

If you want different plaintexts to be observationally equivalent to the attacker, you can pad the variable-length fields to a fixed length before encrypting. But if there is a great deal of variation in length, this may be a very wasteful thing to do.

The alternative approach is to change your idealization of the protocol to reflect the reality of your encryption primitive. If your implementation sends m encrypted under a stream cipher, you can idealize it as sending an encrypted version of m together with L_m (the length of m) in the clear.

Hidden Assumptions in Cryptographic Protocols

At the end of last week, Microsoft Research hosted a meeting of “Cryptoforma”, a proposed new project (a so-called “network of excellence”) to bring together researchers working on applying formal methods to security. They don’t yet know whether or not this project will get funding from the EPSRC, but I wish them good luck.

There were several very interesting papers presented at the meeting, but today I want to talk about the one by Liqun Chen, “Parsing ambiguities in authentication and key establishment protocols”.

Some of the protocol specifications published by ISO specify how the protocol should be encoded on the wire, in sufficient detail to enable different implementations to interoperate. An example of a standard of this type is the one for the public key certificates that are used in SSL authentication of web sites (and many other applications).

The security standards produced by one group within ISO (SC27) aren’t like that. They specify the abstract protocols, but give the implementor considerable leeway in how they are encoded. This means that you can have different implementations that don’t interoperate. If these implementations are in different application domains, the lack of interoperability doesn’t matter. For example, Tuomas Aura and I recently wrote a paper in which we presented a protocol for privacy-preserving wireless LAN authentication, which we rather boldly claim to be based on the abstract protocol from ISO 9798-4.

You could think of these standards as separating concerns: the SC27 folks get the abstract crypto protocol correct, and then someone else standardises how to encode it in a particular application. But does the choice of concrete encoding affect the protocol’s correctness?

Liqun Chen points out one case where it clearly does. In the abstract protocols in ISO 9798-4 and others, data fields are joined by a double vertical bar operator. If you want to find out what that double vertical bar really means, you have to spend another 66 Swiss Francs and get a copy of ISO 9798-1, which tells you that Y || Z means “the result of the concatenation of the data items Y and Z in that order”.

Oops.

When we specify abstract protocols, it’s generally understood that the concrete encoding that gets signed or MAC’d contains enough information to unambiguously identify the field boundaries: it contains length fields, a closing XML tag, or whatever. A signed message {Payee, Amount} K_A should not allow a payment of $3 to Bob12 to be mutated by the attacker into a payment of $23 to Bob1. But ISO 9798 (and a bunch of others) don’t say that. There’s nothing that says a conforming implementation can’t send the length field without authentication.

Now of course, an implementor probably wouldn’t do that. But they might.
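If an implementation did authenticate only the bare concatenation, the $3-to-Bob12 mutation described above takes just a few lines to reproduce, and the fix is equally short: make the authenticated encoding self-delimiting, for example by length-prefixing each field. The sketch below is my own illustration of the general point, not an encoding taken from any of the standards discussed.

    # Naive concatenation makes the MAC ambiguous; a self-delimiting encoding fixes it.
    import hmac, hashlib

    KEY = b"shared-mac-key"

    def mac_concat(payee: bytes, amount: bytes) -> bytes:
        return hmac.new(KEY, payee + amount, hashlib.sha256).digest()

    def mac_length_prefixed(payee: bytes, amount: bytes) -> bytes:
        encoded = (len(payee).to_bytes(2, "big") + payee +
                   len(amount).to_bytes(2, "big") + amount)
        return hmac.new(KEY, encoded, hashlib.sha256).digest()

    # $3 to Bob12 vs $23 to Bob1: the same bytes once concatenated, so the same MAC.
    print(mac_concat(b"Bob12", b"3") == mac_concat(b"Bob1", b"23"))                    # True
    print(mac_length_prefixed(b"Bob12", b"3") == mac_length_prefixed(b"Bob1", b"23"))  # False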

More generally: do these abstract protocols make a bunch of implicit, undocumented assumptions about the underlying crypto primitives and encodings that might turn out not to be true?

See also: Boyd, C. Hidden assumptions in cryptographic protocols. Computers and Digital Techniques, volume 137, issue 6, November 1990.

J-PAKE: From Dining Cryptographers to Jugglers

Password Authenticated Key Exchange (PAKE) is one of the central topics in cryptography. It aims to address a practical security problem: how to establish secure communication between two parties solely based on their shared password without requiring a Public Key Infrastructure (PKI).

The solution to the above problem is very useful in practice; in fact, so useful that it has spawned a lot of “fights” over patents. Many techniques were patented, including the well-known Encrypted Key Exchange (EKE) and Simple Password Exponential Key Exchange (SPEKE). A secondary problem is technical: both the EKE and SPEKE protocols have subtle but worrying technical limitations (see the paper for details).

At the 16th Workshop on Security Protocols, held in April 2008 in Cambridge, UK, I presented a new solution (joint work with Peter Ryan) called Password Authenticated Key Exchange by Juggling (or J-PAKE). The protocol design builds on earlier work on solving the Dining Cryptographers problem: we adapted the same juggling technique to the two-party case to solve the PAKE problem. To the best of our knowledge, this design is significantly different from all past PAKE solutions.

Intuitively, the J-PAKE protocol works like a juggling game between two people, if we regard a public key as a “ball”. In round one, each person throws two ephemeral public keys (“balls”) to the other. In round two, each person combines the available public keys with the password to form a new public key, and throws this new “ball” to the other.

After round two, the two parties can securely compute a common session key if they supplied the same password. Otherwise, the protocol leaks nothing more than: “the supplied passwords at the two sides are not the same”. In other words, one can prove knowledge of the password without revealing it. A Java implementation of the protocol on a MacBook Pro laptop shows that the total computation time at each side is merely 75 ms.
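For the curious, here is a stripped-down sketch of the two rounds in code. It uses toy group parameters, omits the Schnorr zero-knowledge proofs that accompany every key sent in the real protocol, maps the password to an exponent naively, and prints the raw shared group element rather than a properly derived session key, so it illustrates the juggling only.

    # Stripped-down J-PAKE round structure: toy group, no zero-knowledge proofs,
    # password mapped naively to an exponent. Illustration only.
    import hashlib, random

    p, q, g = 1019, 509, 4          # toy group: g generates the order-q subgroup of Z_p*

    def to_exponent(password: str) -> int:
        # Map the shared password into a non-zero exponent mod q (a simplification).
        return int.from_bytes(hashlib.sha256(password.encode()).digest(), "big") % (q - 1) + 1

    def jpake(password_alice: str, password_bob: str):
        sA, sB = to_exponent(password_alice), to_exponent(password_bob)

        # Round 1: each side throws two ephemeral public keys ("balls") to the other.
        x1, x2 = random.randrange(q), random.randrange(1, q)   # Alice (x2 non-zero)
        x3, x4 = random.randrange(q), random.randrange(1, q)   # Bob   (x4 non-zero)
        gx1, gx2, gx3, gx4 = (pow(g, e, p) for e in (x1, x2, x3, x4))

        # Round 2: combine the received keys with the password and throw the new "ball".
        A = pow(gx1 * gx3 * gx4 % p, x2 * sA % q, p)           # Alice -> Bob
        B = pow(gx1 * gx2 * gx3 % p, x4 * sB % q, p)           # Bob -> Alice

        # Both sides arrive at g^{(x1+x3)*x2*x4*s} if and only if the passwords match.
        kA = pow(B * pow(gx4, -x2 * sA % q, p) % p, x2, p)
        kB = pow(A * pow(gx2, -x4 * sB % q, p) % p, x4, p)
        return kA, kB

    print(jpake("correct horse", "correct horse"))   # the two shared values agree
    print(jpake("correct horse", "battery staple"))  # mismatched passwords -> different values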

We hope this protocol is of use to security engineers. For example, compared with SSL/TLS, J-PAKE is potentially much more resistant to phishing attacks, not to mention that it is PKI-free. Since this protocol is the result of an academic research project, we didn’t, and have no intention to, patent it. As explained in the paper, J-PAKE even has technical advantages over the patented EKE and SPEKE in terms of security, with comparable efficiency. It has been submitted for consideration as a possible future extension of IEEE P1363.2.

We believe PAKE research is important and has strong practical relevance. This post is intended to facilitate discussion on this subject. The paper can be viewed here. Any comments or questions are welcome.

Update

  • 2008-06-28: a crude J-PAKE demo source code (.java) by me. (link broken)
  • 2008-11-04: a more refined J-PAKE in C and OpenSSL by Ben Laurie.
  • 2008-11-11: possible applications of J-PAKE in VPN and browser by James.
  • 2009-02-08: public group parameters for 112-bit and 128-bit security can be found in the comments.
  • 2009-03-15: fixed the broken link to the old Java file. Here is the link to the Java demo code.
  • 2010-04-17: a journal version of the paper available on IACR. No technical change to the protocol.
  • 2010-10-25: the journal version of the paper is accepted to publish on the TCS journal – Springer Transactions on Computational Science, the special issue on “Security in Computing”, 2011.
  • 2010-11-25: Sebastien Martini reported an implementation issue of J-PAKE in OpenSSL and OpenSSH. The issue is not applicable to the Java demo code that I wrote. As stated in the last paragraph of p. 11 in the paper, one shall check the element lies within the specified group. Stefan Arentz implemented a fix in OpenSSL. Official OpenSSL and OpenSSH patches can be found here and here.
  • 2011-01-11: Mozilla built J-PAKE into the base product of Firefox 4 ( beta 8 and later). More details here.
  • 2012-01-18: Today, Mohsen Toorani uploadeda paper on IACR eprint to claim several attacks on J-PAKE. My response can be found here.
  • 2012-07-21: Phil Clay contributed a Java implementation of J-PAKE to bouncycastle.
  • 2013-02-24: J-PAKE included into bouncycastle 1.48.
  • 2013-03-28: a code example to show how to use J-PAKE in bouncycastle
  • 2013-05-21: Submitted two Internet Drafts to IETF (one on J-PAKE and the other one on Schnorr NIZK Proof)
  • 2013-12-30: a code example to show how to implement J-PAKE using Elliptic Curve (or ECDSA-like group setting)
  • 2014-04-17: J-PAKE included into VMware NSS Cryptographic Module
  • 2014-10-27: J-PAKE adopted by the ISO/IEC standard (11770-4) following the ISO/IEC SC27 meeting held in Mexico City, October 20-24, 2014
  • 2014-12-26: My response to Mohsen Toorani’s IEEE ISCC’14 paper “Security Analysis of J-PAKE”.
  • 2015-04-29: J-PAKE included into Python
  • 2015-05-08: Standardization of J-PAKE in ISO/IEC 11770-4 in process. The first working draft (WD1) passed reviews by ISO/IEC SC27 at Kuching Malaysia, May 5-8, 2015.
  • 2015-05-19: Here is an independent formal analysis of J-PAKE by other researchers published at Oakland’2015. Their results are consistent with the original J-PAKE paper.
  • 2015-05-30: J-PAKE included in BarcoSilex BA414E Public Key crypto engine
  • 2015-05-31: Firefox is upgrading Sync 1.1 (using J-PAKE to transfer a full-entropy AES key between sync devices) to new Sync 1.5 (using user-defined passwords as encryption keys). But Pale moon decides to stay with Sync 1.1.
  • 2015-07-28: the Thread Technical white paper is public. It describes a technique that securely enrols a new device to the network based on J-PAKE. The technique is used in Google Nest thermostat products.
  • 2017-10-06: J-PAKE published in ISO/IEC 11770-4 as an international standard and also published in RFC 8236. See here.

Hardened stateless session cookies

The root cause behind the last-but-one WordPress cookie debacle was that the authors invented their own password hashing and cookie generation scheme. This is generally a bad idea, since it’s hard even for experts to get these right. Instead, whenever possible, a well-studied proposal should be chosen. It is for this reason that I suggested the phpass library for password hashing, and the Fu et al. stateless session cookie proposal.

These choices would have been a substantial improvement on the previous custom design (had they been implemented correctly), but I still was not quite satisfied. The Fu et al. scheme has the property that an attacker who can read the cryptographic key stored in the database can create spoofed cookies. Given the history of WordPress security, it seems likely that there will eventually be a vulnerability discovered which allows the key, which authenticates cookies, to be leaked.

It’s good practice in security engineering to design systems with the widest possible range of attacker capabilities in mind. I therefore designed a cookie scheme which does all that the Fu et al. design does, but also maintains some of its security properties even if the attacker has read access to the authentication database and knows the cookie authentication key. I published a paper on this topic, Hardened stateless session cookies, at the 2008 Security Protocols Workshop.

The trick behind my scheme is to store the hash of the user’s password in the cookie, and the hash of that in the authentication database. This means that it’s possible for the server to verify cookies, but the authentication database doesn’t contain enough information to create a fake cookie. Thus an attacker with read-access to the database still needs to know the user’s password to log in, and that isn’t stored. There are some additional subtleties to resist different attacks, and those are described in the paper.
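Here is a minimal sketch of that core trick, heavily simplified by me: the iteration counts, encoding details and other countermeasures discussed in the paper are left out, and the field names are my own.

    # Sketch of the core idea above: the cookie carries H(salt || password), while the
    # database stores only H(H(salt || password)), so reading the database alone is not
    # enough to mint a valid cookie. My simplification, not the paper's full scheme.
    import hashlib, hmac, os, time

    SERVER_KEY = os.urandom(32)                      # cookie-authentication key

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def register(password: str):
        salt = os.urandom(16)
        auth = h(salt + password.encode())           # never stored on the server
        return {"salt": salt, "double_hash": h(auth)}    # what the database keeps

    def issue_cookie(record, password: str) -> bytes:
        auth = h(record["salt"] + password.encode())
        if h(auth) != record["double_hash"]:
            raise ValueError("wrong password")
        expiry = str(int(time.time()) + 3600).encode()
        body = expiry + b"|" + auth.hex().encode()
        return body + b"|" + hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest().encode()

    def verify_cookie(record, cookie: bytes) -> bool:
        body, tag = cookie.rsplit(b"|", 1)
        expected = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest().encode()
        if not hmac.compare_digest(expected, tag):
            return False
        expiry, auth_hex = body.split(b"|", 1)
        return int(expiry) > time.time() and h(bytes.fromhex(auth_hex.decode())) == record["double_hash"]

    db_record = register("hunter2")
    cookie = issue_cookie(db_record, "hunter2")
    print(verify_cookie(db_record, cookie))          # True
    # An attacker who reads db_record (and even SERVER_KEY) still lacks the `auth`
    # value, so they cannot construct a cookie whose hash matches double_hash.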

I hope this proposal will trigger discussion over this important problem and lead to improved cookie authentication schemes.

WordPress 2.5 cookie integrity protection vulnerability

Recently, I was preparing to give a talk on web authentication so was looking at the source code of WordPress, which I had just upgraded to version 2.5. Unfortunately, I found a rather nasty security hole, which has now been disclosed. If a WordPress installation is configured to permit account creation, the vulnerability allows an attacker to gain administrator access.

The problem is to do with how cookies are generated. The authentication code was substantially overhauled for WordPress 2.5, in part to deal with security problems in the password database. Now, the authentication cookies take the form of:

wordpress_.COOKIEHASH = USERNAME . | . EXPIRY_TIME . | . MAC

Where:

  • COOKIEHASH: MD5 hash of the site URL (to maintain cookie uniqueness)
  • USERNAME: the username of the authenticated user
  • EXPIRY_TIME: when the cookie should expire, in seconds since the start of the epoch
  • MAC: HMAC-MD5(USERNAME . EXPIRY_TIME) under a key derived from a secret and USERNAME . EXPIRY_TIME

This scheme is based on two papers: Dos and Don’ts of Client Authentication on the Web by Fu et al. and A Secure Cookie Protocol by Liu et al. However, there is a small difference and, as is common in cryptographic systems, small changes can have big impact.

The problem is that USERNAME and EXPIRY_TIME are not delimited when calculating the MAC. This means that a MAC for one cookie is valid for any other, provided that USERNAME . EXPIRY_TIME is unchanged. So an attacker can register a username starting with “admin”, log in as usual, then modify their cookie so it’s valid for the administrator account.
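The flaw is easiest to see with the MAC computation written out. The sketch below simplifies the key derivation and is not WordPress’s actual code, but it shows how moving the boundary between USERNAME and EXPIRY_TIME leaves the MAC unchanged.

    # The splicing flaw in miniature: because USERNAME and EXPIRY_TIME are simply
    # concatenated, moving the boundary does not change the MAC. Simplified key
    # derivation; not WordPress's actual code.
    import hashlib, hmac

    SECRET = b"wp-secret"

    def cookie_mac(username: str, expiry: str) -> str:
        data = (username + expiry).encode()                   # the undelimited concatenation
        key = hmac.new(SECRET, data, hashlib.md5).digest()    # key derived from a secret and the data
        return hmac.new(key, data, hashlib.md5).hexdigest()

    # The attacker registers "admin0" and logs in normally...
    mac = cookie_mac("admin0", "1227000000")

    # ...then rewrites the cookie so the trailing "0" of the username becomes the
    # leading digit of the expiry time. The concatenation, hence the MAC, is unchanged.
    print(cookie_mac("admin", "01227000000") == mac)          # True: a valid cookie for "admin"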

Fu et al. call this the “cryptographic splicing” attack in their paper (Section 3.3), and it is one of the many ways they show that people can slip up when implementing web authentication. Unfortunately, dynamic website frameworks, especially PHP, offer little assistance to programmers trying to implement secure applications.

I will expand on this topic in a future post but, in the meantime, if you run WordPress, you really should upgrade to 2.5.1. While WordPress 2.3.3 doesn’t have the problem described here, it is still not secure.

There is some more detail on the cookie vulnerability in my disclosure (CVE-2008-1930). WordPress have mentioned it in their release announcement and I’ve also just sent it to the usual mailing lists.