Posts filed under 'Cryptology'

Nov 30, '09

In 2006, the Chancellor proposed to invade an enemy planet, but his motion was anonymously vetoed. Three years on, he still cannot find out who did it.

This time, the Chancellor is seeking re-election in the Galactic Senate. Some delegates don’t want to vote for him, but worry about his revenge. How do you arrange an election such that the voters’ privacy is best protected?

The environment is extremely adverse. Surveillance is everywhere. Anything you say will be recorded and traceable to you. All communication is essentially public. In addition, you have no one to trust but yourself.

It may seem mind-boggling that this problem is solvable in the first place. With cryptography, it is. In a forthcoming paper to be published in IET Information Security, we (joint work with Peter Ryan and Piotr Zielinski) describe a decentralized voting protocol called the “Open Vote Network”.

In the Open Vote Network protocol, all communication data is open and publicly verifiable. The protocol provides the maximum protection of the voter’s privacy: only a full collusion of all the other voters can break it. In addition, the protocol is exceptionally efficient. It compares favorably with past solutions in round efficiency, computational load and bandwidth usage, and is close to the best possible in each of these respects.

It seems unlikely that a decentralized voting scheme with the same security properties could be significantly more efficient than ours. However, in cryptography nothing is ever certain to be optimal, so we leave this question open.
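To give a flavour of the two-round structure, here is a toy sketch in Python. It is emphatically not the full protocol: the zero-knowledge proofs that accompany every broadcast message in the real scheme are omitted, and the group parameters are toy-sized.

    import secrets

    # Toy group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
    p, q, g = 2039, 1019, 4
    votes = [1, 0, 1, 1, 0]            # each voter's secret bit
    n = len(votes)

    # Round 1: voter i broadcasts g^{x_i} for a random secret x_i.
    x = [secrets.randbelow(q - 1) + 1 for _ in range(n)]
    gx = [pow(g, xi, p) for xi in x]

    # From the broadcasts, anyone can compute Y_i = prod_{j<i} g^{x_j} / prod_{j>i} g^{x_j}.
    # The exponents y_i satisfy sum_i x_i * y_i = 0, which is what hides each vote.
    def Y(i: int) -> int:
        num = den = 1
        for j in range(n):
            if j < i:
                num = num * gx[j] % p
            elif j > i:
                den = den * gx[j] % p
        return num * pow(den, -1, p) % p

    # Round 2: voter i broadcasts the ballot Y_i^{x_i} * g^{v_i}.
    ballots = [pow(Y(i), x[i], p) * pow(g, v, p) % p for i, v in enumerate(votes)]

    # Tally: the x_i * y_i terms cancel, leaving g^{sum v_i}; the count is
    # recovered by exhaustive search over the (small) range [0, n].
    prod = 1
    for b in ballots:
        prod = prod * b % p
    count = next(k for k in range(n + 1) if pow(g, k, p) == prod)
    assert count == sum(votes)

The real protocol attaches zero-knowledge proofs to each broadcast so that everyone can check the messages are well-formed; that is where the public verifiability comes from.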

A preprint of the paper is available here, and the slides here.

Aug 25, '09

The EMV standard, which is behind Chip & PIN, is not so much a protocol as a toolkit from which protocols can be built. One component it offers is card authentication, which allows the terminal to discover whether a card is legitimate without having to go online and contact the bank which issued it. Since the deployment of Chip & PIN, cards issued in the UK have only offered SDA (static data authentication) for card authentication. Here, the card contains a digital signature over a selection of data records (e.g. card number, expiry date, etc.). This digital signature is generated by the issuing bank, and the bank’s public key is, in turn, signed by the payment scheme (e.g. Visa or MasterCard).

The transaction process for an SDA card goes roughly as follows:

Card auth.         Card → Terminal: records, sig_BANK{records}
Cardholder verif.  Terminal → Card: PIN entered
                   Card → Terminal: PIN OK
Transaction auth.  Terminal → Card: transaction, nonce
                   Card → Terminal: MAC{transaction, nonce, PIN OK}

Some things to note here:

  • The card contains a static digital signature, which anyone can read and replay
  • The response to PIN verification is not authenticated

This means that anyone who has brief access to the card can read the digital signature and create a clone which will pass card authentication. Moreover, the fraudster doesn’t need to know the customer’s PIN in order to use the clone, because it can simply return “yes” to any PIN and the terminal will be satisfied. Such clones (so-called “yes cards”) have been produced by security consultants as a demo, and have also been found in the wild.
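To see why the two observations above are fatal in combination, here is a toy model in Python. The names are illustrative, and an HMAC stands in for the bank’s RSA signature (a real terminal verifies a public-key signature, but the structure of the attack is the same):

    import hashlib, hmac

    BANK_KEY = b"issuer-signing-key"   # stand-in for the issuing bank's key

    def bank_sig(records: bytes) -> bytes:
        return hmac.new(BANK_KEY, records, hashlib.sha256).digest()

    class GenuineCard:
        records = b"PAN=...;EXPIRY=...;"      # static data records
        signature = bank_sig(records)         # static: readable and replayable
        def verify_pin(self, pin: str) -> bool:
            return pin == "1234"

    class YesCard:
        # The clone copies the genuine card's public, static data...
        records = GenuineCard.records
        signature = GenuineCard.signature
        # ...and says "yes" to whatever PIN is entered.
        def verify_pin(self, pin: str) -> bool:
            return True

    def offline_terminal_accepts(card, entered_pin: str) -> bool:
        if bank_sig(card.records) != card.signature:   # SDA card authentication
            return False
        return card.verify_pin(entered_pin)            # reply is unauthenticated!

    assert offline_terminal_accepts(GenuineCard(), "1234")
    assert offline_terminal_accepts(YesCard(), "0000")  # any PIN works on the clone

Note that an online transaction would still fail, because the clone does not know the symmetric key used for the transaction MAC; the fraud works where offline approval is accepted.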


Feb 20, '09

(co-authored with Robert Watson)

Recently, our group was treated to a presentation by Ruby Lee of Princeton University, who discussed novel cache architectures which can prevent some cache-based side channel attacks against AES and RSA. The new architecture was fascinating, in particular because it may actually increase cache performance (though this point was spiritedly debated by several systems researchers in attendance). For the security group, though, it raised two interesting and troubling questions. What is the proper defence against side-channels due to processor cache? And why hasn’t it been implemented despite these attacks being around for years?


Feb 4, '09

Many crypto protocols contain variable length fields: the names of the participants, different sizes of public key, and so on.

In my previous post, I mentioned how Liqun Chen has (re)discovered that many protocols are broken if you don’t include the field lengths in MAC or signature computations (and, more to the point, that a bunch of ISO standards fail to warn the implementor about this issue).

The problem applies to confidentiality, as well as integrity.

Many protocol verification tools (ProVerif, for example) will assume that the attacker is unable to distinguish enc(m1, k, iv) from enc(m2, k, iv) if they don’t know k.

If m1 and m2 are of different lengths, this may not be true: the length of the ciphertext leaks information about the length of the plaintext. With Cipher Block Chaining, you can tell the length of the plaintext to the nearest block, and with stream ciphers you can tell the exact length. So you can have protocols that are “proved” correct but are still broken, because the idealized protocol doesn’t properly represent what the implementation is really doing.
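A back-of-the-envelope illustration in Python; there is no real encryption here, just the length arithmetic the attacker gets to observe:

    BLOCK = 16  # AES block size in bytes

    def cbc_len(pt_len: int) -> int:
        # CBC with PKCS#7 padding: 1..BLOCK bytes of padding are always added,
        # rounding the ciphertext up to a whole block (IV ignored for simplicity).
        return (pt_len // BLOCK + 1) * BLOCK

    def stream_len(pt_len: int) -> int:
        # A stream cipher preserves the plaintext length exactly.
        return pt_len

    for name in (b"Bob", b"Bartholomew"):
        print(name, cbc_len(len(name)), stream_len(len(name)))
    # Under CBC both names encrypt to 16 bytes (indistinguishable here), but
    # under a stream cipher they encrypt to 3 and 11 bytes: trivially
    # distinguishable, even though the attacker never learns the key.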

If you want different plaintexts to be observationally equivalent to the attacker, you can pad the variable-length fields to a fixed length before encrypting. But if there is a great deal of variation in length, this may be a very wasteful thing to do.

The alternative approach is to change your idealization of the protocol to reflect the reality of your encryption primitive. If your implementation sends m encrypted under a stream cipher, you can idealize it as sending an encrypted version of m together with L_m (the length of m) in the clear.

Feb 2, '09

At the end of last week, Microsoft Research hosted a meeting of “Cryptoforma”, a proposed new project (a so-called “network of excellence”) to bring together researchers working on applying formal methods to security. They don’t yet know whether or not this project will get funding from the EPSRC, but I wish them good luck.

There were several very interesting papers presented at the meeting, but today I want to talk about the one by Liqun Chen, “Parsing ambiguities in authentication and key establishment protocols”.

Some of the protocol specifications published by ISO specify how the protocol should be encoded on the wire, in sufficient detail to enable different implementations to interoperate. An example of a standard of this type is the one for the public key certificates that are used in SSL authentication of web sites (and many other applications).

The security standards produced by one group within ISO (SC27) aren’t like that. They specify the abstract protocols, but give the implementor considerable leeway in how they are encoded. This means that you can have different implementations that don’t interoperate. If these implementations are in different application domains, the lack of interoperability doesn’t matter. For example, Tuomas Aura and I recently wrote a paper in which we presented a protocol for privacy-preserving wireless LAN authentication, which we rather boldly claim to be based on the abstract protocol from ISO 9798-4.

You could think of these standards as separating concerns: the SC27 folks get the abstract crypto protocol correct, and then someone else standardises how to encode it in a particular application. But does the choice of concrete encoding affect the protocol’s correctness?

Liqun Chen points out one case where it clearly does. In the abstract protocols in ISO 9798-4 and others, data fields are joined by a double vertical bar operator. If you want to find out what that double vertical bar really means, you have to spend another 66 Swiss Francs and get a copy of ISO 9798-1, which tells you that Y || Z means “the result of the concatenation of the data items Y and Z in that order”.

Oops.

When we specify abstract protocols, it’s generally understood that the concrete encoding that gets signed or MAC’d contains enough information to unambiguously identify the field boundaries: it contains length fields, a closing XML tag, or whatever. A signed message {Payee, Amount}K_A should not allow a payment of $3 to Bob12 to be mutated by the attacker into a payment of $23 to Bob1. But ISO 9798 (and a bunch of others) don’t say that. There’s nothing that says a conforming implementation can’t send the length field without authentication.

Now of course, an implementor probably wouldn’t do that. But they might.
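Here is a toy demonstration in Python of what goes wrong if they do, using the hypothetical payment message from above (the key and message format are made up for illustration):

    import hashlib, hmac

    key = b"shared-mac-key"   # hypothetical shared key

    def msg_mac(payee: bytes, amount: bytes) -> bytes:
        # Fields are concatenated with no lengths and no delimiter: exactly
        # the reading of Y || Z that the standard permits.
        return hmac.new(key, payee + amount, hashlib.sha256).digest()

    # A $3 payment to Bob12 carries the same MAC as a $23 payment to Bob1,
    # because the MAC'd byte string "Bob123" is identical in both cases.
    assert msg_mac(b"Bob12", b"3") == msg_mac(b"Bob1", b"23")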

More generally: do these abstract protocols make a bunch of implicit, undocumented assumptions about the underlying crypto primitives and encodings that might turn out not to be true?

See also: Boyd, C., “Hidden assumptions in cryptographic protocols”, IEE Proceedings E (Computers and Digital Techniques), vol. 137, no. 6, November 1990.

May 29, '08

Password Authenticated Key Exchange (PAKE) is one of the central topics in cryptography. It aims to address a practical security problem: how to establish secure communication between two parties solely based on their shared password without requiring a Public Key Infrastructure (PKI).

The solution to the above problem is very useful in practice; in fact, so useful that it has spawned a lot of “fights” over patents. Many techniques were patented, including the well-known Encrypted Key Exchange (EKE) and Simple Password Exponential Key Exchange (SPEKE). A secondary problem is technical: both the EKE and SPEKE protocols have subtle but worrying technical limitations (see the paper for details).

At the 16th Workshop on Security Protocols, held in April 2008 in Cambridge, UK, I presented a new solution (joint work with Peter Ryan) called Password Authenticated Key Exchange by Juggling (or J-PAKE). The essence of the protocol design is inherited from earlier work on solving the Dining Cryptographers problem; we adapted the same juggling technique to the two-party case to solve the PAKE problem. To the best of our knowledge, this design is significantly different from all past PAKE solutions.

Intuitively, the J-PAKE protocol works like a juggling game between two people, if we regard a public key as a “ball”. In round 1, each person throws two ephemeral public keys (“balls”) to the other. In round 2, each person combines the available public keys and the password to form a new public key, and throws this new “ball” to the other.

After round 2, the two parties can securely compute a common session key if they supplied the same password. Otherwise, the protocol leaks nothing more than the fact that the supplied passwords at the two sides are not the same. In other words, one can prove knowledge of the password without revealing it. A Java implementation of the protocol on a MacBook Pro laptop shows that the total computation time at each side is merely 75 ms.
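For the curious, here is a stripped-down sketch of the two rounds in Python. It is emphatically not the full protocol: the Schnorr zero-knowledge proofs that must accompany every public key (and which are essential for security) are omitted, and the group parameters are toy-sized.

    import hashlib, secrets

    # Toy group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
    p, q, g = 2039, 1019, 4

    def H(*parts) -> int:
        data = "|".join(map(str, parts)).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big")

    s = H("correct horse") % q or 1   # shared low-entropy password, mapped into [1, q)

    # Round 1: each party sends two ephemeral public keys ("balls").
    x1, x2 = secrets.randbelow(q), secrets.randbelow(q - 1) + 1   # Alice, x2 != 0
    x3, x4 = secrets.randbelow(q), secrets.randbelow(q - 1) + 1   # Bob,   x4 != 0
    g1, g2, g3, g4 = (pow(g, x, p) for x in (x1, x2, x3, x4))

    # Round 2: each party combines the received keys and the password into a new "ball".
    A = pow(g1 * g3 % p * g4 % p, x2 * s % q, p)   # Alice -> Bob
    B = pow(g1 * g2 % p * g3 % p, x4 * s % q, p)   # Bob -> Alice

    # Both sides now derive K = g^((x1 + x3) * x2 * x4 * s).
    K_alice = pow(B * pow(pow(g4, x2 * s % q, p), -1, p) % p, x2, p)
    K_bob   = pow(A * pow(pow(g2, x4 * s % q, p), -1, p) % p, x4, p)
    assert K_alice == K_bob
    session_key = hashlib.sha256(str(K_alice).encode()).hexdigest()

If the two sides had used different passwords, the two K values would disagree and key confirmation would fail, but neither side would learn anything useful about the other’s password.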

We hope this protocol will be of use to security engineers. For example, compared with SSL/TLS, J-PAKE is potentially much more resistant to phishing attacks, not to mention that it is PKI-free. Since this protocol is the result of an academic research project, we didn’t patent it, and have no intention to. As explained in the paper, J-PAKE even has technical advantages over the patented EKE and SPEKE in terms of security, with comparable efficiency. It has been submitted for consideration as a possible future extension of IEEE P1363.2.

We believe the PAKE research is important and has strong practical relevance. This post is to facilitate discussions on this subject. The paper can be viewed here. Any comments or questions are welcome.

Update

  • 2008-06-28: a crude J-PAKE demo source code (.java) by me. (link broken)
  • 2008-11-04: a more refined J-PAKE in C and OpenSSL by Ben Laurie.
  • 2008-11-11: possible applications of J-PAKE in VPN and browser by James.
  • 2009-02-08: public group parameters for 112-bit and 128-bit security can be found in the comments.
  • 2009-03-15: fixed the broken link to the old Java file. Here is the link to the Java demo code.
  • 2010-04-17: a journal version of the paper available on IACR. No technical change to the protocol.
  • 2010-10-25: the journal version of the paper has been accepted for publication in Springer Transactions on Computational Science (TCS), special issue on “Security in Computing”, 2011.
  • 2010-11-25: Sebastien Martini reported an implementation issue with J-PAKE in OpenSSL and OpenSSH. The issue does not apply to the Java demo code that I wrote. As stated in the last paragraph of p. 11 of the paper, one must check that the received element lies within the specified group. Stefan Arentz implemented a fix in OpenSSL. Official OpenSSL and OpenSSH patches can be found here and here.
  • 2011-01-11: Mozilla built J-PAKE into the base product of Firefox 4 (beta 8 and later). More details here.
  • 2012-01-18: Mohsen Toorani uploaded a paper to the IACR ePrint archive claiming several attacks on J-PAKE. My response can be found here.
  • 2012-07-21: Phil Clay contributed a Java implementation of J-PAKE to bouncycastle.
  • 2013-02-24: J-PAKE included into bouncycastle 1.48.
  • 2013-03-28: a code example showing how to use J-PAKE in bouncycastle.
  • 2013-05-21: submitted two Internet Drafts to IETF (one on J-PAKE and the other on the Schnorr NIZK proof).
  • 2013-12-30: a code example showing how to implement J-PAKE in an elliptic curve (ECDSA-like) group setting.

May 16, '08

The root cause behind the last-but-one Wordpress cookie debacle was that the authors invented their own password hashing and cookie generation scheme. This is generally a bad idea, since it’s hard even for experts to get these right. Instead, whenever possible, a well-studied proposal should be chosen. It is for this reason that I suggested the phpass library for password hashing, and the Fu et al. stateless session cookie proposal.

These choices would have been a substantial improvement on the previous custom design (had they been implemented correctly), but I was still not quite satisfied. The Fu et al. scheme has the property that an attacker who can read the cryptographic key stored in the database can create spoofed cookies. Given the history of Wordpress security, it seems likely that a vulnerability will eventually be discovered which allows the key that authenticates cookies to be leaked.

It’s good practice in security engineering to design systems with the widest possible range of attacker capabilities in mind. I therefore designed a cookie scheme which would do all that the Fu et al. design did, but also maintained some of its security properties if the attacker has read-access to the authentication database and knows the cookie authentication key. I published a paper on this topic — Hardened stateless session cookies — at the 2008 Security Protocols Workshop.

The trick behind my scheme is to store the hash of the user’s password in the cookie, and the hash of that in the authentication database. This means that it’s possible for the server to verify cookies, but the authentication database doesn’t contain enough information to create a fake cookie. Thus an attacker with read-access to the database still needs to know the user’s password to log in, and that isn’t stored. There are some additional subtleties to resist different attacks, and those are described in the paper.
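A minimal sketch of that core trick in Python (the names are illustrative, and the paper’s additional countermeasures, such as iterated hashing and the MAC over the expiry time, are left out):

    import hashlib, os

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    # At registration: derive t from the password; store only H(t) server-side.
    salt, password = os.urandom(16), b"hunter2"
    t = H(salt + password)
    db_record = {"salt": salt, "verifier": H(t)}

    # At login, the password is available, so t can be recomputed and placed
    # in the cookie (alongside the usual username/expiry/MAC fields).
    cookie_token = H(db_record["salt"] + password)

    # On each request, the server hashes the cookie token and compares.
    assert H(cookie_token) == db_record["verifier"]

    # An attacker who reads db_record learns only H(t); producing a valid
    # cookie token from it requires a preimage attack on the hash function.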

I hope this proposal will trigger discussion over this important problem and lead to improved cookie authentication schemes.

Apr 25, '08

Recently, I was preparing to give a talk on web authentication so was looking at the source code of Wordpress, which I had just upgraded to version 2.5. Unfortunately, I found a rather nasty security hole, which has now been disclosed. If a Wordpress installation is configured to permit account creation, the vulnerability allows an attacker to gain administrator access.

The problem is to do with how cookies are generated. The authentication code was substantially overhauled for Wordpress 2.5, in part to deal with security problems in the password database. Now, the authentication cookies take the form of:

"wordpress_" . COOKIEHASH = USERNAME . "|" . EXPIRY_TIME . "|" . MAC

Where:

  • COOKIEHASH: MD5 hash of the site URL (to maintain cookie uniqueness)
  • USERNAME: the username of the authenticated user
  • EXPIRY_TIME: when the cookie should expire, in seconds since the start of the epoch
  • MAC: HMAC-MD5(USERNAME . EXPIRY_TIME), under a key derived from a secret and USERNAME . EXPIRY_TIME

This scheme is based on two papers: “Dos and Don’ts of Client Authentication on the Web” by Fu et al. and “A Secure Cookie Protocol” by Liu et al. However, there is a small difference and, as is common in cryptographic systems, small changes can have a big impact.

The problem is that USERNAME and EXPIRY_TIME are not delimited when calculating the MAC. This means that a MAC for one cookie is valid for any other, provided that USERNAME . EXPIRY_TIME is unchanged. So an attacker can register a username starting with “admin”, log in as usual, then modify their cookie so it’s valid for the administrator account.

Fu et al. called this the “cryptographic splicing” attack in their paper (Section 3.3), and it is one of the many ways they show that people can slip up when implementing web authentication. Unfortunately, dynamic website frameworks, especially PHP, offer little assistance to programmers trying to implement secure applications.
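Here is a toy reconstruction of the splicing step in Python (the key and expiry value are made up, and real Wordpress derives its key differently):

    import hashlib, hmac

    key = b"wp-secret"   # illustrative only

    def cookie_mac(username: bytes, expiry: bytes) -> bytes:
        # USERNAME . EXPIRY_TIME with no delimiter, as in the vulnerable code.
        return hmac.new(key, username + expiry, hashlib.md5).digest()

    # The attacker registers "admin0", logs in, and receives a valid cookie,
    # say with expiry time 1203941234.
    mac = cookie_mac(b"admin0", b"1203941234")

    # Moving the trailing "0" of the username into the expiry time leaves the
    # MAC'd byte string unchanged, so the same MAC now authenticates "admin":
    assert mac == cookie_mac(b"admin", b"01203941234")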

I will expand on this topic in a future post but in the meantime, if you run Wordpress, you really should upgrade to 2.5.1. While Wordpress 2.3.3 doesn’t have the problem described here, it is still not secure.

There is some more detail on the cookie vulnerability in my disclosure (CVE-2008-1930). Wordpress have mentioned it in their release announcement and I’ve also just sent it to the usual mailing lists.

Jan 9, '08

At this year’s Chaos Communication Congress (24C3), I presented some work I’ve been doing with Saar Drimer: implementing a smart card relay attack and demonstrating that it can be prevented by distance bounding protocols. My talk (abstract) was filmed and the video can be found below. For more information, we produced a webpage and the details can be found in our paper.

[ slides (PDF 9.6M) | video (BitTorrent -- MPEG4, 106M) ]

Update 2008-01-15:
Liam Tung from ZDNet Australia has written an article on my talk: Bank card attack: Only Martians are safe.

Other highlights from the conference…

Nov 23, '07

After a few years of spectacular advances in breaking cryptographic hash functions, NIST has announced a competition to determine the next Secure Hash Algorithm, SHA-3. SHA-0 is considered broken, SHA-1 is still secure but no one knows for how long, and the SHA-2 family is desperately slow. (Do not even think about using MD5, or MD4, for which Prof. Wang can find collisions by hand; RIPEMD-160 still stands.) Cryptographers are ecstatic about this development, as if they had been a bit bored since the last NIST AES competition and depressed by the prospect of not having another significant block cipher to design for the next few years.

The rest of us should expect the next four years to be filled with news, first about advances in the designs, then about advances in attacks against hash functions, as teams with candidate hash algorithms bitterly try to find flaws in each other’s proposals to ensure that their own function becomes SHA-3. To fully appreciate the details of this competition, some of us may want a quick refresher on how to build a secure hash function.

Here is a list of on-line resources for catching up with the state of the art:

  1. A very quick overview of hash functions and their applications is provided by Ilya Mironov. This is very introductory material, and does not go into the deeper details of what makes these functions secure, or how to break them.
  2. Chapter 9, on Hash Functions and Data Integrity, of the Handbook of Applied Cryptography (Alfred J. Menezes, Paul C. van Oorschot and Scott A. Vanstone) provides a very good first overview of the properties expected of a collision-resistant hash function. It also presents the basic constructions for such functions from block ciphers (too slow for SHA-3), as well as from dedicated compression functions. Chapter 3 also quickly presents Floyd’s cycle-finding algorithm, which finds collisions with negligible storage requirements (a toy version is sketched after this list).
  3. If your curiosity has not been satisfied, the second stop is Prof. Bart Preneel’s thesis, entitled “Analysis and Design of Cryptographic Hash Functions”. This work provides a very good overview of the state of the art in hash function design up to the middle of the nineties (before SHA-1 was commissioned). The back-to-basics approach is very instructive and, frankly, the thesis could be entitled “everything you wanted to know about hash functions and never dared ask”. Bart is one of the authors of RIPEMD-160, which is still considered secure and is an algorithm worth studying.
  4. Hash functions do look like block ciphers under the hood, and an obvious idea might be to adapt aspects of AES and turn it into such a function. Whirlpool does exactly this, and is worth reading about. One of its authors, Paulo Barreto, also maintains a very thorough bibliography of hash function proposals, along with all known cryptanalytic results against them (and a cute health status indicating their security).
  5. Prof. Wang’s attacks that forced NIST to look for better functions are a must-read, even though they get very technical very soon. A gentler introduction to these attacks is provided in Martin Schläffer’s Master’s thesis, which describes how the attacks are applied to MD4.
  6. Finally it is no fun observing a game without knowing the rules: the NIST SHA-3 requirements provide detailed descriptions of what the algorithm should look like, as well as the families of attacks it should resist. After reading it you might even be tempted to submit your own candidate!
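As promised above, here is a toy version of Floyd’s cycle-finding collision search in Python, run against MD5 truncated to 32 bits so that a collision appears within seconds while storing only a handful of values:

    import hashlib

    def f(x: bytes) -> bytes:
        # MD5 truncated to 4 bytes: collisions appear after roughly 2^16 steps.
        return hashlib.md5(x).digest()[:4]

    def floyd_collision(seed: bytes):
        # Phase 1: tortoise and hare race along x, f(x), f(f(x)), ... until
        # they meet inside the cycle. Only O(1) values are ever stored.
        tortoise, hare = f(seed), f(f(seed))
        while tortoise != hare:
            tortoise, hare = f(tortoise), f(f(hare))
        # Phase 2: restart the tortoise from the seed; the step at which the
        # two sequences first hash to the same value yields two distinct
        # preimages of that value.
        tortoise = seed
        while f(tortoise) != f(hare):
            tortoise, hare = f(tortoise), f(hare)
        return tortoise, hare

    a, b = floyd_collision(b"NIST")
    # In the rare case where the seed itself lies on the cycle, a == b and
    # one should simply retry with a different seed.
    assert a != b and f(a) == f(b)
    print(a.hex(), b.hex(), "->", f(a).hex())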
