Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

The CHERI capability model: Revisiting RISC in an age of risk (ISCA 2014)

Last week, Jonathan Woodruff presented our joint paper on the CHERI memory model, The CHERI capability model: Revisiting RISC in an age of risk, at the 2014 International Symposium on Computer Architecture (ISCA) in Minneapolis (video, slides). This is our first full paper on Capability Hardware Enhanced RISC Instructions (CHERI): collaborative work between Simon Moore’s team and mine (members of the Security, Computer Architecture, and Systems Research Groups at the University of Cambridge Computer Laboratory), Peter G. Neumann’s group at the Computer Science Laboratory at SRI International, and Ben Laurie at Google.

CHERI is an instruction-set extension, prototyped via an FPGA-based soft processor core named BERI, that integrates a capability-system model with a conventional memory-management unit (MMU)-based pipeline. Unlike conventional OS-facing MMU-based protection, the CHERI protection and security models are aimed at compilers and applications. CHERI provides efficient, robust, compiler-driven, hardware-supported, and fine-grained memory protection and software compartmentalisation (sandboxing) within, rather than between, address spaces. We run a version of FreeBSD that has been adapted to support the hardware capability model (CheriBSD), compiled with a CHERI-aware Clang/LLVM that supports C pointer integrity, bounds checking, and capability-based protection and delegation. CheriBSD also supports a higher-level hardware-software security model permitting sandboxing of application components within an address space, based on capabilities and a Call/Return mechanism supporting mutual distrust.
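
For readers unfamiliar with capability systems, the following toy model sketches the core idea in ordinary Python (an illustration only, not the CHERI ISA; all names are invented for the example): a capability replaces a raw pointer with bounds and permissions checked on every access, and a component can only ever derive narrower capabilities from those it already holds.

```python
# Toy model of a memory capability (illustrative only; not the CHERI ISA).
class Capability:
    def __init__(self, base, length, perms):
        self.base, self.length, self.perms = base, length, perms

    def load(self, memory, offset):
        # In CHERI-like designs the hardware performs these checks on
        # every dereference; here they are ordinary assertions.
        if "r" not in self.perms:
            raise PermissionError("capability lacks read permission")
        if not 0 <= offset < self.length:
            raise IndexError("out-of-bounds access would trap")
        return memory[self.base + offset]

    def restrict(self, base, length, perms):
        # Delegation: a derived capability may only shrink, never grow.
        assert self.base <= base and base + length <= self.base + self.length
        assert set(perms) <= set(self.perms)
        return Capability(base, length, perms)

memory = bytearray(1024)
ambient = Capability(base=0, length=1024, perms="rw")
sandbox = ambient.restrict(base=128, length=64, perms="r")  # narrower view
sandbox.load(memory, 10)      # fine: within bounds, read permitted
# sandbox.load(memory, 100)   # would raise: outside the delegated bounds
```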

The approach draws inspiration from Capsicum, our OS-facing hybrid capability-system model now shipping in FreeBSD and available as a patch for Linux courtesy of Google. We found that capability-system approaches match extremely well with least-privilege-oriented software compartmentalisation, in which programs are broken up into sandboxed components to mitigate the effects of exploited vulnerabilities. CHERI similarly merges research capability-system ideas with a conventional RISC processor design, making accessible the security and robustness benefits of the former while retaining software compatibility with the latter. In the paper, we contrast our approach with a number of others, including Intel’s forthcoming Memory Protection eXtensions (MPX). We pursue a RISC-oriented design instantiated against the 64-bit MIPS ISA, but the ideas should be portable to other RISC ISAs such as ARMv8 and RISC-V.

Our hardware prototype is implemented in Bluespec System Verilog, a high-level hardware description language (HDL) that makes it easier to perform design-space exploration. To facilitate both reproducibility for this work, and also future hardware-software research, we’ve open sourced the underlying Bluespec Extensible RISC Implementation (BERI), our CHERI extensions, and a complete software stack: operating system, compiler, and so on. In fact, support for the underlying 64-bit RISC platform, which implements a version of the 64-bit MIPS ISA, was upstreamed to FreeBSD 10.0, which shipped earlier this year. Our capability-enhanced versions of FreeBSD (CheriBSD) and Clang/LLVM are distributed via GitHub.

You can learn more about CHERI, BERI, and our larger clean-slate hardware-software agenda on the CTSRD Project Website. There, you will find copies of our prior workshop papers, full Bluespec source code for the FPGA processor design, hardware build instructions for our FPGA-based tablet, downloadable CheriBSD images, software source code, and also our recent technical report, Capability Hardware Enhanced RISC Instructions: CHERI Instruction-Set Architecture, and Jon Woodruff’s PhD dissertation on CHERI.

Jonathan Woodruff, Robert N. M. Watson, David Chisnall, Simon W. Moore, Jonathan Anderson, Brooks Davis, Ben Laurie, Peter G. Neumann, Robert Norton, and Michael Roe. The CHERI capability model: Revisiting RISC in an age of risk, Proceedings of the 41st International Symposium on Computer Architecture (ISCA 2014), Minneapolis, MN, USA, June 14–16, 2014.

EMV: Why Payment Systems Fail

In the latest edition of Communications of the ACM, Ross Anderson and I have an article in the Inside Risks column: “EMV: Why Payment Systems Fail” (DOI 10.1145/2602321).

Now that US banks are deploying credit and debit cards with chips supporting the EMV protocol, our article explores what lessons the US should learn from the UK experience of having chip cards since 2006. We address questions like whether EMV would have prevented the Target data breach (it wouldn’t have), whether Chip and PIN is safer for customers than Chip and Signature (it isn’t), whether EMV cards can be cloned (in some cases, they can) and whether EMV will protect against online fraud (it won’t).

While the EMV specification is the same across the world, the way each country uses it varies substantially. Even individual banks within a country may make different implementation choices which have an impact on security. The US will prove to be an especially interesting case study because some banks will be choosing Chip and PIN (as the UK has done) while others will choose Chip and Signature (as Singapore did). The US will act as a natural experiment addressing the question of whether Chip and PIN or Chip and Signature is better, and from whose perspective.

The US is also distinctive in that the major tussle over payment card security concerns the “interchange” fees paid by merchants to the banks which issue the cards. Interchange fees are about an order of magnitude higher than losses due to fraud, so while security is one consideration in choosing between sets of EMV features, the question of who pays how much in fees is a more important factor (even if the decision is later claimed to be justified by security). We’re already seeing the results of this fight in the courts and through legislation.

EMV is coming to the US, so it is important that banks, customers, merchants and regulators know the likely consequences and how to manage the risks, learning from the lessons of the UK and elsewhere. Discussion of these and further issues can be found in our article.

PhD studentship: Model-based assessment of compromising emanations

We have a fully funded 3.5-year PhD Studentship on offer, from October 2014, for a research student to work on “Model-based assessment of compromising emanations”. The project aims to improve our understanding of the electromagnetic emissions that are unintentionally emitted by computing equipment, and the eavesdropping risks they pose. In particular, it aims to improve test and measurement procedures (TEMPEST) for computing equipment that processes extremely confidential data. We are looking for an Electrical Engineering, Computer Science or Physics graduate with an interest in electronics, software-defined radio, hardware security, side-channel cryptanalysis, digital signal processing, electromagnetic compatibility, or machine learning.

Check the full advert and contact Dr Markus Kuhn for more information if you are interested in applying, quoting NR03517. Application deadline: 23 June 2014.

Light Blue Touchpaper now on HTTPS

Light Blue Touchpaper now supports TLS, so as to protect passwords and authentication cookies from eavesdropping. TLS support is provided by the Pound load-balancer, because Varnish (our reverse-proxy cache) does not support TLS.

The configuration is intended to be a reasonable trade-off between security and usability, and gets an A– on the Qualys SSL report. The cipher suite list is based on the very helpful Qualys Security Labs recommendations, and Apache header rewriting sets the HttpOnly and Secure cookie flags to resist cookie hijacking.

As always, we might have missed something, so if you notice problems such as incompatibilities with certain browsers, then please let us know at <lbt-admin@cl.cam.ac.uk> or in the comments.

The pre-play vulnerability in Chip and PIN

Today we have published a new paper: “Chip and Skim: cloning EMV cards with the pre-play attack”, presented at the 2014 IEEE Symposium on Security and Privacy. The paper analyses the EMV protocol, the leading smart card payment system with 1.62 billion cards in circulation, and known as “Chip and PIN” in English-speaking countries. As a result of the Target data breach, banks in the US (which have lagged behind in Chip and PIN deployment compared to the rest of the world) have accelerated their efforts to roll out Chip and PIN capable cards to their customers.

However, our paper shows that Chip and PIN, as currently implemented, still has serious vulnerabilities, which might leave customers at risk of fraud. Previously we have shown how cards can be used without knowing the correct PIN, and that card details can be intercepted as a result of flawed tamper-protection. Our new paper shows that it is possible to create clone chip cards which normal bank procedures will not be able to distinguish from the real card.

When a Chip and PIN transaction is performed, the terminal requests that the card produce an authentication code for the transaction. Part of this transaction is a number that is supposed to be random, so as to stop an authentication code being generated in advance. However, there are two ways in which this protection can be bypassed: the first requires that the Chip and PIN terminal has a poorly designed random number generator (which we have observed in the wild); the second requires that the Chip and PIN terminal or its communications back to the bank can be tampered with (which, again, we have observed in the wild).

To carry out the attack, the criminal arranges that the targeted terminal will generate a particular “random” number in the future (either by predicting which number will be generated by a poorly designed random number generator, by tampering with the random number generator, or by tampering with the random number sent to the bank). Then the criminal gains temporary access to the card (for example by tampering with a Chip and PIN terminal) and requests authentication codes corresponding to the “random” number(s) that will later occur. Finally, the attacker loads the authentication codes on to the clone card, and uses this card in the targeted terminal. Because the authentication codes that the clone card provides match those which the real card would have provided, the bank cannot distinguish between the clone card and the real one.
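
To make the structure of the attack concrete, here is a toy model in Python. It is a deliberate simplification: real EMV authentication codes (ARQCs) are MACs computed with 3DES card keys over many more transaction fields, and the names below are invented for the illustration.

```python
# Toy model of the pre-play attack (illustrative only; real ARQCs are 3DES
# MACs over many more fields than the two modelled here).
import hashlib
import hmac

CARD_KEY = b"card-unique-secret"  # shared only by the genuine card and the bank

def card_auth_code(amount, unpredictable_number):
    msg = amount.to_bytes(6, "big") + unpredictable_number.to_bytes(4, "big")
    return hmac.new(CARD_KEY, msg, hashlib.sha256).digest()

# 1. The criminal arranges that the target terminal will use this "random"
#    number (by prediction or by tampering).
predicted_un = 0x0000002A

# 2. With brief access to the real card, they harvest codes in advance.
harvested = {(1000, predicted_un): card_auth_code(1000, predicted_un)}

# 3. The clone card later replays the harvested code at the target terminal.
clone_response = harvested[(1000, predicted_un)]

# The bank sees exactly what the real card would have produced.
assert clone_response == card_auth_code(1000, predicted_un)
```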

Because the transactions look legitimate, banks may refuse to refund victims of fraud. So in the paper we discuss how bank procedures could be improved to detect whether this attack has occurred. We also describe how the Chip and PIN system could be improved. As a result of our research, work has started on mitigating one of the vulnerabilities we identified; the certification requirements for random number generators in Chip and PIN terminals have been improved, though old terminals may still be vulnerable. Attacks making use of tampered random number generators or communications are more challenging to prevent and have yet to be addressed.

Update (2014-05-20): There is press coverage of this paper in The Register, SC Magazine UK and Schneier on Security.
Update (2014-05-21): Also now covered in The Hacker News.

Heartbleed and RSA private keys

UPDATE:

Matthew Arcus pointed out that a freed buffer will always have the first 16 bytes (x86_64) corrupted by two free list pointers, according to glibc’s malloc implementation. This means the leak mechanism I described below will only explain the partial leak I exploited, but not complete prime number leaks others have seen. In fact, the uncorrupted numbers come from the long-lived copy of p and q in the Montgomery context.

——–

By now everyone knows about the OpenSSL Heartbleed vulnerability: a missing bounds check in one of the most popular TLS implementations has made millions of web servers (and others) leak all sorts of sensitive information from memory. This could leak login credentials, authentication cookies and web traffic to attackers. But could it be used to recover the site’s TLS private key? This would enable complete decryption of previously recorded traffic, if perfect forward secrecy was not negotiated at the time, and man-in-the-middle attacks on all future TLS sessions.

Since this would be a much more serious consequence of Heartbleed, I decided to investigate. The results were positive: I was able to extract private keys from a test Nginx server after a few days’ work. Later I applied my techniques to solve the CloudFlare Challenge. A few other security researchers and I independently demonstrated that RSA private keys are indeed at risk. In this blog post, I’ll provide some detail on how to extract the private key and why the attack is possible.

How to extract the private key

Readers not familiar with RSA can read about it here. To simplify things a bit: a large (2048-bit) number N is constructed by multiplying together two large, randomly generated prime numbers p and q. N is made public while p and q are kept secret; finding p or q allows recovery of the private key. A generic attack is simply to factorise N, but this is believed to be difficult. With a vulnerability like Heartbleed, however, the attack is much simpler: because the web server needs the private key in memory to sign the TLS handshake, p and q must live in memory, and we can try to obtain them with Heartbleed packets. The problem then becomes how to identify them in the returned data. This is easy, as we know p and q are 1024 bits (128 bytes) long and OpenSSL represents big numbers little-endian in memory, so a brute-force approach of treating every run of 128 consecutive bytes in the Heartbleed packets as a little-endian number and testing whether it divides N is sufficient to spot potential leaks. This is how most people solved the CloudFlare challenge.
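
A minimal sketch of that scan, assuming `packets` holds the payloads of the collected Heartbleed responses (the function and variable names are mine):

```python
# Slide a 128-byte window over each leaked payload, interpret it as a
# little-endian integer, and test whether it divides the public modulus N.
def find_prime_factor(packets, N, size=128):
    for payload in packets:
        for i in range(len(payload) - size + 1):
            candidate = int.from_bytes(payload[i:i + size], "little")
            if candidate > 1 and N % candidate == 0:
                return candidate      # p or q, recovered in full
    return None
```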

Coppersmith improvement

But wait, isn’t our scenario just like a cold boot attack? There has been a lot of research on recovering RSA private keys from partial information. One of the most famous papers, by Coppersmith, presents message recovery attacks on related messages or insufficient padding, as well as factorisation with partial knowledge, with the help of lattice basis reduction. With the Coppersmith attack, N can be efficiently factorised if either the top or the bottom half of the bits of p is known. To put this in context: we only need the top or bottom 64 bytes of p in order to compute the private keys, compared to the naive brute force which requires all 128 bytes. In practice, reaching Coppersmith’s limit is computationally expensive (although still much better than factorisation), but assuming 77 bytes (60%) known, we can comb the Heartbleed packets for potential private key segments very quickly.
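
As a sketch of this step, the following Sage code (Sage uses Python syntax) factors N given the high bits of p; the beta and epsilon parameters are illustrative and trade running time against how many unknown bits can be tolerated:

```python
# Sage sketch of Coppersmith factoring with known high bits of p.
# p_top is p with its low k bits zeroed; parameter values are illustrative.
from sage.all import PolynomialRing, Zmod

def coppersmith_factor(N, p_top, k):
    R = PolynomialRing(Zmod(N), 'x')
    x = R.gen()
    f = x + p_top                    # p = p_top + x0 for some 0 <= x0 < 2^k
    # small_roots() finds x0 via lattice basis reduction, provided the
    # unknown part is small enough relative to N.
    for r in f.small_roots(X=2**k, beta=0.5, epsilon=0.02):
        p = int(r) + p_top
        if N % p == 0:
            return p
    return None
```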

In retrospect, there were 242 instances of private key remnants suitable for the Coppersmith attack, out of 10,000 packets (64 KB each) that I had collected. Implementation of the Coppersmith attack was made easy thanks to the comprehensive computer algebra building blocks from Sage (although I later found out that Sage already has Coppersmith implemented).

Can we do even better? If you have ever viewed an RSA private key using openssl rsa -text -in server.key, you will notice that there are quite a few numbers other than the two prime factors p and q. In fact, they are precomputed values for RSA’s Chinese Remainder Theorem (CRT) optimisation; if some of them are leaked, they can also be used to deduce p. And what about the Montgomery representations of p and q that OpenSSL uses for fast multiplication? They also admit a variant of the Coppersmith attack, so partial bits are useful as well. With this in mind I set out to search for them in the packets collected from my test server, where these values are all known. But not even a single one of them appears partially (> 16 bytes) in the dataset. How is this possible?
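
To see why a leaked CRT value would be just as fatal as p itself, here is the standard recovery trick for the CRT exponent dp = d mod (p-1), sketched in Python (N and e are the public key; the function name is mine):

```python
# Recover p from a leaked CRT exponent dp = d mod (p-1).
# Since e*dp = 1 (mod p-1), m^(e*dp) = m (mod p) for any m, but almost
# certainly not (mod q), so the gcd below yields p.
from math import gcd

def p_from_dp(N, e, dp, m=2):
    p = gcd(pow(m, e * dp, N) - m, N)
    return p if 1 < p < N else None
```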

Note: all my experiments and the CloudFlare challenge are targeting Nginx which is single-threaded. It is possible that for a multi-threaded web server, more leakage can be observed.

Why p and only p leaks

When Heartbleed first came out, people argued that RSA private keys should not be leaked, because they are loaded when the web server starts and so occupy a lower memory address; since the heap grows upwards, a later-allocated buffer leaked by Heartbleed shouldn’t reach them. This argument is consistent with my inability to find any CRT precomputed values, but with one exception: p is definitely leaked somehow. If we assume the argument is correct, the question becomes: why is p leaked?

To add more to the mystery, OpenSSL apparently scrubs all the temporary BigNums it uses! To reduce the overhead of dynamically allocating temporary values, OpenSSL provides a BN_CTX, a pool of BigNums operating in stack (LIFO) fashion. Upon finishing, the context is destroyed and all allocated buffers are scrubbed. This would mean that by the time the Heartbleed packet is constructed, there shouldn’t be any temporary values left in memory (again assuming a single thread), because the BN_CTX has long been released.

I won’t bother you with all the pain I went through to identify the cause, so here is the spoiler: when a BigNum is being extended to a bigger buffer size, its original buffer is not zeroised before being freed. The control-flow path that leads to the leak of p is even more subtle. During the initial TLS handshake, the server key exchange is signed with the private key. The CRT signing performs a modulo-p operation, which causes p<<BN_BITS2 to be stored in a temporary variable allocated from the BN_CTX pool. Later, in the CRT fault-injection check, that temporary variable is reused (remember, BN_CTX operates like a stack) as val[0]. An interesting fact is that a reallocated temporary variable only gets its lowest nibble zeroised, and in the case of p<<BN_BITS2 nothing is destroyed. val[0] immediately receives a Montgomery-reduced value, but since the original buffer cannot accommodate the new value, it gets extended, and the old buffer holding p is released to free heap space, waiting to be grabbed. And because this happens on every TLS handshake, it can be spilled everywhere.
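
To make the mechanism concrete, here is a toy heap model in Python (this is not OpenSSL code; it only mimics the relevant allocator behaviour):

```python
# Toy model: extending a BigNum frees its old buffer without zeroising it,
# so the secret lingers on the free list for a later allocation to expose.
heap, free_list, next_addr = {}, [], [0]

def malloc(size):
    if free_list:                      # allocators recycle freed chunks first
        return free_list.pop()
    addr = next_addr[0]; next_addr[0] += size
    heap[addr] = bytearray(size)
    return addr

def free(addr):
    free_list.append(addr)             # note: contents are NOT cleared

def bn_expand(addr, new_size):
    new = malloc(new_size)
    heap[new][:len(heap[addr])] = heap[addr]
    free(addr)                         # old buffer, still holding p, freed dirty
    return new

val0 = malloc(128)
heap[val0][:] = b"p" * 128             # stand-in for p << BN_BITS2
val0 = bn_expand(val0, 256)            # Montgomery-reduced value needs more room
response = malloc(128)                 # e.g. the Heartbleed response buffer
print(bytes(heap[response][:8]))       # b'pppppppp': p is readable again
```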

Since it is difficult to find statically which BigNums may be extended and leaked, I instrumented OpenSSL and experimented a bit. It turned out that a shifted version of the Montgomery representation of p will also be freed at the leaky point, but that only happens during the first RSA exponentiation, when the Montgomery context is initialised, so it lives at a low memory address and I was unable to locate it in any collected packets.

The OpenSSL team has been notified about the above leak. To be fair, this is not exactly a security bug in itself, as OpenSSL was never explicitly designed to prevent sensitive data from leaking to the heap.

I’d like to thank Joseph Bonneau for his useful comments and proofreading.

Current state of anonymous email usability

As part of another project, I needed to demonstrate how the various user-interface options for sending anonymous email through Mixmaster appeared to the email sender. This is very difficult to explain in words, so I recorded some screencasts. The tools I used were the Mixmaster command line tool, the Mutt email client with Mixmaster plugin, QuickSilver Lite, and finally a web-based interface.

The project is now over, but in case these screencasts are of wider interest, I’ve put them on YouTube.

Overall, the usability of Mixmaster is not great. All of the secure options are difficult to configure and use (QuickSilver Lite is probably the best), emails take a long time to be sent, recipients of anonymous email can’t send replies, and there is a high chance that the email will be dropped en route.
