Category Archives: Hardware & signals

Electrical engineering aspects of computer security: tamper resistance, eavesdropping, signal processing, etc.

Job ad: post-doctoral researcher in security, operating systems, computer architecture

We are pleased to announce a job opening at the University of Cambridge Computer Laboratory for a post-doctoral researcher working in the areas of security, operating systems, and computer architecture.

Research Associate
University of Cambridge – Faculty of Computer Science & Technology

Salary: £27,428 – £35,788 pa
The funds for this post are available for one year.

We are seeking a Post-doctoral Research Associate to join the CTSRD Project, which is investigating fundamental improvements to CPU architecture, operating system (OS), and programming language structure in support of computer security. The CTSRD Project is a collaboration between the University of Cambridge and SRI International, and part of the DARPA CRASH research programme on clean-slate computer system design.

This position will be an integral part of an international team of researchers spanning multiple institutions across academia and industry. The successful candidate will contribute to low-level aspects of system software: compilers, language run-times, and OS kernels. Responsibilities will include researching the application of novel dynamic techniques to C-language operating systems and applications, including adaptation of the FreeBSD kernel and LLVM compiler suite, and measurement of the resulting system.

An ideal candidate will hold (or be close to finishing) a PhD in Computer Science, Mathematics, or similar, with a strong background in low-level system software development. This should include at least one of: strong kernel development experience (FreeBSD preferred; Linux acceptable) or compiler internals experience (LLVM preferred; gcc acceptable). Strong experience with the C programming language is critical. Some background in computer security is also recommended.

Candidates must be able to provide evidence of relevant work demonstrated by a research publication track record or industrial experience. Good interpersonal and organisational skills and the ability to work in a team are also essential. This post is intended to be filled as soon as practically possible after the closing date.

Applications should include:

  • Curriculum Vitae
  • Brief statement of the particular contribution you would make to the project
  • A completed form CHRIS6

Completed applications should be sent by post to: Personnel-Admin, Computer Laboratory, William Gates Building, JJ Thomson Avenue, Cambridge, CB3 0FD, or by email to: personnel-admin@cl.cam.ac.uk

Quote Reference: NR10692
Closing Date: 10 January 2012

The University values diversity and is committed to equality of opportunity.

Trusted Computing 2.0

There seems to be an attempt to revive the “Trusted Computing” agenda. The vehicle this time is UEFI, which sets the standards for the firmware that is replacing the PC BIOS. Proposed changes to the UEFI firmware spec would enable (in fact require) next-generation PC firmware to boot only images signed by a keychain rooted in keys built into the PC. I hear that Microsoft (and others) are pushing for this to be mandatory, so that it cannot be disabled by the user, and that it would be required for OS badging. There are some technical details here and here, and comment here.
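
To make the proposed mechanism concrete, here is a toy sketch of the boot policy. It is illustrative only: real UEFI secure boot uses X.509 certificate chains and public-key signatures, whereas this stand-in models signing with an HMAC so that it runs with nothing but the Python standard library.

```python
# Toy model of the signed-boot policy described above. Real UEFI
# secure boot uses certificate chains and RSA signatures; HMAC is
# used here purely so the sketch is self-contained and runnable.
import hashlib
import hmac

PLATFORM_KEY = b"key embedded in the PC at manufacture"

def sign(image):
    """Stand-in for the platform vendor signing an OS loader image."""
    return hmac.new(PLATFORM_KEY, image, hashlib.sha256).digest()

def firmware_may_boot(image, signature):
    """The proposed policy: boot only images whose signature is
    rooted in a key built into the platform; anything else won't run."""
    return hmac.compare_digest(sign(image), signature)

vendor_os = b"OS loader blessed by the platform vendor"
user_os   = b"kernel the user compiled themselves"

print(firmware_may_boot(vendor_os, sign(vendor_os)))  # True: boots
print(firmware_may_boot(user_os, b"\x00" * 32))       # False: locked out
```

If the spec also prevents the user from adding their own keys or disabling the check, the second case is precisely the lock-out for Linux and FreeBSD described below.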

These issues last arose in 2003, when we fought back with the Trusted Computing FAQ and economic analysis. That initiative petered out after widespread opposition. This time round the effects could be even worse, as “unauthorised” operating systems like Linux and FreeBSD just won’t run at all. (On an old-fashioned Trusted Computing platform you could at least run Linux – it just couldn’t get at the keys for Windows Media Player.)

The extension of Microsoft’s OS monopoly to hardware would be a disaster, with increased lock-in, decreased consumer choice and lack of space to innovate. It is clearly unlawful and must not succeed.

Make noise and whisper: a solution to relay attacks

About a month ago, at the Security Protocols Workshop, I presented a new idea for detecting relay attacks, co-developed with Frank Stajano.

The idea relies on having a trusted box (which we call the T-Box, as in the image below) between the physical interfaces of the two communicating parties. The T-Box accepts two inputs (one from each party) and provides one output (seen by both parties). It ensures that neither party can determine the complete input of the other.

T-Box

Therefore, by connecting two instances of the T-Box together (as happens in a relay attack), the message from one end to the other (between Alice and Bob in the image above) gets distorted twice as much as it would over a direct connection. That’s the basic idea.

One important question is how the T-Box should operate on the inputs so that a relay attack becomes detectable. In the paper we describe two example implementations based on a bi-directional channel (such as the one between a smart card and a terminal). To help the reader understand these examples and judge the usefulness of our idea, Mike Bond and I have created a Python simulation. The simulation lets you choose the type of T-Box implementation and a direct or relayed connection, as well as other parameters including the length of the anti-relay data stream and the detection threshold.
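
To give a flavour of the detection logic, here is a toy model (a deliberate simplification for illustration, not either of the implementations from the paper or the linked simulation): each T-Box outputs the XOR of the two inputs and flips each output bit with probability p. A relayed connection passes through two T-Boxes, so its error rate rises from p to roughly 2p(1-p), which a simple threshold can separate.

```python
import random

def tbox(a_bits, b_bits, p=0.2):
    """Toy T-Box: output a XOR b for each bit pair, flipped with
    probability p, so neither party can recover the other's exact
    input from what it observes."""
    return [(a ^ b) ^ (random.random() < p)
            for a, b in zip(a_bits, b_bits)]

def error_rate(relayed, n=10_000, p=0.2):
    a = [random.getrandbits(1) for _ in range(n)]
    b = [random.getrandbits(1) for _ in range(n)]
    out = tbox(a, b, p)
    if relayed:
        # A relay attack chains two T-Boxes: the stream is distorted again.
        out = [o ^ (random.random() < p) for o in out]
    # Each party knows its own input, so together they can compare the
    # observed output against the expected a XOR b.
    return sum(o != (x ^ y) for o, x, y in zip(out, a, b)) / n

random.seed(0)
threshold = 0.26                     # between p and 2p(1-p)
for relayed in (False, True):
    r = error_rate(relayed)
    verdict = "relay detected" if r > threshold else "looks direct"
    print(f"relayed={relayed}: error rate {r:.3f} -> {verdict}")
```

With p = 0.2, a direct run measures an error rate of about 0.20 and a relayed one about 0.32, so a threshold of 0.26 separates them reliably for long enough streams.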

In these two implementations we restricted ourselves to making the T-Box part of the communication channel. The advantage is that we don’t rely on either party to provide the T-Box, since it is created automatically by communicating over the physical channel. The disadvantage is that a more powerful attacker, able to sample the line at twice the speed, can overcome our T-Box solution.

The relay attack can be used against many applications, including all smart-card-based payments. Several ideas for detecting relay attacks already exist, including distance bounding. Our idea offers a new approach alongside these existing methods, and we hope in the future to find a practical implementation of our solutions, or a good scenario for using a physical T-Box, which should not be affected by a powerful attacker.

The Smart Card Detective: a hand-held EMV interceptor

During my MPhil in the Computer Lab (supervised by Markus Kuhn) I developed a card-sized device, named the Smart Card Detective (SCD for short), that can monitor Chip and PIN transactions. The main goal of the SCD was to offer a trusted display to anyone using a credit card, to avoid scams such as tampered terminals which show one amount on their screen but debit the card for another (see this paper by Saar Drimer and Steven Murdoch). The final result, however, is a more general device, which can be used to analyse and modify any part of an EMV transaction (EMV being the protocol used by Chip and PIN cards).

Using the SCD we have successfully shown how the relay attack can be mitigated by showing the real amount on the trusted display. We have also tested the No PIN vulnerability (see the paper by Murdoch et al.) with the SCD. A report on this was broadcast on Canal+ (video now available here).

After the “Chip and PIN is broken” paper was published, some counter-arguments pointed to the difficulty of setting up the attack. The SCD also shows that such assumptions are often incorrect.

More details on the SCD are in my MPhil thesis, available here. Importantly, the software is open source and, along with the hardware schematics, can be found on the project’s page. The aim is to make the SCD a useful tool for EMV research, so that other problems can be found and fixed.

Thanks to Saar Drimer, Mike Bond, Steven Murdoch and Sergei Skorobogatov for their help with this project. Thanks also to Frank Stajano and Ross Anderson for their suggestions.

Encoding integers in the EMV protocol

On the 1st of January 2010, many German bank customers found that their banking smart cards had stopped working. Details of why are still unclear, but indications are that the cards believed the date was 2016, rather than 2010, and so refused to process transactions apparently past their expiry dates. This problem could turn out to be quite expensive for the cards’ manufacturer, Gemalto: their shares dropped almost 4%, and they have booked a €10m charge to handle the consequences.

These cards implement the EMV protocol (the same one used for Chip and PIN in the UK). The card is sent the current date in 3-byte YYMMDD binary-coded decimal (BCD) format, i.e. “100101” for 1 January 2010. If, however, this is interpreted as hexadecimal, the card will think the year is 2016 (in hexadecimal, 1 January 2010 should actually have been “0a0101”). Since the digits 0–9 encode the same values in both BCD and hexadecimal, we can see why this problem only occurred in 2010*.
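
The confusion is easy to reproduce in a few lines (illustrative Python, not the card’s actual code):

```python
def bcd_to_int(byte):
    """Decode one BCD byte: high nibble is tens, low nibble is units."""
    return (byte >> 4) * 10 + (byte & 0x0F)

date = bytes.fromhex("100101")        # YYMMDD in BCD: 1 January 2010

print([bcd_to_int(b) for b in date])  # [10, 1, 1] -> 2010-01-01 (correct)
print(list(date))                     # [16, 1, 1] -> "2016"-01-01 (the bug)
```

The year byte 0x10 decodes to 10 in BCD but to 16 as a plain binary integer, which is why the two readings agreed right up until the end of 2009.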

In one sense, this looks like a foolish error, and should have been caught in testing. However, before criticizing too harshly, one should remember that EMV is almost impossible to implement perfectly. I have written a fairly complete implementation of the protocol and frequently find edge cases which are insufficiently documented, which makes handling them error-prone. Not only is the specification vague, it is also long: the first public version in 1996 was 201 pages, and it had grown to 765 pages by 2008. Moreover, much of the complexity is unnecessary. In this article I will give just one example of this: the fact that there are nine different ways to encode integers.

Continue reading Encoding integers in the EMV protocol

Interview with Steven Murdoch on Finextra

Today Finextra, a financial technology news website, published a video interview with me, discussing my research on banks’ use of card readers for online banking, which was recently featured on TV.

In this interview, I discuss some of the more technical aspects of the attacks on card readers, including the one demonstrated on TV (which requires compromising a Chip & PIN terminal), as well as others which instead require that the victim’s PC be compromised, but which can be carried out on a larger scale.

I also compare the approach taken by the banking community to protocol design with that of the Internet community. Financial organizations typically develop protocols internally, so these receive public scrutiny late in deployment, if at all. In contrast, Internet protocols are commonly first discussed within industry and academia, then the specification is made public, and only then are they implemented. As a consequence, vulnerabilities in banking security systems are often more expensive to fix.

I also discuss some of the non-technical design decisions involved in deploying security technology. Specifically, designs need to take into account risk analysis, psychology and usability, not just cryptography. Organizational structures also need to incentivize security: the groups who design security mechanisms should be held responsible for their failures, and the structures should discourage knowledge of security failings from being hidden from management. If necessary, a separate penetration-testing team should report directly to board level.

Finally I mention one good design principle for security protocols: “make everything as simple as possible, but not simpler”.

The video (7 minutes) can be found below, and is also on the Finextra website.

TV coverage of online banking card-reader vulnerabilities

This evening (Monday 26th October 2009, at 19:30 UTC), BBC Inside Out will show Saar Drimer and me demonstrating how the smart card readers being issued in the UK to authenticate online banking transactions can be circumvented. The programme will be broadcast on BBC One, but only in the East of England and Cambridgeshire; it should, however, also be available on iPlayer.

In this programme, we demonstrate how a tampered Chip & PIN terminal could collect an authentication code for Barclays online banking, while a customer thinks they are buying a sandwich. The criminal could then, at their leisure, use this code and the customer’s membership number to fraudulently transfer up to £10,000.

Similar attacks are possible against all the other banks which use these card readers (known as CAP devices) for online banking. We think this scenario is particularly practical for targeted attacks, since it circumvents any anti-malware protection, but criminals have already been seen using banking trojans to attack CAP on a wide scale.

Further information can be found on the BBC online feature, and our research summary. We have also published an academic paper on the topic, which was presented at Financial Cryptography 2009.

Update (2009-10-27): The full programme is now on BBC iPlayer for the next 6 days, and the segment can also be found on YouTube.

BBC Inside Out, Monday 26th October 2009, 19:30, BBC One (East)

Tuning in to random numbers

Tomorrow at Cryptographic Hardware and Embedded Systems 2009 I’m going to be presenting a frequency injection attack on random number generators formed from ring oscillators.

Random numbers are a vital part of cryptography — if predictable numbers are being used an attacker may be able to read secret messages, impersonate either party, or replay transactions. In addition, many countermeasures to attacks such as Differential Power Analysis involve adding randomness to operations — without the randomness algorithms such as RSA become susceptible.

Creating unpredictable random numbers in a predictable computer involves measuring some kind of physical process. Examples include circuit noise, radioactive decay and timing variations. One method commonly used in low-cost circuits such as smartcards is measuring the jitter from free-running ring oscillators. The ring oscillators’ frequencies depend on environmental factors such as voltage and temperature, and by having many independent ring oscillators we can harvest the small timing differences between them (jitter).
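
As a toy software model of the principle (an illustration, not a real hardware design): each oscillator’s period wanders slightly from cycle to cycle, so the parity of how many cycles fit into a fixed sampling window is hard to predict, and XORing several independent oscillators strengthens the result.

```python
import random

def ring_oscillator_bit(period, jitter, window):
    """Count the cycles of one free-running oscillator during a
    sampling window; per-cycle Gaussian jitter accumulates, making
    the count's low-order bit unpredictable."""
    t, cycles = 0.0, 0
    while t < window:
        t += random.gauss(period, jitter)
        cycles += 1
    return cycles & 1

def trng_bit(n_oscillators=8):
    """XOR several independent oscillators so their jitter adds up."""
    bit = 0
    for i in range(n_oscillators):
        bit ^= ring_oscillator_bit(period=1.0 + 0.013 * i,
                                   jitter=0.05, window=500.0)
    return bit

print([trng_bit() for _ in range(32)])
```

In a real device the output is only unpredictable while the oscillators stay independent, which is exactly the property the attack described next destroys.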

But what happens if they aren’t independent? In particular, what happens if the circuit is faced with an attacker who can manipulate the outside of the system?

The attack turns out to be fairly straightforward. An effect called injection locking, known since 1665, describes what happens when two oscillators are very lightly coupled. For example, two pendulum clocks mounted on the same wall tend to synchronise the swing of their pendula through small vibrations transmitted through the wall.

In an electronic circuit, the attacker can inject a signal to force the ring oscillators to injection-lock. The simplest way involves forcing a frequency onto the power supply from which the ring oscillators are powered. If there are any imbalances in the circuit, we suggest that these make the ring more susceptible to injection locking at that point. We therefore examined the effects of power-supply injection, and can envisage a similar attack by irradiation with electromagnetic fields.
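
The locking behaviour itself is easy to reproduce numerically. In the textbook Adler model (a standard simplification, not our measurement setup), the phase difference phi between the oscillator and the injected tone obeys d(phi)/dt = delta_w - K*sin(phi): when the coupling K exceeds the frequency offset delta_w, phi settles to a fixed value and the oscillator locks; otherwise it keeps slipping.

```python
import math

def final_phase(delta_w, K, dt=1e-3, steps=20_000):
    """Integrate Adler's equation d(phi)/dt = delta_w - K*sin(phi)."""
    phi = 0.0
    for _ in range(steps):
        phi += (delta_w - K * math.sin(phi)) * dt
    return phi

print(final_phase(delta_w=1.0, K=2.0))  # ~0.52 = asin(1/2): locked
print(final_phase(delta_w=3.0, K=2.0))  # ~45 rad and growing: no lock
```

Once locked, the oscillators stop drifting independently of each other, so the jitter that the generator harvests largely disappears.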

And it works surprisingly well. We tried an old version of a secure microcontroller that has been used in banking ATMs (and is still recommended for new ones). For the 32 random bits that are used in an ATM transaction, we managed to reduce the number of possibilities from 4 billion to about 225.

So if an attacker has access to your card and PIN, in a modified shop terminal for example, he can record some ATM transactions. He then needs to take a fake card to an ATM containing this microcontroller. On average he’ll need to record 15 transactions (roughly the square root of 225) on the card and try 15 transactions at the ATM before one of the recorded values comes up and he can steal the money. This number may be small enough not to set off alarms at the bank. The customer’s card and PIN were used for the transaction, but at a time when he was nowhere near an ATM.
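
The square-root figure is a birthday-style collision argument, easy to sanity-check with the numbers from this post (a back-of-envelope model assuming the roughly 225 remaining possibilities are uniform):

```python
import math
import random

N, k = 225, 15    # remaining nonce possibilities; recorded/attempted txns

# The fraud works once one of the k recorded nonces matches one of the
# k the ATM produces: roughly 1 - exp(-k*k/N), about 0.63 here.
print(1 - math.exp(-k * k / N))

random.seed(2)
trials = 20_000
hits = sum(
    bool({random.randrange(N) for _ in range(k)} &
         {random.randrange(N) for _ in range(k)})
    for _ in range(trials))
print(hits / trials)    # the simulation comes out close to the estimate
```

So with 15 recordings and 15 attempts, a match is more likely than not; against the intended 4 billion possibilities it would be hopeless.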

While we looked at power supply injection, the ATM could also be attacked electromagnetically: park a car next to the ATM and emit a 10 GHz signal, amplitude-modulated at the ATM’s vulnerable frequency (1.8 MHz in our example). The 10 GHz carrier will penetrate the ventilation slots but then be filtered away, leaving 1.8 MHz on the power supply. When the car drives away, there’s no evidence that the random numbers were bad, and bad random numbers are very difficult to detect anyway.

We also tried the same attack on an EMV (‘Chip and PIN’) bank card. Before injection, the card failed only one of the 188 tests in the standard NIST suite for random number testing. With injection it failed 160 of 188. While we can’t completely predict the output of the random number generator, some sequences clearly become visible.

So, as ever, designing good random number generators turns out to be a hard problem, not least because the attacker can tamper with your system in more ways than you might expect.

You can find the paper and slides on my website.

Optimised to fail: Card readers for online banking

A number of UK banks are distributing hand-held card readers for authenticating customers, in the hope of stemming the soaring levels of online banking fraud. As the underlying protocol — CAP — is secret, we reverse-engineered the system and discovered a number of security vulnerabilities. Our results have been published as “Optimised to fail: Card readers for online banking”, by Saar Drimer, Steven J. Murdoch, and Ross Anderson.

In the paper, presented today at Financial Cryptography 2009, we discuss the consequences of CAP having been optimised to reduce both the costs to the bank and the amount of typing done by customers. While the principle of CAP — two factor transaction authentication — is sound, the flawed implementation in the UK puts customers at risk of fraud, or worse.

When Chip & PIN was introduced for point-of-sale, the effective liability for fraud was shifted to customers. While the banking code says that customers are not liable unless they were negligent, it is up to the bank to define negligence. In practice, the mere fact that Chip & PIN was used is considered enough. Now that Chip & PIN is used for online banking, we may see a similar reduction of consumer protection.

Further information can be found in the paper and the talk slides.