Category Archives: Hardware & signals

Electrical engineering aspects of computer security: tamper resistance, eavesdropping, signal processing, etc.

Tuning in to random numbers

Tomorrow at Cryptographic Hardware and Embedded Systems 2009 I’m going to be presenting a frequency injection attack on random number generators formed from ring oscillators.

Random numbers are a vital part of cryptography — if predictable numbers are used, an attacker may be able to read secret messages, impersonate either party, or replay transactions. In addition, many countermeasures to attacks such as Differential Power Analysis involve adding randomness to operations; without that randomness, implementations of algorithms such as RSA become susceptible.

To create unpredictable random numbers in a predictable computer involves measuring some kind of physical process. Examples include circuit noise, radioactive decay and timing variations. One method commonly used in low-cost circuits such as smartcards is measuring the jitter from free-running ring oscillators. The ring oscillators’ frequencies depend on environmental factors such as voltage and temperature, and by having many independent ring oscillators we can harvest small timing differences between them (jitter).
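
To make the design concrete, here is a toy Python sketch of a multi-ring TRNG of the kind described above. It is a simulation only, not a description of any particular product, and all the numbers (period, jitter, number of rings, sampling interval) are illustrative assumptions:

```python
import random

def sampled_level(period_ns, jitter_ns, sample_ns):
    """Level (0 or 1) of a free-running ring oscillator at time sample_ns.
    Each half-period is perturbed by Gaussian jitter, so once the
    accumulated jitter exceeds a half-period, the number of edges before
    the sampling instant -- and hence the sampled level -- is unpredictable."""
    t, edges = 0.0, 0
    while t < sample_ns:
        t += random.gauss(period_ns / 2, jitter_ns)
        edges += 1
    return edges & 1

def trng_bit(n_rings=8, sample_ns=10_000):
    """XOR the levels of several independent rings, as in a low-cost
    multi-ring TRNG: any single ring's bias is masked by the others."""
    bit = 0
    for _ in range(n_rings):
        bit ^= sampled_level(period_ns=3.0, jitter_ns=0.05, sample_ns=sample_ns)
    return bit

print(''.join(str(trng_bit()) for _ in range(64)))
```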

But what happens if they aren’t independent? In particular, what happens if the circuit is faced with an attacker who can manipulate the outside of the system?

The attack turns out to be fairly straightforward. An effect called injection locking, first observed in 1665, occurs when two oscillators are very lightly coupled. For example, two pendulum clocks mounted on the same wall tend to synchronise the swing of their pendula through small vibrations transmitted through the wall.

In an electronic circuit, the attacker can inject a signal to force the ring oscillators to injection-lock. The simplest way is to force a frequency onto the power supply from which the ring oscillators are powered. If there are any imbalances in the circuit, we suggest these make the ring more susceptible to injection locking at particular points. We examined the effects of power supply injection, and we can envisage a similar attack by irradiating the circuit with electromagnetic fields.
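
The classic first-order model of this effect is Adler's equation. The following toy simulation is my own illustration of that model, not the analysis from the paper, and the frequencies are arbitrary. It shows the key behaviour: when the frequency offset between the injected tone and the oscillator lies within the lock range, the phase difference settles to a constant, meaning the oscillator has stopped drifting freely:

```python
import math

def final_phase_difference(delta_f_hz, lock_range_hz, t_end=5e-3, dt=1e-7):
    """Integrate Adler's equation for the phase difference phi between
    a free-running oscillator and an injected tone:
        dphi/dt = 2*pi*delta_f - 2*pi*lock_range*sin(phi)
    If |delta_f| < lock_range, phi converges to arcsin(delta_f/lock_range):
    the oscillator has injection-locked to the attacker's signal."""
    phi = 1.0  # arbitrary initial phase offset, radians
    for _ in range(int(t_end / dt)):
        phi += dt * 2 * math.pi * (delta_f_hz - lock_range_hz * math.sin(phi))
    return phi

print(final_phase_difference(2e3, 10e3))   # inside lock range: ~0.2 rad, locked
print(final_phase_difference(50e3, 10e3))  # outside lock range: phase keeps slipping
```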

And it works surprisingly well. We tried an old version of a secure microcontroller that has been used in banking ATMs (and is still recommended for new ones). For the 32 random bits that are used in an ATM transaction, we managed to reduce the number of possibilities from 4 billion to about 225.

So if an attacker has access to your card and PIN, in a modified shop terminal for example, he can record some ATM transactions. Then he needs to take a fake card to an ATM containing this microcontroller. On average he’ll need to record 15 transactions (the square root of 225) on the card and try 15 transactions at the ATM before one of his attempts hits a recorded value and he can steal the money. This number may be small enough not to set off alarms at the bank. The customer’s card and PIN were used for the transaction, but at a time when the customer was nowhere near an ATM.
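
The square-root trade-off is easy to check numerically. In this sketch I assume, for illustration only, that the weakened generator emits one of 225 equally likely values per transaction; recording 15 and attempting 15 then succeeds in roughly 60% of runs, i.e. about one expected hit:

```python
import random

N = 225          # possible values after frequency injection
k = m = 15       # transactions recorded on the card / tried at the ATM
trials, hits = 10_000, 0
for _ in range(trials):
    recorded = {random.randrange(N) for _ in range(k)}
    # the attack succeeds if any ATM attempt replays a recorded value
    if any(random.randrange(N) in recorded for _ in range(m)):
        hits += 1
print(f"success probability: {hits / trials:.2f}")  # roughly 0.6
```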

While we looked at power supply injection, the ATM could also be attacked electromagnetically. Park a car next to the ATM and emit a 10 GHz signal, amplitude modulated by the ATM’s vulnerable frequency (1.8 MHz in our example). The 10 GHz carrier will penetrate the ventilation slots but then be filtered away, leaving 1.8 MHz in the power supply. When the car drives away there’s no evidence that the random numbers were bad – and bad random numbers are very difficult to detect anyway.

We also tried the same attack on an EMV (‘Chip and PIN’) bank card. Before injection, the card failed only one of the 188 tests in the standard NIST suite for random number testing. With injection it failed 160 of 188. While we can’t completely predict the random number generator, there are some sequences that can be seen.
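
For a flavour of what such testing involves, here is the simplest test in the suite, the frequency (monobit) test, reconstructed from its definition in NIST SP 800-22. The bit strings below are made-up examples, not data from the card:

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: for unbiased bits,
    s = |#ones - #zeros| / sqrt(n) is approximately half-normal, and
    the p-value is erfc(s / sqrt(2)). A p-value below 0.01 fails."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits)) / math.sqrt(n)
    return math.erfc(s / math.sqrt(2))

print(monobit_p_value([1] * 600 + [0] * 400))        # heavily biased: ~0, fails
print(monobit_p_value([i % 2 for i in range(1000)])) # balanced: 1.0, passes this test
```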

So, as ever, designing good random number generators turns out to be a hard problem, not least because the attacker can tamper with your system in more ways than you might expect.

You can find the paper and slides on my website.

Optimised to fail: Card readers for online banking

A number of UK banks are distributing hand-held card readers for authenticating customers, in the hope of stemming the soaring levels of online banking fraud. As the underlying protocol — CAP — is secret, we reverse-engineered the system and discovered a number of security vulnerabilities. Our results have been published as “Optimised to fail: Card readers for online banking”, by Saar Drimer, Steven J. Murdoch, and Ross Anderson.

In the paper, presented today at Financial Cryptography 2009, we discuss the consequences of CAP having been optimised to reduce both the costs to the bank and the amount of typing done by customers. While the principle of CAP — two factor transaction authentication — is sound, the flawed implementation in the UK puts customers at risk of fraud, or worse.

When Chip & PIN was introduced for point-of-sale, the effective liability for fraud was shifted to customers. While the banking code says that customers are not liable unless they were negligent, it is up to the bank to define negligence. In practice, the mere fact that Chip & PIN was used is considered enough. Now that Chip & PIN is used for online banking, we may see a similar reduction of consumer protection.

Further information can be found in the paper and the talk slides.

When Layers of Abstraction Don’t Get Along: The Difficulty of Fixing Cache Side-Channel Vulnerabilities

(co-authored with Robert Watson)

Recently, our group was treated to a presentation by Ruby Lee of Princeton University, who discussed novel cache architectures which can prevent some cache-based side channel attacks against AES and RSA. The new architecture was fascinating, in particular because it may actually increase cache performance (though this point was spiritedly debated by several systems researchers in attendance). For the security group, though, it raised two interesting and troubling questions. What is the proper defence against side channels due to the processor cache? And why hasn’t such a defence been implemented, despite these attacks being around for years?

Anti-theft Protocols

At last Friday’s Security Group meeting, we talked about security protocols that are intended to deter theft or reduce its consequences, and how they go wrong.

Examples include:

  • GSM mobile phones have an identifier for the phone (separate from the identifier for the user) that can be blacklisted when the phone is stolen.
  • Some car radios will stop working when the battery is disconnected, and only start working again when a numeric code is entered. This is intended to deter theft of the radio.
  • In Windows Vista, BitLocker can be used to encrypt files. One of the intended applications is that if someone steals your laptop, it will be difficult for them to gain access to your encrypted files.

Ross told a story of what happened when he needed to disconnect the battery on his car: the radio stopped working, and the code he had been given to reactivate it didn’t work – it was the wrong code.
Ross argues that these reactivation codes are unnecessary, because other measures taken by the car manufacturers – such as making radios non-standard sizes, and hence not refittable in other car models – have made them redundant.

I described how the motherboard on a laptop had recently needed to be replaced. The motherboard contains the TPM chip, which holds the encryption keys needed to decrypt files protected with BitLocker. If you replace the motherboard, the files on your hard disk will become unreadable, even if the disk is physically OK. Domain-joined Vista machines can be configured so that a sysadmin somewhere within your organization is able to recover the keys when this happens.

Both of these situations suffer from classic usability problems: the recovery procedures are invoked rarely (so users may not know what they’re supposed to do), and, if your system is configured incorrectly, you only find out when it is too late: you key in the code to your radio and it remains a doorstop; the admin you hoped was escrowing your keys turns out not to have the private key corresponding to the public key you were encrypting under (or, more subtly: the person with the authority to ask for your laptop’s key to be recovered is not you, because the appropriate admin has the wrong name for the laptop’s owner in their database).

I also described what happens when an Xbox 360 is stolen. When you buy Xbox downloadable content, you buy two licenses: one that’s valid on any Xbox, as long as you’re logged in to Xbox Live; and one that’s valid on just your Xbox, regardless of who’s logged in. If a burglar steals your Xbox, and you buy a new one, you need to get another license of the second type (for all the other people in your household who make use of it). The software makes this awkward, because it knows that you already have a license of the first type, and assumes that you couldn’t possibly want to buy it again. The work-around is to get a new email address, a new Microsoft Live account, and a new gamertag, and use these to repurchase the license. You can’t just change the gamertag, because Xbox Live doesn’t let the same Microsoft Live account have two gamertags. And yes, I know, your buddies in the MMORPG you were playing know you by your gamertag, so you don’t want to change it.

An improved clock-skew measurement technique for revealing hidden services

In 2006 I published a paper on remotely estimating a computer’s temperature, based on clock skew. I showed that by inducing load on a Tor hidden service, an attacker could cause measurable changes in clock skew and so allow the computer hosting the service to be re-identified. However, it takes a very long time (hours to days) to obtain a sufficiently accurate clock-skew estimate, even taking a sample every few seconds. If the available clock source is less granular than the 1 kHz TCP timestamp clock I used, it takes longer still.

This limits the attack, since in many cases TCP timestamps may be unavailable. In particular, Tor hidden services operate at the TCP layer, so all TCP and IP headers are stripped. If an attacker wants to estimate clock skew over the hidden service channel, the only directly available clock source may be the 1 Hz HTTP timestamp. The quantization noise in this case is three orders of magnitude above the TCP timestamp case, making the approach I used in the paper effectively infeasible.

While visiting Cambridge in summer 2007, Sebastian Zander developed an improved clock-skew measurement technique which dramatically reduces the noise of measurements from low-frequency clocks. The basic idea, shown below, is to only request timestamps very close to a clock transition, where the quantization noise is lowest. This requires the attacker to first lock on to the phase of the clock, then keep tracking it even when measurements are distorted by network jitter.

(Figure: synchronized vs. random sampling of a low-frequency clock)
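
The following toy simulation is my own illustration of the idea, with made-up numbers: a 1 Hz remote clock running 50 ppm fast, observed for an hour. Here each transition is found by bisection against a noiseless clock, whereas the real technique must track the phase through network jitter; the point is simply that sampling at transitions shrinks the quantization error from a whole second to the tracking tolerance:

```python
import random

SKEW, PHASE = 50e-6, 0.3   # remote clock error and phase, unknown to attacker

def remote_stamp(t):
    """1 Hz timestamp (like an HTTP Date: header) of a clock running
    SKEW fast; quantization to whole seconds hides the skew."""
    return int((1 + SKEW) * t + PHASE)

def slope(points):
    """Least-squares slope of (time, timestamp - time): estimates the skew."""
    xs, ys = zip(*points)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def find_transition(lo, hi):
    """Bisect for an instant where the remote timestamp increments."""
    stamp = remote_stamp(lo)
    while hi - lo > 1e-4:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if remote_stamp(mid) == stamp else (lo, mid)
    return hi

# Random sampling: quantization noise is uniform over a whole second.
naive = [(t, remote_stamp(t) - t)
         for t in (random.uniform(0, 3600) for _ in range(200))]
# Synchronized sampling: noise is bounded by the tracking tolerance.
synced = [(t, remote_stamp(t) - t)
          for t in (find_transition(k, k + 1.5) for k in range(0, 3600, 60))]

print(f"random sampling:       {slope(naive):+.2e}")
print(f"synchronized sampling: {slope(synced):+.2e}   (true skew {SKEW:+.2e})")
```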

Sebastian and I wrote a paper — An Improved Clock-skew Measurement Technique for Revealing Hidden Services — describing this technique, and showing results from testing it on a Tor hidden service installed on PlanetLab. The measurements show a large improvement over the original paper, with two orders of magnitude lower noise for low-frequency clocks (like the HTTP case). This approach will allow previous attacks to be executed faster, and make previously infeasible attacks possible.

The paper will be presented at the USENIX Security Symposium, San Jose, CA, US, 28 July – 1 August 2008.

"Covert channel vulnerabilities in anonymity systems" wins best thesis award

My PhD thesis “Covert channel vulnerabilities in anonymity systems” has been awarded this year’s best thesis prize by the ERCIM security and trust management working group. The announcement can be found on the working group homepage and I’ve been invited to give a talk at their upcoming workshop, STM 08, Trondheim, Norway, 16–17 June 2008.

Update 2008-07-07: ERCIM have also published a press release.

Chip & PIN terminals vulnerable to simple attacks

Steven J. Murdoch, Ross Anderson and I looked at how well PIN entry devices (PEDs) protect cardholder data. Our paper will be published at the IEEE Symposium on Security and Privacy in May, though an extended version is available as a technical report. A segment about this work will appear on BBC Two’s Newsnight at 22:30 tonight.

We were able to demonstrate that two of the most popular PEDs in the UK — the Ingenico i3300 and Dione Xtreme — are vulnerable to a “tapping attack” using a paper clip, a needle and a small recording device. This allows us to record the data exchanged between the card and the PED’s processor without triggering tamper-proofing mechanisms, in clear violation of their supposed security properties. This attack can capture the card’s PIN because UK banks have opted to issue cheaper cards that do not use asymmetric cryptography to encrypt data between the card and the PED.

(Figures: the Ingenico attack; the Dione attack)

In addition to the PIN, as part of the transaction, the PED reads an exact replica of the magnetic stripe (for backwards compatibility). Thus, if an attacker can tap the data line between the card and the PED’s processor, he gets all the information needed to create a magnetic stripe card and withdraw money from an ATM that does not read the chip.

We also found that the certification process of these PEDs is flawed. APACS has been effectively approving PEDs for the UK market as Common Criteria (CC) Evaluated, which does not equal Common Criteria Certified (no PEDs are CC Certified). What APACS means by “Evaluated” is that an approved lab has performed the “evaluation”, but unlike CC Certified products, the reports are kept secret, and governmental Certification Bodies do not do quality control.

This process causes a race to the bottom, with PED developers able to choose labs that will approve rather than improve PEDs, at the lowest price. Clearly, the certification process needs to be more open to the cardholders, who suffer from the fraud. It also needs to be fixed such that defective devices are refused certification.

We notified APACS, Visa, and the PED manufacturers of our results in mid-November 2007, and responses arrived only in the last week or so (Visa chose to respond only a few minutes ago!). The responses are the usual claims: that our demonstrations can only be done in lab conditions, that criminals are not that sophisticated, that the threat to cardholder data is minimal, and that their “layers of security” will detect fraud. There is no evidence to support these claims. APACS state that the PEDs we examined will not be de-certified or removed, and the same goes for the labs that certified them; APACS would not even tell us which labs those are.

The threat is very real: tampered PEDs have already been used for fraud. See our press release and FAQ for basic points and the technical report where we discuss the work in detail.

Update 1 (2008-03-09): The segment of Newsnight featuring our contribution has been posted to Google Video.

Update 2 (2008-03-21): If the link above doesn’t work, try YouTube: part 1 and part 2.

Relay attacks on card payment: vulnerabilities and defences

At this year’s Chaos Communication Congress (24C3), I presented some work I’ve been doing with Saar Drimer: implementing a smart card relay attack and demonstrating that it can be prevented by distance bounding protocols. My talk (abstract) was filmed and the video can be found below. For more information, we produced a webpage and the details can be found in our paper.
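
For readers unfamiliar with distance bounding, here is a toy model of the idea, in the spirit of the Hancke-Kuhn protocol. This sketch, including the use of HMAC as the PRF, the key and nonce values, and all timing figures, is my own illustration rather than the implementation in the paper. The terminal times each single-bit challenge-response round, and relaying to a distant card adds round-trip delay that busts the time bound:

```python
import hashlib, hmac, random

C = 3e8  # signal propagation speed, m/s

def registers(key, nonce, n):
    """Before the timed phase, both sides derive two n-bit response
    registers from a shared key and a fresh nonce (HMAC-SHA256 stands
    in for the protocol's PRF here)."""
    d = hmac.new(key, nonce, hashlib.sha256).digest()
    bits = [(d[i // 8] >> (i % 8)) & 1 for i in range(2 * n)]
    return bits[:n], bits[n:]

def distance_bounding(distance_m, relay_delay_s=0.0, n=64, bound_m=5.0):
    """Simulated rapid bit exchange: accept only if every response bit
    is correct AND arrives within the round-trip time a signal needs to
    cover the distance bound twice (about 33 ns for 5 m)."""
    key, nonce = b'shared-key', b'fresh-nonce'
    r0, r1 = registers(key, nonce, n)  # card's precomputed answers
    v0, v1 = registers(key, nonce, n)  # verifier's expected answers
    time_bound = 2 * bound_m / C
    for i in range(n):
        c = random.getrandbits(1)                 # timed one-bit challenge
        response = (r1 if c else r0)[i]           # card replies instantly
        rtt = 2 * distance_m / C + relay_delay_s  # modelled round trip
        if rtt > time_bound or response != (v1 if c else v0)[i]:
            return False
    return True

print(distance_bounding(distance_m=0.1))                      # genuine card: True
print(distance_bounding(distance_m=0.1, relay_delay_s=2e-6))  # relayed: False
```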

[ slides (PDF 9.6M) | video (BitTorrent — MPEG4, 106M) ]

Update 2008-01-15:
Liam Tung from ZDNet Australia has written an article on my talk: Bank card attack: Only Martians are safe.

Other highlights from the conference…

Keep your keypads close

On a recent visit to a local supermarket I noticed something new being displayed on the keypad before the transaction starts:

(“Did you know that you can remove the PIN pad to enter your PIN?”)

Picking up the keypad allows the cardholder to align it such that bystanders, or the merchant, cannot observe the PIN as it is entered. On the one hand, this seems sensible (if we assume that the only way to get the PIN is by observation, that no cameras are present, and that even more cardholder liability is the solution for card fraud).

On the other hand, it also makes some attacks easier. Consider, for example, the relay attack we demonstrated earlier this year, where the crook inserts a modified card into the terminal, hoping that the merchant does not ask to examine it. Allowing the cardholder to move the keypad separates the merchant, who could detect the attack, from the transaction. Can I now hide the terminal under my jacket while the transaction is processed? Can I turn my back to the merchant? What if I found a way to tamper with the terminal? Clearly, this would make the process easier for me. We’ve been doing some more work on payment terminals and will hopefully have more to say about it soon.

The dinosaurs of five years ago

A project called NSA@home has been making the rounds. It’s a gem. Stanislaw Skowronek got some old HDTV hardware off eBay and managed to build himself a pre-image brute-force attack machine against SHA-1. The claim is that it can find a pre-image for an 8-character password hash over a 64-character set in about 24 hours.

The key here is that this hardware board uses 15 field programmable gate arrays (FPGAs), which are generic integrated circuits that can perform any logic function within their size limit. Stanislaw reverse engineered the connections between the FPGAs, wrote his own designs, and now has a very powerful processing unit. FPGAs are better than general-purpose CPUs at specific tasks, especially functions that can be divided into many independently-running smaller chunks operating in parallel. Some cryptographic functions are a perfect match: our own Richard Clayton and Mike Bond attacked the DES implementation in the IBM 4758 hardware security module using an FPGA prototyping board; DES was also attacked on the Transmogrifier 2a, an FPGA-based custom hardware platform; more recently, the purpose-built COPACOBANA machine, which uses 120 low-end FPGAs operating in parallel, breaks DES in about 7 days; a proprietary stream cipher on RFID tokens was attacked using 16 commercial FPGA boards operating in parallel; and people are now in the midst of cracking the A5 stream cipher in real time using commercial FPGA modules. The unique development we see with NSA@home is that it uses a defunct piece of hardware.
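
The structure of such a search is simple to show in software. In this sketch (an illustration only: a 3-character lowercase target instead of NSA@home's 8-character, 64-symbol space, and CPU processes standing in for pipelined FPGA hash cores), the keyspace is partitioned into independent chunks that are searched in parallel:

```python
import hashlib
from multiprocessing import Pool

CHARSET = 'abcdefghijklmnopqrstuvwxyz'
TARGET = hashlib.sha1(b'dog').hexdigest()   # illustrative target hash
LENGTH = 3

def candidate(index):
    """Map an integer to a password: the keyspace is just base-26."""
    chars = []
    for _ in range(LENGTH):
        index, r = divmod(index, len(CHARSET))
        chars.append(CHARSET[r])
    return ''.join(chars)

def search_chunk(bounds):
    """Grind through one independent slice of the keyspace."""
    lo, hi = bounds
    for i in range(lo, hi):
        word = candidate(i)
        if hashlib.sha1(word.encode()).hexdigest() == TARGET:
            return word
    return None

if __name__ == '__main__':
    total = len(CHARSET) ** LENGTH           # 17,576 candidates
    step = total // 8
    chunks = [(i, min(i + step, total)) for i in range(0, total, step)]
    with Pool(8) as pool:                    # 8 workers, like 8 hash cores
        print([w for w in pool.map(search_chunk, chunks) if w])  # ['dog']
```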
