Posts filed under 'Hardware & signals'

Jan 9, '08

At this year’s Chaos Communication Congress (24C3), I presented some work I’ve been doing with Saar Drimer: implementing a smart card relay attack and demonstrating that it can be prevented by distance bounding protocols. My talk (abstract) was filmed, and the video can be found below. For more information, see the webpage we produced and the details in our paper.

[ slides (PDF 9.6M) | video (BitTorrent -- MPEG4, 106M) ]

Update 2008-01-15:
Liam Tung from ZDNet Australia has written an article on my talk: Bank card attack: Only Martians are safe.

Other highlights from the conference…

Sep 15, '07

On a recent visit to a local supermarket I noticed something new being displayed on the keypad before the transaction starts:

(“Did you know that you can remove the PIN pad to enter your PIN?”)

Picking up the keypad allows the cardholder to align it so that bystanders, or the merchant, cannot observe the PIN as it is entered. On the one hand, this seems sensible (if we assume that the only way to get the PIN is by observation, that no cameras are present, and that even more cardholder liability is the solution to card fraud). On the other hand, it also makes some attacks easier. Take, for example, the relay attack we demonstrated earlier this year, where the crook inserts a modified card into the terminal, hoping that the merchant does not ask to examine it. Allowing the cardholder to move the keypad separates the merchant, who could detect the attack, from the transaction. Can I now hide the terminal under my jacket while the transaction is processed? Can I turn my back to the merchant? What if I found a way to tamper with the terminal? Clearly, this would make the process easier for me. We have been doing some more work on payment terminals and hope to have more to say about it soon.


Sep 2, '07

A project called NSA@home has been making the rounds. It’s a gem. Stanislaw Skowronek got some old HDTV hardware off eBay and managed to build himself a brute-force pre-image attack machine against SHA-1. The claim is that it can find a pre-image for an 8-character password hash, drawn from a 64-character set, in about 24 hours.
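To make the scale of that task concrete, here is a minimal, purely illustrative Python sketch of this kind of exhaustive pre-image search (the 64-character set and the target below are my own stand-ins, not details of NSA@home):

```python
import hashlib
from itertools import product

# An illustrative 64-character set; NSA@home's actual set is not specified here
CHARSET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789./"

def find_preimage(target_digest, length):
    """Exhaustively try every string of the given length until one
    hashes (SHA-1) to the target digest."""
    for candidate in product(CHARSET, repeat=length):
        pw = "".join(candidate).encode()
        if hashlib.sha1(pw).digest() == target_digest:
            return pw
    return None

# Demo with 4 characters: 64**4 (about 17 million candidates) is feasible
# in software, whereas 64**8 is not
target = hashlib.sha1(b"abba").digest()
print(find_preimage(target, 4))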

The key here is that this hardware board uses 15 field programmable gate arrays (FPGAs), which are generic integrated circuits that can perform any logic function within their size limit. So, Stanislaw reverse engineered the connections between the FPGAs, wrote his own designs, and now has a very powerful processing unit. FPGAs outperform general-purpose CPUs at specific tasks, especially functions that can be divided into many independently-running smaller chunks operating in parallel. Some cryptographic functions are a perfect match: our own Richard Clayton and Mike Bond attacked the DES implementation in the IBM 4758 hardware security module using an FPGA prototyping board; DES was also attacked on the Transmogrifier 2a, an FPGA-based custom hardware platform; more recently, the purpose-built COPACOBANA machine, which uses 120 low-end FPGAs operating in parallel, can break DES in about 7 days; a proprietary stream cipher on RFID tokens was attacked using 16 commercial FPGA boards operating in parallel; and finally, people are now in the midst of cracking the A5 stream cipher in real time using commercial FPGA modules. The unique development we see with NSA@home is that it uses a defunct piece of hardware.


Aug 8, '07

In May 2007, Saar Drimer and Steven Murdoch posted about “Distance bounding against smartcard relay attacks”. Today their paper won the “Best Student Paper” award at USENIX Security 2007 and their slides are now online. You can read more about this work on the Security Group’s banking security web page.

Steven and Saar at USENIX Security 2007

May 21, '07

Steven Murdoch and I have previously discussed issues concerning the tamper resistance of payment terminals and the susceptibility of Chip & PIN to relay attacks. Basically, the tamper resistance protects the banks but not the customers, who are left to trust whatever device they provide their card and PIN to (the hundreds of different types of terminal do not help here). The problem some customers face is that when fraud happens, they are the ones blamed for negligence, instead of the banks owning up to a faulty system. Exacerbating the problem, it is impossible for customers to prove they have not been negligent with their secrets without the relevant data, which the banks hold but refuse to hand out.


Apr 16, '07

We have recently been implementing an attack on ZigBee communication. The ZigBee chip we have been using works pretty much like any other — it listens on a selected channel and, when a packet is being transmitted, stores the data in an internal buffer. When the whole packet has been received, an interrupt is signalled and the micro-controller can read out the whole packet at once.

What we needed was more direct access to the MAC layer. The very first idea was to find another chip, as we could not do anything at the level of abstraction described. On second thought, we carefully read the datasheet and found that there is an “unbuffered mode” for receiving, as well as transmitting, data. There is a sentence that reads “Un-buffered mode should be used for evaluation / debugging purposes only”, but why not give it a go?

It took a while (the datasheet does not really get the description right, there are basic factual mistakes, and the micro-controller was a bit slower to serve hardware interrupts than expected), but we managed to do what we wanted: get at interesting data before the whole packet has been transmitted.
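For flavour, here is a toy sketch of the difference between the two receive modes (plain Python as a simulation, since the real thing runs on a micro-controller; the frame layout is a simplified stand-in, not the chip’s actual format):

```python
# Toy simulation only: a simplified 802.15.4-style frame delivered byte by
# byte. Field offsets are illustrative stand-ins, not the chip's real layout.
FRAME = bytes([0x41, 0x88, 0x17,   # frame control + sequence number
               0xCD, 0xAB,         # destination PAN ID
               0x34, 0x12])        # destination short address
FRAME += b"...rest of payload..."

def buffered_receive(frame):
    """Buffered mode: the interrupt fires only once the last byte is in,
    so nothing can be done until the whole packet has been received."""
    return frame

def unbuffered_receive(byte_stream, header_len=7):
    """Unbuffered mode: handle each byte as it arrives and react as soon
    as the interesting header fields are complete, mid-transmission."""
    received = bytearray()
    for b in byte_stream:
        received.append(b)
        if len(received) == header_len:
            dest = int.from_bytes(received[5:7], "little")
            print(f"destination 0x{dest:04x} known after {header_len} bytes;"
                  " the rest of the frame is still on the air")
    return bytes(received)

unbuffered_receive(FRAME)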

This was not the first occasion on which debug mode or debug information saved us from defeat when implementing an attack. It made me think a bit.

This sort of approach exactly represents the original meaning of hacking and hackers. It seems that this sort of activity is slowly returning to universities, as more and more people implement attacks to demonstrate their ideas. It is not so popular (my impression) to implement complicated systems, such as role-based access control, because real life shows that there will be “buffer overflows” allowing all the cleverness to be bypassed. Not many people are interested in doing research into software vulnerabilities either. On the other hand, more attacks on hardware (stealthy, subtle ones) are being devised and implemented.

The second issue is much more general. Is it the case that there will always be a way to get around the official (or intended) application interface? Surely, there are products that restrict access to, or remove, debugging options when the product is prepared for production — smart-cards are a typical example. But disabling debug features introduces very strong limitations: it becomes very hard, or even impossible, to check the correct functionality of the product (a hardware chip, a piece of software) — something not really desirable when the product is to be used as a component in larger systems. And definitely not desirable for hackers…

Dec 12, '06

The 23rd Chaos Communication Congress will be held later this month in Berlin, Germany on 27–30 December. I will be attending to give a talk on Hot or Not: Revealing Hidden Services by their Clock Skew. Another contributor to this blog, George Danezis, will be talking on An Introduction to Traffic Analysis.

This will be my third time speaking at the CCC (I previously talked on Hidden Data in Internet Published Documents and The Convergence of Anti-Counterfeiting and Computer Security in 2004, then Covert channels in TCP/IP: attack and defence in 2005) and I’ve always had a great time, but this year looks to be the best yet. Here are a few highlights from the draft programme, although I am sure there are many great talks I have missed.

It’s looking like a great line-up, so I hope many of you can make it. See you there!

Nov 7, '06

The most impressive physical security research team in the world is probably Roger Johnston’s Vulnerability Assessment Team at Los Alamos. People outside the USA have been having some difficulty getting papers from their web pages, so I have cached their papers on one of our servers here:

http://www.cl.cam.ac.uk/~rja14/musicfiles/preprints/Johnston

Oct 8, '06

Recently, Kish proposed a “totally secure communication system” that uses only resistors, wires and Johnson noise. His paper — “Totally Secure Classical Communication Utilizing Johnson(-like) Noise and Kirchoff’s Law” — was published in Physics Letters (March 2006).

The above paper was featured in Science magazine (Vol. 309), reported in news articles (Wired News, Physorg.com) and discussed on several weblogs (Schneier on Security, Slashdot). The initial sensation was that quantum communication could now be replaced by a much cheaper means. But not quite so…

This paper—to appear in IEE Information Security—shows that the design of Kish’s system is fundamentally flawed. The theoretical model, which underpins Kish’s system, implicitly assumes thermal equilibrium throughout the communication channel. This assumption, however, is invalid in real communication systems.

Kish used a single symbol ‘T’ to denote the channel temperature throughout his analysis. This, however, disregards the fact that any real communication system has to span a distance and endure different conditions. A slight temperature difference between the two communicating ends will lead to security failure—allowing an eavesdropper to uncover the secret bits easily (more details are in the paper).
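To see why, consider the usual idealised model of such a resistor pair (a back-of-the-envelope sketch under my own simplifying assumptions, including the zero wire resistance from Kish’s paper and made-up component values; the full analysis is in the paper). Each resistor’s Johnson noise has variance 4kTRΔf and reaches the wire through the voltage divider formed by the other resistor:

```python
k_B = 1.38e-23          # Boltzmann constant, J/K
df = 1e3                # noise bandwidth in Hz (made-up value)
R_LO, R_HI = 1e3, 1e5   # the two resistor values in ohms (made-up values)

def line_voltage_variance(R_a, T_a, R_b, T_b):
    """Variance of the wire voltage for the idealised two-resistor loop
    (zero wire resistance): each end's noise EMF has variance 4kTR*df
    and is attenuated by the divider formed by the opposite resistor."""
    def contribution(R_self, T_self, R_other):
        return 4 * k_B * T_self * R_self * df * (R_other / (R_self + R_other)) ** 2
    return contribution(R_a, T_a, R_b) + contribution(R_b, T_b, R_a)

# Equal temperatures: the two "secure" resistor assignments are identical
print(line_voltage_variance(R_LO, 300.0, R_HI, 300.0))
print(line_voltage_variance(R_HI, 300.0, R_LO, 300.0))

# A mere 1 K difference between the ends breaks the symmetry
print(line_voltage_variance(R_LO, 300.0, R_HI, 301.0))
print(line_voltage_variance(R_HI, 300.0, R_LO, 301.0))
```

With equal temperatures the first two variances coincide, so the two “secure” states look identical to Eve; with a 1 K difference they separate, revealing which end holds the low resistor.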

As a countermeasure, it might be possible to make the temperature difference between the two ends as small as possible — for example, by using external thermal noise generators. However, this gives no security guarantee. Instead of requiring a fast computer, an eavesdropper now merely needs a voltage meter that is more accurate than the equipment used by Alice and Bob.

In addition, the transmission line must maintain the same temperature (and noise bandwidth) as the two ends to ensure “thermal equilibrium”, which is clearly impossible. Kish avoids this problem by assuming zero resistance on the transmission line in his paper. Since the problem with the finite resistance on the transmission line had been reported before, I will not discuss it further here.

To sum up, the mistake in Kish’s paper is that the author wrongly grafted assumptions from one subject onto another. In circuit analysis, it is common practice to assume the same room temperature throughout and to ignore wire resistance in order to simplify the calculation; the resulting discrepancy is usually well within the tolerable range. The design of a secure communication system, however, is very different, as a tiny discrepancy can severely compromise its security. Basing security upon invalid assumptions is a fundamental flaw in the design of Kish’s system.

Sep 4, '06

Next month I will be presenting my paper “Hot or Not: Revealing Hidden Services by their Clock Skew” at the 13th ACM Conference on Computer and Communications Security (CCS) held in Alexandria, Virginia.

It is well known that quartz crystals, as used for controlling system clocks of computers, change speed when their temperature is altered. The paper shows how to use this effect to attack anonymity systems. One such attack is to observe timestamps from a PC connected to the Internet and watch how the frequency of the system clock changes.

Absolute clock skew has previously been used to tell whether two apparently different machines are in fact running on the same hardware. My paper goes further: because the skew depends on temperature, in principle a PC can be located by finding out when its day starts and how long it is, or simply by observing that its pattern matches that of a computer in a known location.

However, the paper centres on hidden services, a feature of Tor which allows servers to be run without giving away the identity of the operator. These can be attacked by repeatedly connecting to the hidden service, causing its CPU load, and hence temperature, to increase and so change the clock skew. The attacker then requests timestamps from all candidate servers and finds the one exhibiting the expected clock skew pattern. I tested this on a private Tor network and it works surprisingly well.
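As a rough illustration of the measurement itself (my own minimal sketch, not the paper’s method): collect remote timestamps alongside local receive times, and the slope of a least-squares fit to the offsets gives the skew; tracking that slope over time windows is what reveals the temperature-induced pattern.

```python
import numpy as np

def estimate_skew(local_times, remote_timestamps):
    """Estimate clock skew (in ppm) by a least-squares fit of the offset
    (remote - local) against local time; the slope is the relative skew."""
    local = np.asarray(local_times, dtype=float)
    offset = np.asarray(remote_timestamps, dtype=float) - local
    slope, _ = np.polyfit(local, offset, 1)
    return slope * 1e6  # parts per million

# Synthetic example: a remote clock running 50 ppm fast, with jitter
rng = np.random.default_rng(0)
local = np.arange(0, 3600, 10.0)                 # one probe every 10 s for an hour
remote = local * (1 + 50e-6) + rng.normal(0, 0.005, local.size)
print(f"estimated skew: {estimate_skew(local, remote):.1f} ppm")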

In the graph below, the temperature (orange circles) is modulated by either exercising the hidden service or not. This in turn alters the measured clock skew (blue triangles). The induced load pattern is clear in the clock skew and an attacker could use this to de-anonymise a hidden service. More details can be found in the paper (PDF 1.5M).

Clock skew graph

I happened upon this effect by lucky accident, while trying to improve upon the results of the paper “Remote physical device fingerprinting”. A previous paper of mine, “Embedding Covert Channels into TCP/IP”, showed how to extract high-precision timestamps from the Linux TCP initial sequence number generator. When I tested whether these timestamps improved the accuracy of clock skew measurement, they did indeed, to the extent that I noticed an unusual peak at about the time cron caused the hard disk on my test machine to spin up. Eventually I realised the potential of this effect and ran the further experiments needed to write the paper.

