Category Archives: Hardware & signals

Electrical engineering aspects of computer security: tamper resistance, eavesdropping, signal processing, etc.

There exists a classical model of the photon after all

Many people assume that quantum mechanics cannot emerge from classical phenomena, because no-one has so far been able to think of a classical model of light that is consistent with Maxwell’s equations and reproduces the Bell test results quantitatively.

Today Robert Brady and I unveil just such a model. It turns out that the solution was almost in plain sight, in James Clerk Maxwell’s 1861 paper On Physical Lines of Force in which he derived Maxwell’s equations, on the assumption that magnetic lines of force were vortices in a fluid. Updating this with modern knowledge of quantised magnetic flux, we show that if you model a flux tube as a phase vortex in an inviscid compressible fluid, then wavepackets sent down this vortex obey Maxwell’s equations to first order; that they can have linear or circular polarisation; and that the correlation measured between the polarisation of two cogenerated wavepackets is exactly the same as is predicted by quantum mechanics and measured in the Bell tests.
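
For readers who want the target any such model has to hit: quantum mechanics predicts a polarisation correlation of E(a, b) = cos 2(a − b) between the two wavepackets, and at the standard Bell-test angles this violates the CHSH bound of 2 that any local theory with pre-agreed outcomes must satisfy. A quick numerical check of that textbook prediction (not code from our paper):

import numpy as np

# Quantum prediction for the polarisation correlation of two entangled photons
E = lambda a, b: np.cos(2 * np.radians(a - b))

# Standard CHSH measurement angles, in degrees
a, a_, b, b_ = 0, 45, 22.5, 67.5
S = E(a, b) - E(a, b_) + E(a_, b) + E(a_, b_)
print(round(S, 3))  # 2.828 = 2*sqrt(2); local hidden-variable models with pre-set outcomes are bounded by 2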

This follows work last year in which we explained Yves Couder’s beautiful bouncing-droplet experiments. There, a completely classical system is able to exhibit quantum-mechanical behaviour as the wavefunction ψ appears as a modulation on the driving oscillation, which provides coherence across the system. Similarly, in the phase vortex model, the magnetic field provides the long-range order and the photon is a modulation of it.

We presented this work yesterday at the 2015 Symposium of the Trinity Mathematical Society. Our talk slides are here and there is an audio recording here.

If our sums add up, the consequences could be profound. First, it will explain why quantum computers don’t work, and blow away the security ‘proofs’ for entanglement-based quantum cryptosystems (we already wrote about that here and here). Second, if the fundamental particles are just quasiparticles in a superfluid quantum vacuum, there is real hope that we can eventually work out where all the mysterious constants in the Standard Model come from. And third, there is no longer any reason to believe in multiple universes, or effects that propagate faster than light or backward in time – indeed the whole ‘spooky action at a distance’ to which Einstein took such exception. He believed that action in physics was local and causal, as most people do; our paper shows that the main empirical argument against classical models of reality is unsound.

Curfew tags – the gory details

In previous posts I told the story of how Britain’s curfew tagging system can fail. Some prisoners are released early provided they wear a tag to enforce a curfew, which typically means that they have to stay home from 7pm to 7am; some petty offenders get a curfew instead of a prison sentence; and some people accused of serious crimes are tagged while on bail. In dozens of cases, curfewees had been accused of tampering with their tags, but had denied doing so. In a series of these cases, colleagues and I were engaged as experts, but when we demanded tags for testing, the prosecution was withdrawn and the case collapsed. In the most famous case, three men accused of terrorist offences were released; although one has since absconded, the other two are now free in the UK.

This year, a case finally came to trial. Our client, to whom we must refer simply as “Special Z”, was accused of tag tampering, which he denied vigorously. I was instructed as an expert along with my colleague Dr James Dean of Materials Science. Here is my expert report, together with James’s report and addendum, as well as a video of a tag being removed using much less than the amount of force required by the system specification.

The judge was not ready to set a precedent that could have thrown the UK tagging system into chaos. However, I understand our client has now been released on other grounds. Although the court did order us to hand back all the tags, and fragments of broken tags, so as to protect G4S’s intellectual property, it did not make a secrecy order on our expert reports. We publish them here in the hope that they might provide useful guidance to defendants in similar cases in the future, and to policymakers when tagging contracts come up for renewal, whether in the UK or overseas.

Nikka – Digital Strongbox (Crypto as Service)

Imagine, somewhere in the internet that no-one trusts, there is a piece of hardware, a small computer, that works just for you. You can trust it. You can depend on it. Things may get rough but it will stay there to get you through. That is Nikka: the fixed point on which you can build your security and trust. [Now a Kickstarter project]

You may remember our proof-of-concept implementation of password protection for servers – Hardware Scrambling (published here in March). The password scrambler was a small dongle that could be plugged into a Linux computer (we used a Raspberry Pi). Its only purpose was to provide a simple API for encrypting passwords (though it could just as well be credit card numbers or anything else up to 32 bytes in length). The beginning of something big?
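
To recap the idea in code: instead of storing hash(password), the server stores a keyed digest computed with a secret that lives only inside the dongle, so a stolen password database is useless without the hardware. Below is a minimal software simulation of that flow; the function name and the use of HMAC-SHA256 are illustrative, not the dongle’s actual API.

import hmac, hashlib

DEVICE_KEY = b"held-inside-the-dongle"  # in the real device this key never leaves the hardware

def scramble(secret: bytes) -> str:
    # The dongle accepts a short input (passwords, card numbers, ...) and
    # returns a keyed digest; the server stores only the result.
    if len(secret) > 32:
        raise ValueError("inputs are limited to 32 bytes")
    return hmac.new(DEVICE_KEY, secret, hashlib.sha256).hexdigest()

print(scramble(b"correct horse battery staple"))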

It received some attention (Ars Technica, Slashdot, LWN, …), certainly more than we expected at the time. The discussions that followed also taught us a couple of lessons about how people (mostly geeks in this context) view security – particularly the default distrust expressed by those who commented on articles describing our password scrambler.

We eventually decided to build a proper hardware cryptographic platform that could be used for cloud applications. Our requirements were simple. We wanted something fast, “secure” (CC EAL5+ or even FIPS 140-2 certified), scalable, easy to use (no complicated API, just one function call) and provided as a service, so that no-one has to pay the upfront price of an HSM just to have a go at using proper cryptography in a new or old application. That was the beginning of Nikka.

[Figure: Nikka setup]

This is our concept: Nikka comprises a set of powerful servers installed in secure data centres. These servers can be clustered to deliver high availability and scalability to their clients. Secure hardware forms the backbone of each server and is exposed through a simple interface. The second part of Nikka is the set of user applications, plugins, and libraries for easy deployment and everyday “invisible” use. Operational procedures, processes, policies, and audit logs then guarantee that what we say is actually being done.

We have been building it for a few months now and the scalable cryptographic core seems to work. We have managed to run long-term tests of 150 HMAC transactions per second (HMAC & RNG for password scrambling) on a small development platform while fully utilising available secure hardware. The server is hosted at ideaSpace and we use it to run functional, configuration and load tests.

We have never before designed a system with so many independent processes – the core is completely asynchronous (starting with Netty for a TCP interface) and we have quickly come to appreciate the detailed trace logging we’ve implemented from the very beginning. Each time we start digging we find something interesting. Real-time visualisation of the performance is quite nice as well.
[Figure: real-time performance monitoring]
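
The core itself is Java (Netty), but purely to illustrate the style – an asynchronous TCP front end with trace logging on every step – here is a rough Python asyncio analogue. None of this is Nikka’s actual code; the port and messages are made up.

import asyncio, logging

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("core")

async def handle(reader, writer):
    peer = writer.get_extra_info("peername")
    request = await reader.read(256)
    log.debug("request from %s: %r", peer, request)   # trace every step from the start
    writer.write(b"OK\n")                             # stand-in for the real crypto reply
    await writer.drain()
    log.debug("reply sent to %s", peer)
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 9000)
    log.debug("listening on %s", server.sockets[0].getsockname())
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())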

Nikka is basically a general-purpose cryptographic engine with a middleware layer for easy integration. The password HMAC is now just one of the test applications. Users can share or reserve processing units that have Common Criteria evaluations or even FIPS 140-2 certification – with optional physical hardware separation between users.

If you like what you have read so far, you can keep reading, watching and supporting us on Kickstarter. It has been great fun so far and we want to turn it into something useful in 2015. If it sounds interesting and you would like to test it early next year, let us know! @DanCvrcek

PhD studentship: Model-based assessment of compromising emanations

We have a fully funded 3.5-year PhD studentship on offer, from October 2014, for a research student to work on “Model-based assessment of compromising emanations”. The project aims to improve our understanding of the electromagnetic emissions unintentionally produced by computing equipment, and the eavesdropping risks they pose. In particular, it aims to improve test and measurement procedures (TEMPEST) for computing equipment that processes extremely confidential data. We are looking for an Electrical Engineering, Computer Science or Physics graduate with an interest in electronics, software-defined radio, hardware security, side-channel cryptanalysis, digital signal processing, electromagnetic compatibility, or machine learning.

Check the full advert and contact Dr Markus Kuhn for more information if you are interested in applying, quoting NR03517. Application deadline: 23 June 2014.

The pre-play vulnerability in Chip and PIN

Today we have published a new paper: “Chip and Skim: cloning EMV cards with the pre-play attack”, presented at the 2014 IEEE Symposium on Security and Privacy. The paper analyses the EMV protocol, the leading smart card payment system with 1.62 billion cards in circulation, and known as “Chip and PIN” in English-speaking countries. As a result of the Target data breach, banks in the US (which have lagged behind in Chip and PIN deployment compared to the rest of the world) have accelerated their efforts to roll out Chip and PIN capable cards to their customers.

However, our paper shows that Chip and PIN, as currently implemented, still has serious vulnerabilities, which might leave customers at risk of fraud. Previously we have shown how cards can be used without knowing the correct PIN, and that card details can be intercepted as a result of flawed tamper-protection. Our new paper shows that it is possible to create clone chip cards which normal bank procedures will not be able to distinguish from the real card.

When a Chip and PIN transaction is performed, the terminal requests that the card produce an authentication code for the transaction. Part of this transaction is a number that is supposed to be random, so as to stop an authentication code being generated in advance. However, there are two ways in which this protection can be bypassed: the first requires that the Chip and PIN terminal has a poorly designed random number generator (which we have observed in the wild); the second requires that the Chip and PIN terminal or its communications back to the bank can be tampered with (which, again, we have observed in the wild).

To carry out the attack, the criminal arranges that the targeted terminal will generate a particular “random” number in the future (either by predicting which number will be generated by a poorly designed random number generator, by tampering with the random number generator, or by tampering with the random number sent to the bank). Then the criminal gains temporary access to the card (for example by tampering with a Chip and PIN terminal) and requests authentication codes corresponding to the “random” number(s) that will later occur. Finally, the attacker loads the authentication codes on to the clone card, and uses this card in the targeted terminal. Because the authentication codes that the clone card provides match those which the real card would have provided, the bank cannot distinguish between the clone card and the real one.
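
A toy model may make the flow clearer. Real EMV cryptograms are computed differently (3DES-based, over different fields), so this sketch only captures the logic: if the “unpredictable” number is predictable, codes harvested in advance from the real card will be accepted later from a clone. The key, amounts and generator here are invented for illustration.

import hmac, hashlib, os

CARD_KEY = os.urandom(16)   # secret key inside the genuine chip (simulated)

def auth_code(amount, date, unpredictable_number):
    # The card MACs the transaction data, including the terminal's number.
    msg = f"{amount}|{date}|{unpredictable_number:08X}".encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()[:16]

def weak_un(counter):
    # A badly designed terminal: fixed high bits plus a counter-like low part.
    return 0xF1240000 | (counter & 0x7FFF)

# 1. With brief access to the real card, harvest codes for the numbers the
#    targeted terminal is expected to produce later.
harvested = {un: auth_code("10.00", "2014-05-20", un)
             for un in (weak_un(c) for c in range(1000))}

# 2. Later the targeted terminal picks its "unpredictable" number...
terminal_un = weak_un(837)

# 3. ...and the clone simply replays the matching pre-computed code, which is
#    exactly what the real card would have produced.
assert harvested[terminal_un] == auth_code("10.00", "2014-05-20", terminal_un)
print("clone accepted with code", harvested[terminal_un])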

Because the transactions look legitimate, banks may refuse to refund victims of fraud. So in the paper we discuss how bank procedures could be improved to detect whether this attack has occurred. We also describe how the Chip and PIN system could be improved. As a result of our research, work has started on mitigating one of the vulnerabilities we identified; the certification requirements for random number generators in Chip and PIN terminals have been improved, though old terminals may still be vulnerable. Attacks making use of tampered random number generators or communications are more challenging to prevent and have yet to be addressed.

Update (2014-05-20): There is press coverage of this paper in The Register, SC Magazine UK and Schneier on Security.
Update (2014-05-21): Also now covered in The Hacker News.

A new side channel attack

Today we’re presenting a new side-channel attack in PIN Skimmer: Inferring PINs Through The Camera and Microphone at SPSM 2013. We found that software on your smartphone can work out what PIN you’re entering by watching your face through the camera and listening for the clicks as you type. Previous researchers had shown how to work out PINs using the gyro and accelerometer; we found that the camera works about as well. We watch how your face appears to move as you jiggle your phone by typing.
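
The microphone’s job in this attack is simply to tell us when a tap happens, so that the right camera frames can be selected for classification. Here is a self-contained sketch of that segmentation step, using synthetic audio in place of a real recording; the sample rate, click shape and threshold are illustrative only.

import numpy as np

fs = 16_000                                   # assumed sample rate
audio = 0.01 * np.random.randn(2 * fs)        # two seconds of background noise
for t in (0.3, 0.8, 1.3, 1.7):                # four simulated touch-screen clicks
    i = int(t * fs)
    audio[i:i + 80] += 0.8 * np.hanning(80)

# Short-time energy over 10 ms frames; a tap shows up as a sharp spike.
frame = fs // 100
energy = np.array([np.sum(audio[i:i + frame] ** 2)
                   for i in range(0, len(audio) - frame, frame)])
taps = np.where(energy > energy.mean() + 5 * energy.std())[0] * frame / fs
print("key taps detected at (s):", np.round(taps, 2))
# The camera frames nearest these instants are then fed to the face-motion classifier.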

There are implications for the design of electronic wallets using mechanisms such as TrustZone which enable some apps to run in a more secure sandbox. Such systems try to prevent sensitive data such as bank credentials from being stolen by malware. Our work shows it’s not enough for your electronic wallet software to grab hold of the screen, the accelerometers and the gyro; you’d better lock down the video camera, and the still camera too while you’re at it. (Our attack can use the still camera in burst mode.)

We suggest ways in which mobile phone operating systems might mitigate the risks. Meanwhile, if you’re developing payment apps, you’d better be aware that these risks exist.

Offender tagging

August was a slow month, but we got a legal case in which our client was accused of tampering with a curfew tag, and I was asked for an expert report on the evidence presented by Serco, the curfew tagging contractor. Many offenders in the UK are released early (or escape prison altogether) on condition that they stay at home from 8pm to 8am and wear an ankle bracelet so their compliance can be monitored. These curfew tags have been used for fourteen years now and remain controversial for various reasons; but with the prisons full and 17,500 people on tag at any one time, the policy objective is to improve the system rather than abolish it.

In this spirit I offer a redacted version of my expert report, which may give some insight into the frailty of the system. The logs relating to my client’s case showed large numbers of false alarms; some of these had good explanations (such as power cuts) but many didn’t. The overall impression is of an unreliable technology surrounded by chaotic procedures. Of policy concern too is that the tagging contractor not only supplies the tags and the back-end systems, but also the call centre and the interface to the court system. What’s more, if you break your curfew, it isn’t the Crown Prosecution Service that takes you before the magistrates, but the contractor – relying on expert evidence from one of its subcontractors. Such closed systems are notoriously vulnerable to groupthink. Anyway, we asked the court for access not just to the tag in the case, but to a complete set of tagging equipment for testing, plus system specifications, false alarm statistics and audit reports. The contractor promptly replied that “although we continue to feel that the defendant is in breach of the order, our attention has been drawn to a number of factors that would allow me to properly discontinue proceedings in the public interest.”

The report is published with the consent of my client and her solicitor. Long-time readers of this blog may recall similarities with the case of Jane Badger. If you’re designing systems on whose output someone may have to rely in court, you’d better think hard about how they’ll stand up to hostile review.

Eavesdropping a fax machine

I was intrigued this morning to see on the front page of the Guardian newspaper a new revelation by NSA whistleblower Edward Snowden: a US eavesdropping technique “DROPMIRE implanted on the Cryptofax at the EU embassy [Washington] D.C.”. I was even more intrigued by an image that accompanied the report (click for higher resolution):

The Guardian, 1 July 2013, page 1

Having done many experiments to eavesdrop on office equipment myself, the noisy image in the bottom third of the picture above looked instantly familiar: it is what you might get from using a radio receiver to pick up the compromising emanations of a video signal carrying a page of text.
Continue reading Eavesdropping a fax machine
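
To give a flavour of how such a picture is produced: after AM-demodulating the received signal, you reshape the one-dimensional sample stream into rows; guess the line length correctly and a stable raster appears, guess wrongly and the image skews and scrolls. A toy sketch, with synthetic data standing in for a real capture:

import numpy as np

samples_per_line, lines = 800, 600
# Synthetic stand-in for AM-demodulated samples: a repeating scan-line pattern plus noise.
pattern = (np.sin(np.linspace(0, 20 * np.pi, samples_per_line)) > 0).astype(float)
samples = np.tile(pattern, lines) + 0.2 * np.random.randn(samples_per_line * lines)

# Reshaping at the correct line length re-creates the raster image.
image = samples.reshape(lines, samples_per_line)
print(image.shape)   # (600, 800) – ready to display, e.g. with matplotlib's imshow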

Fixing popping/clicking audio on Raspberry Pi

I’m working on a security-related project with the Raspberry Pi and have encountered an annoying problem with the on-board sound output. I’ve managed to work around this, so I thought it might be helpful to share my experiences with others in the same situation.

The problem manifests itself as a loud pop or click, just before sound is output and just after sound output is stopped. This is because a PWM output of the BCM2835 CPU is being used, rather than a standard DAC. When the PWM function is activated, there’s a jump in output voltage which results in the popping sound.

Until there’s a driver modification, the suggested work-around (other than using the HDMI sound output or an external USB sound card) is to run PulseAudio on top of ALSA and keep the driver active even when no sound is being output. This is achieved by disabling the module-suspend-on-idle PulseAudio module, then configuring applications to use PulseAudio rather than ALSA. Daniel Bader describes this work-around, and how to configure MPD, in a blog post. However, when I tried this approach, the work-around didn’t work.
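
For reference, the change amounts to commenting out one line in PulseAudio’s configuration so the sink is never suspended. Here is a small sketch that applies that edit to the system-wide file (back it up first and run as root; and, as noted above, this alone did not cure the problem for me):

from pathlib import Path

conf = Path("/etc/pulse/default.pa")
lines = conf.read_text().splitlines()
patched = ["#" + line if line.strip().startswith("load-module module-suspend-on-idle") else line
           for line in lines]
conf.write_text("\n".join(patched) + "\n")
print("module-suspend-on-idle disabled; restart PulseAudio to pick up the change")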

Continue reading Fixing popping/clicking audio on Raspberry Pi

Chip and Skim: cloning EMV cards with the pre-play attack

Last November, on the Eurostar back from Paris, something struck me as I looked at the logs of ATM withdrawals disputed by Alex Gambin, a customer of HSBC in Malta. Comparing four grainy log pages on a tiny phone screen, I had to scroll away from the transaction data to see the page numbers, so I couldn’t take in the big picture in one go. Instead I differentiated pages using the EMV Unpredictable Number field – a 32-bit field that’s supposed to be unique to each transaction. I soon got muddled up… it turned out that the unpredictable numbers… well… weren’t. All four shared the same 17 leading bits, and the remaining 15 looked at first glance like a counter. The numbers are tabulated as follows:

F1246E04
F1241354
F1244328
F1247348
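
A few lines of Python confirm what the eye suspected: the four values share their top 17 bits, and only the low 15 bits change.

uns = [0xF1246E04, 0xF1241354, 0xF1244328, 0xF1247348]

# Count how many leading bits are identical across all four samples.
shared = 0
while shared < 32 and len({(u >> (31 - shared)) & 1 for u in uns}) == 1:
    shared += 1
print(shared)                          # 17
print([hex(u & 0x7FFF) for u in uns])  # the remaining 15 bits of each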

And with that the ball started rolling on an exciting direction of research that’s kept us busy for the last nine months. You see, an EMV payment card authenticates itself with a MAC of transaction data, for which the freshly generated component is the unpredictable number (UN). If you can predict it, you can record everything you need from momentary access to a chip card to play it back and impersonate the card at a future date and location. You can, in effect, clone the chip. It’s called a “pre-play” attack. Just like most vulnerabilities we find these days, some in industry already knew about it but covered it up; we have indications the crooks know about this too, and we believe it explains a good portion of the unsolved phantom withdrawal cases reported to us, for which we until recently had no explanation.

Mike Bond, Omar Choudary, Steven J. Murdoch, Sergei Skorobogatov, and Ross Anderson wrote a paper on the research, and Steven is presenting our work as keynote speaker at Cryptographic Hardware and Embedded Systems (CHES) 2012 in Leuven, Belgium. We discovered that the significance of these numbers went far beyond this one case.

Continue reading Chip and Skim: cloning EMV cards with the pre-play attack