We have a fully funded 3.5-year PhD Studentship on offer, from October 2014, for a research student to work on “Model-based assessment of compromising emanations”. The project aims to improve our understanding of electromagnetic emissions that are unintentionally emitted by computing equipment, and the eavesdropping risks they pose. In particular, it aims to improve test and measurement procedures (TEMPEST) for computing equipment that processes extremely confidential data. We are looking for an Electrical Engineering, Computer Science or Physics graduate with an interest in electronics, software-defined radio, hardware security, side-channel cryptanalysis, digital signal processing, electromagnetic compatibility, or machine learning.
Today we have published a new paper: “Chip and Skim: cloning EMV cards with the pre-play attack”, presented at the 2014 IEEE Symposium on Security and Privacy. The paper analyses the EMV protocol, the leading smart card payment system with 1.62 billion cards in circulation, and known as “Chip and PIN” in English-speaking countries. As a result of the Target data breach, banks in the US (which have lagged behind in Chip and PIN deployment compared to the rest of the world) have accelerated their efforts to roll out Chip and PIN capable cards to their customers.
However, our paper shows that Chip and PIN, as currently implemented, still has serious vulnerabilities, which might leave customers at risk of fraud. Previously we have shown how cards can be used without knowing the correct PIN, and that card details can be intercepted as a result of flawed tamper-protection. Our new paper shows that it is possible to create clone chip cards which normal bank procedures will not be able to distinguish from the real card.
When a Chip and PIN transaction is performed, the terminal requests that the card produce an authentication code for the transaction. Part of this transaction is a number that is supposed to be random, so as to stop an authentication code being generated in advance. However, there are two ways in which this protection can be bypassed: the first requires that the Chip and PIN terminal have a poorly designed random number generator (which we have observed in the wild); the second requires that the Chip and PIN terminal or its communications back to the bank can be tampered with (which, again, we have observed in the wild).
To carry out the attack, the criminal arranges that the targeted terminal will generate a particular “random” number in the future (either by predicting which number will be generated by a poorly designed random number generator, by tampering with the random number generator, or by tampering with the random number sent to the bank). Then the criminal gains temporary access to the card (for example by tampering with a Chip and PIN terminal) and requests authentication codes corresponding to the “random” number(s) that will later occur. Finally, the attacker loads the authentication codes on to the clone card, and uses this card in the targeted terminal. Because the authentication codes that the clone card provides match those which the real card would have provided, the bank cannot distinguish between the clone card and the real one.
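The steps above can be sketched in a toy model. The card key, MAC construction and field layout here are illustrative stand-ins, not the real EMV cryptogram algorithms; the point is only that if the “unpredictable” number can be predicted, the authentication code can be harvested in advance and replayed.

```python
import hmac
import hashlib

# Stand-in for the symmetric key shared by the genuine card and the bank.
CARD_KEY = b"secret-card-key"

def card_mac(amount: int, unpredictable_number: int) -> bytes:
    """Authentication code the chip computes over the transaction data.

    Illustrative only: real EMV cryptograms cover more fields and use
    different primitives.
    """
    data = amount.to_bytes(4, "big") + unpredictable_number.to_bytes(4, "big")
    return hmac.new(CARD_KEY, data, hashlib.sha1).digest()

# 1. The criminal predicts (or fixes) the terminal's future "random" number.
predicted_un = 0x1234ABCD

# 2. With momentary access to the real card, the criminal harvests an
#    authentication code for that number, for a chosen amount, in advance.
harvested = {predicted_un: card_mac(5000, predicted_un)}

# 3. Later, the clone card replays the stored code when the targeted
#    terminal emits the predicted number.
terminal_un = 0x1234ABCD
clone_response = harvested[terminal_un]

# The bank's verification cannot tell the clone from the real card:
assert clone_response == card_mac(5000, terminal_un)
print("clone accepted")
```

The assertion at the end is the whole problem: the bank sees a correct authentication code either way, so the transaction log alone cannot distinguish clone from card.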
Because the transactions look legitimate, banks may refuse to refund victims of fraud. So in the paper we discuss how bank procedures could be improved to detect whether this attack has occurred. We also describe how the Chip and PIN system could be improved. As a result of our research, work has started on mitigating one of the vulnerabilities we identified; the certification requirements for random number generators in Chip and PIN terminals have been improved, though old terminals may still be vulnerable. Attacks making use of tampered random number generators or communications are more challenging to prevent and have yet to be addressed.
Today we’re presenting a new side-channel attack in PIN Skimmer: Inferring PINs Through The Camera and Microphone at SPSM 2013. We found that software on your smartphone can work out what PIN you’re entering by watching your face through the camera and listening for the clicks as you type. Previous researchers had shown how to work out PINs using the gyro and accelerometer; we found that the camera works about as well. We watch how your face appears to move as you jiggle your phone by typing.
There are implications for the design of electronic wallets using mechanisms such as Trustzone which enable some apps to run in a more secure sandbox. Such systems try to prevent sensitive data such as bank credentials being stolen by malware. Our work shows it’s not enough for your electronic wallet software to grab hold of the screen, the accelerometers and the gyro; you’d better lock down the video camera, and the still camera too while you’re at it. (Our attack can use the still camera in burst mode.)
We suggest ways in which mobile phone operating systems might mitigate the risks. Meanwhile, if you’re developing payment apps, you’d better be aware that these risks exist.
August was a slow month, but we got a legal case where our client was accused of tampering with a curfew tag, and I was asked for an expert report on the evidence presented by Serco, the curfew tagging contractor. Many offenders in the UK are released early (or escape prison altogether) on condition that they stay at home from 8pm to 8am and wear an ankle bracelet so their compliance can be monitored. These curfew tags have been used for fourteen years now but are controversial for various reasons; but with the prisons full and 17,500 people on tag at any one time, the objective of policy is to improve the system rather than abolish it.
In this spirit I offer a redacted version of my expert report which may give some insight into the frailty of the system. The logs relating to my defendant’s case showed large numbers of false alarms; some of these had good explanations (such as power cuts) but many didn’t. The overall impression is of an unreliable technology surrounded by chaotic procedures. Of policy concern too is that the tagging contractor not only supplies the tags and the back-end systems, but the call centre and the interface to the court system. What’s more, if you break your curfew, it isn’t the Crown Prosecution Service that takes you before the magistrates, but the contractor – relying on expert evidence from one of its subcontractors. Such closed systems are notoriously vulnerable to groupthink. Anyway, we asked the court for access not just to the tag in the case, but a complete set of tagging equipment for testing, plus system specifications, false alarm statistics and audit reports. The contractor promptly replied that “although we continue to feel that the defendant is in breach of the order, our attention has been drawn to a number of factors that would allow me to properly discontinue proceedings in the public interest.”
The report is published with the consent of my client and her solicitor. Long-time readers of this blog may recall similarities with the case of Jane Badger. If you’re designing systems on whose output someone may have to rely in court, you’d better think hard about how they’ll stand up to hostile review.
I was intrigued this morning to see on the front page of the Guardian newspaper a new revelation by NSA whistleblower Edward Snowden: a US eavesdropping technique “DROPMIRE implanted on the Cryptofax at the EU embassy [Washington] D.C.”. I was even more intrigued by an image that accompanied the report (click for higher resolution):
Having done many experiments to eavesdrop on office equipment myself, the noisy image at the bottom third of the picture above looked instantly familiar: it is what you might get from listening with a radio receiver on the compromising emanations of a video signal of a page of text.
I’m working on a security-related project with the Raspberry Pi and have encountered an annoying problem with the on-board sound output. I’ve managed to work around this, so thought it might be helpful to share my experiences with others in the same situation.
The problem manifests itself as a loud pop or click, just before sound is output and just after sound output is stopped. This is because a PWM output of the BCM2835 CPU is being used, rather than a standard DAC. When the PWM function is activated, there’s a jump in output voltage which results in the popping sound.
Until there’s a driver modification, the suggested work-around (other than using the HDMI sound output or an external USB sound card) is to run PulseAudio on top of ALSA and keep the driver active even when no sound is being output. This is achieved by disabling the module-suspend-on-idle PulseAudio module, then configuring applications to use PulseAudio rather than ALSA. Daniel Bader describes this work-around, and how to configure MPD, in a blog post. However, when I tried this approach, the work-around didn’t work.
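For reference, disabling the module is a one-line change to the PulseAudio configuration. The file path and surrounding comment below are as found in typical Debian-derived PulseAudio installs of the time; details may differ between versions:

```
# In /etc/pulse/default.pa (or ~/.config/pulse/default.pa), comment out
# the line that loads the idle-suspend module:

### Automatically suspend sinks/sources that become idle for too long
#load-module module-suspend-on-idle
```

After restarting PulseAudio, the driver stays active between sounds, so the PWM output level should not jump when playback starts and stops.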
November last, on the Eurostar back from Paris, something struck me as I looked at the logs of ATM withdrawals disputed by Alex Gambin, a customer of HSBC in Malta. Comparing four grainy log pages on a tiny phone screen, I had to scroll away from the transaction data to see the page numbers, so I couldn’t take in the big picture in one go. I differentiated pages instead using the EMV Unpredictable Number field – a 32 bit field that’s supposed to be unique to each transaction. I soon got muddled up… it turned out that the unpredictable numbers… well… weren’t. Each shared 17 bits in common and the remaining 15 looked at first glance like a counter. The numbers are tabulated as follows:
F1246E04 F1241354 F1244328 F1247348
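The shared-bits observation can be checked mechanically. This short script (mine, not from the logs) finds the bit positions on which the four values differ:

```python
# The four EMV Unpredictable Numbers from the disputed-transaction logs.
uns = [0xF1246E04, 0xF1241354, 0xF1244328, 0xF1247348]

# Accumulate the bit positions where any value differs from the first.
varying = 0
for n in uns[1:]:
    varying |= uns[0] ^ n

print(f"varying bit mask: {varying:#010x}")  # prints 0x00007d7c

# All variation is confined to the low 15 bits, so the top 17 bits of this
# 32-bit "unpredictable" field are identical across all four transactions.
assert varying < (1 << 15)
assert len({n >> 15 for n in uns}) == 1
```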
And with that the ball started rolling on an exciting direction of research that’s kept us busy the last nine months. You see, an EMV payment card authenticates itself with a MAC of transaction data, for which the freshly generated component is the unpredictable number (UN). If you can predict it, you can record everything you need from momentary access to a chip card to play it back and impersonate the card at a future date and location. You can as good as clone the chip. It’s called a “pre-play” attack. As with most vulnerabilities we find these days, some in industry already knew about it but covered it up; we have indications the crooks know about this too, and we believe it explains a good portion of the unsolved phantom withdrawal cases reported to us for which we had until recently no explanation.
Mike Bond, Omar Choudary, Steven J. Murdoch, Sergei Skorobogatov, and Ross Anderson wrote a paper on the research, and Steven is presenting our work as keynote speaker at Cryptographic Hardware and Embedded Systems (CHES) 2012, in Leuven, Belgium. We discovered that the significance of these numbers went far beyond this one case.
We are pleased to announce a job opening at the University of Cambridge Computer Laboratory for a post-doctoral researcher working in the areas of security, operating systems, and computer architecture.
University of Cambridge – Faculty of Computer Science & Technology
Salary: £27,428 – £35,788 pa
The funds for this post are available for one year:
We are seeking a Post-doctoral Research Associate to join the CTSRD Project, which is investigating fundamental improvements to CPU architecture, operating system (OS), and programming language structure in support of computer security. The CTSRD Project is a collaboration between the University of Cambridge and SRI International, and part of the DARPA CRASH research programme on clean-slate computer system design.
This position will be an integral part of an international team of researchers spanning multiple institutions across academia and industry. The successful candidate will contribute to low-level aspects of system software: compilers, language run-times, and OS kernels. Responsibilities will include researching the application of novel dynamic techniques to C-language operating systems and applications, including adaptation of the FreeBSD kernel and LLVM compiler suite, and measurement of the resulting system.
An ideal candidate will hold (or be close to finishing) a PhD in Computer Science, Mathematics, or a similar field, with a strong background in low-level system software development, including at least one of: kernel development experience (FreeBSD preferred; Linux acceptable) or compiler internals experience (LLVM preferred; gcc acceptable). Strong experience with the C programming language is critical. Some background in computer security is also recommended.
Candidates must be able to provide evidence of relevant work demonstrated by a research publication track record or industrial experience. Good interpersonal and organisational skills and the ability to work in a team are also essential. This post is intended to be filled as soon as practically possible after the closing date.
Applications should include:
- Curriculum Vitae
- Brief statement of the particular contribution you would make to the project
- A completed form CHRIS6
Completed applications should be sent by post to: Personnel-Admin, Computer Laboratory, William Gates Building, JJ Thomson Avenue, Cambridge, CB3 0FD, or by email to: firstname.lastname@example.org
Quote Reference: NR10692
Closing Date: 10 January 2012
The University values diversity and is committed to equality of opportunity.
There seems to be an attempt to revive the “Trusted Computing” agenda. The vehicle this time is UEFI, the specification that is replacing the PC BIOS. Proposed changes to the UEFI firmware spec would enable (in fact require) next-generation PC firmware to boot only an image signed by a keychain rooted in keys built into the PC. I hear that Microsoft (and others) are pushing for this to be mandatory, so that it cannot be disabled by the user, and that it would be required for OS badging. There are some technical details here and here, and comment here.
These issues last arose in 2003, when we fought back with the Trusted Computing FAQ and economic analysis. That initiative petered out after widespread opposition. This time round the effects could be even worse, as “unauthorised” operating systems like Linux and FreeBSD just won’t run at all. (On an old-fashioned Trusted Computing platform you could at least run Linux – it just couldn’t get at the keys for Windows Media Player.)
The extension of Microsoft’s OS monopoly to hardware would be a disaster, with increased lock-in, decreased consumer choice and lack of space to innovate. It is clearly unlawful and must not succeed.
About a month ago, at the Security Protocols Workshop, I presented a new idea for detecting relay attacks, co-developed with Frank Stajano.
The idea relies on having a trusted box (which we call the T-Box, as in the image below) between the physical interfaces of two communicating parties. The T-Box accepts two inputs (one from each party) and provides one output (seen by both parties). It ensures that neither party can determine the complete input of the other.
Therefore, by connecting two instances of a T-Box together (as in the case of a relay attack), the message from one end to the other (Alice and Bob in the image above) gets distorted twice as much as it would over a direct connection. That’s the basic idea.
One important question is how the T-Box should operate on the inputs so that a relay attack can be detected. In the paper we describe two example implementations based on a bi-directional channel (which is used, for example, between a smart card and a terminal). To help the reader understand these examples better, and to assess the usefulness of our idea, Mike Bond and I have created a Python simulation. The simulation lets you choose the type of T-Box implementation, a direct or relayed connection, and other parameters including the length of the anti-relay data stream and the detection threshold.
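The doubling-of-distortion intuition can be illustrated with a much cruder model than the implementations in the paper: treat each T-Box pass as distorting each bit of the anti-relay stream with some fixed probability. The distortion rate and threshold below are illustrative parameters of my sketch, not values from the paper or its simulation.

```python
import random

# Toy model: one T-Box distorts each bit with probability P, so a relayed
# connection (two T-Boxes in series) shows roughly 2*P*(1-P) distortion,
# while a direct connection shows roughly P. A threshold between the two
# rates then separates the cases.

P = 0.1            # per-bit distortion introduced by one T-Box (assumed)
STREAM_LEN = 10_000

def through_tbox(bits, rng):
    """One pass through a T-Box: each bit is flipped with probability P."""
    return [b ^ (rng.random() < P) for b in bits]

rng = random.Random(42)
stream = [rng.getrandbits(1) for _ in range(STREAM_LEN)]

direct = through_tbox(stream, rng)                      # genuine connection
relayed = through_tbox(through_tbox(stream, rng), rng)  # attacker adds a T-Box

def error_rate(sent, received):
    return sum(a != b for a, b in zip(sent, received)) / len(sent)

# Place the detection threshold halfway between the two expected rates.
threshold = (P + 2 * P * (1 - P)) / 2

print(error_rate(stream, direct) < threshold)    # direct connection passes
print(error_rate(stream, relayed) > threshold)   # relay is detected
```

With a long enough stream the two error rates concentrate around P and 2P(1-P), so the threshold test is reliable; shortening the stream trades detection confidence for protocol overhead, which is one of the parameters the simulation exposes.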
In these two implementations we have restricted ourselves to make the T-Box part of the communication channel. The advantage is that we don’t rely on any party providing the T-Box since it is created automatically by communicating over the physical channel. The disadvantage is that a more powerful attacker can sample the line at twice the speed and overcome our T-Box solution.
The relay attack can be used against many applications, including all smart card based payments. There are already several ideas, including distance bounding, for detecting relay attacks. However our idea brings a new approach to the existing methods, and we hope that in the future we can find a practical implementation of our solutions, or a good scenario to use a physical T-Box which should not be affected by a powerful attacker.