Category Archives: Hardware & signals

Electrical engineering aspects of computer security: tamper resistance, eavesdropping, signal processing, etc.

Could a gaming app steal your bank PIN?

Have you ever wondered whether one app on your phone could spy on what you’re typing into another? We have. Five years ago we showed that you could use the camera to measure the phone’s motion during typing and use that to recover PINs. Then three years ago we showed that you could use interrupt timing to recover text entered using gesture typing. So what other attacks are possible?

Our latest paper shows that an app on the phone can simply record the sound from its microphones and work out from that what you’ve been typing.

Your phone’s screen can be thought of as a drum – a membrane supported at the edges. It makes slightly different sounds depending on where you tap it. Modern phones and tablets typically have two microphones, so you can also measure the time difference of arrival of the sounds. The upshot is that we can recover PIN codes and short words given a few measurements, and in some cases even long and complex words. We evaluate the new attack against previous ones and show that its accuracy is sometimes even better, especially against larger devices such as tablets.
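
The signal-processing core of the attack is easier to grasp with a toy example. The sketch below is ours, not the paper’s code: it estimates the inter-microphone delay of a single tap by cross-correlating the two channels. The real attack combines many such features across many taps; the signal names and sample rate here are assumptions for illustration.

```python
import numpy as np

def tap_time_difference(left, right, sample_rate_hz=48_000):
    """Estimate the time difference of arrival (TDOA) of a tap between two
    microphone channels, using the lag that maximises their cross-correlation.
    A positive result means the tap reached the 'left' microphone first."""
    left = left - np.mean(left)
    right = right - np.mean(right)
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)   # samples by which 'right' lags 'left'
    return lag / sample_rate_hz

# Toy usage: a simulated tap that arrives 5 samples earlier at the left mic.
rng = np.random.default_rng(0)
tap = rng.standard_normal(64)
left = np.concatenate([np.zeros(100), tap, np.zeros(100)])
right = np.concatenate([np.zeros(105), tap, np.zeros(95)])
print(tap_time_difference(left, right))   # ~5 / 48000 s, about 100 microseconds
```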

This paper is based on Ilia Shumailov’s MPhil thesis project.

Struck by a Thunderbolt

At the Network and Distributed Systems Security Symposium in San Diego today we’re presenting Thunderclap, which describes a set of new vulnerabilities involving the security of computer peripherals and the open-source research platform used to discover them. This is joint work with Colin Rothwell, Brett Gutstein, Allison Pearce, Peter Neumann, Simon Moore and Robert Watson.

We look at the security of input/output devices that use the Thunderbolt interface, which is available via USB-C ports in many modern laptops. Our work also covers PCI Express (PCIe) peripherals which are found in desktops and servers.

Such ports offer very privileged, low-level, direct memory access (DMA), which gives peripherals much more privilege than regular USB devices. If no defences are used on the host, an attacker has unrestricted memory access, and can completely take control of a target computer: they can steal passwords, banking logins, encryption keys, browser sessions and private files, and they can also inject malicious software that can run anywhere in the system.

We studied the defences of existing systems in the face of malicious DMA-enabled peripheral devices and found them to be very weak.

The primary defence is a component called the Input-Output Memory Management Unit (IOMMU), which, in principle, can allow devices to access only the memory needed to do their job and nothing else. However, we found existing operating systems do not use the IOMMU effectively.

To begin with, most systems don’t enable the IOMMU at all. Windows 7, Windows 8, and Windows 10 Home and Pro don’t support the IOMMU. Windows 10 Enterprise can optionally use it, but in a very limited way that leaves most of the system undefended. Linux and FreeBSD do support using the IOMMU, but this support is not enabled by default in most distributions. MacOS is the only OS we studied that uses the IOMMU out of the box.
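
If you want a rough check on your own Linux machine, one indicator is whether the kernel has populated any IOMMU groups in sysfs. The minimal sketch below (our own, not part of Thunderclap) only tells you whether the IOMMU is on at all, not whether the operating system uses it safely:

```python
from pathlib import Path

def linux_iommu_active() -> bool:
    """Rough check for an active IOMMU on Linux: when the kernel has the IOMMU
    enabled, /sys/kernel/iommu_groups contains one directory per IOMMU group;
    when it is disabled, the directory is empty or absent."""
    groups = Path("/sys/kernel/iommu_groups")
    return groups.is_dir() and any(groups.iterdir())

if __name__ == "__main__":
    cmdline = Path("/proc/cmdline").read_text()
    print("IOMMU groups present:", linux_iommu_active())
    print("intel_iommu=on on kernel command line:", "intel_iommu=on" in cmdline)
```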

This state of affairs is not good, and our investigations revealed significant further vulnerabilities even when the IOMMU is enabled.

We built a fake network card that is capable of interacting with the operating system in the same way as a real one, including announcing itself correctly, causing drivers to attach, and sending and receiving network packets. To do this, we extracted a software model of an Intel E1000 from the QEMU full-system emulator and ran it on an FPGA. Because this is a software model, we can easily add malicious behaviour to find and exploit vulnerabilities.

We found the attack surface available to a network card was much richer and more nuanced than was previously thought. By examining the memory it was given access to while sending and receiving packets, our device was able to read traffic from networks that it wasn’t supposed to. This included VPN plaintext and traffic from Unix domain sockets that should never leave the machine.

On MacOS and FreeBSD, our network card was able to start arbitrary programs as the system administrator, and on Linux it had access to sensitive kernel data structures. Additionally, on MacOS devices are not protected from one another, so a network card is allowed to read the display contents and keystrokes from a USB keyboard.

Worst of all, on Linux we could completely bypass the enabled IOMMU, simply by setting a few option fields in the messages that our malicious network card sent.

Such attacks are very plausible in practice. The combination of power, video, and peripheral-device DMA over Thunderbolt 3 ports facilitates the creation of malicious charging stations or displays that function correctly but simultaneously take control of connected machines.

We’ve been collaborating with vendors about these vulnerabilities since 2016, and a number of mitigations have been shipped. We have also been working with vendors, helping them to use our Thunderclap tools to explore this vulnerability space and audit their systems for problems.

Apple fixed the specific vulnerability we used to get administrator access in macOS 10.12.4 in 2016, although the more general scope of such attacks remains relevant. More recently, new laptops that ship with Windows 10 version 1803 or later have a feature called Kernel DMA Protection for Thunderbolt 3, which at least enables the IOMMU for Thunderbolt devices (but not PCI Express ones). Since this feature requires firmware support, older laptops shipped before 1803 remain vulnerable. Recently, Intel committed patches to Linux to enable the IOMMU for Thunderbolt devices, and to disable the ATS feature that allowed our IOMMU bypass. These are part of Linux kernel 5.0, which is currently in the release process.

One major laptop vendor told us they would like to study these vulnerabilities in more detail before adding Thunderbolt to new product lines.

More generally, since this is a new space of many vulnerabilities, rather than a specific example, we believe all operating systems are vulnerable to similar attacks, and that more substantial design changes will be needed to remedy these problems. We noticed similarities between the vulnerability surface available to malicious peripherals in the face of IOMMU protections and that of the kernel system call interface, long a source of operating system vulnerabilities. The kernel system call interface has been subjected to much scrutiny, security analysis, and code hardening over the years, which must now be applied to the interface between peripherals and the IOMMU.

As well as asking vendors to improve the security of their systems, we advise users to update their systems and to be cautious about attaching unfamiliar USB-C devices to their machines – especially those in public places.

We have placed more background on our work and a list of FAQs on our website, thunderclap.io. There, we have also open sourced the Thunderclap research platform to allow other researchers to reproduce and extend our work, and to aid vendors in performing security evaluation of their products.

Thunderclap: Exploring Vulnerabilities in Operating System IOMMU Protection via DMA from Untrustworthy Peripherals, A. Theodore Markettos, Colin Rothwell, Brett F. Gutstein, Allison Pearce, Peter G. Neumann, Simon W. Moore and Robert N. M. Watson. Proceedings of the Network and Distributed Systems Security Symposium (NDSS), 24–27 February 2019, San Diego, USA.

PhD studentship in side-channel security

I can offer a 3.5-year PhD studentship on radio-frequency side-channel security, starting in October 2019, to applicants interested in hardware security, radio communication, and digital signal processing. Due to the funding source, this studentship is restricted to UK nationals, or applicants who have been resident in the UK for the past 10 years. Contact me for details.

Making sense of the Supermicro motherboard attack

There has been a lot of ‘fog of war’ regarding the alleged implantation of Trojan hardware into Supermicro servers at manufacturing time. Other analyses have cast doubt on the story. But do all the pieces pass the sniff test?

In brief, the allegation is that an implant was added at manufacturing time, attached to the Baseboard Management Controller (BMC). When a desktop computer has a problem, common approaches are to reboot it or to reinstall the operating system. However in a datacenter it isn’t possible to physically walk up to the machine to do these things, so the BMC allows administrators to do them over the network.

Crucially, because the BMC has the ability to install the operating system, it can subvert the process that boots the operating system – and fetch potentially malicious implant code, maybe even over the Internet.

The Bloomberg Businessweek reports are low on technical details, but they do show two interesting things. The first is a picture of the alleged implant. This shows a 6-pin silicon chip inside a roughly 1mm x 2mm ceramic package – as often used for capacitors and other so-called ‘passive’ components, which are typically overlooked.

The other is an animation highlighting this implant chip on a motherboard. Extracting the images from this animation shows the base image is of a Supermicro B1DRi board. As others have noted, this is mounted in a spare footprint between the BMC chip and a Serial Peripheral Interface (SPI) flash chip that likely contains the BMC’s firmware. Perhaps the animation is an artist’s concept only, but this is just the right place to compromise the BMC.

SPI is a popular interface for firmware flash memories – it’s relatively simple and relatively slow, using only four signal wires. Quad SPI (QSPI), a faster variant, uses six signal wires. The Supermicro board here appears to have a QSPI chip, but also a space for an SPI chip as a manufacturing-time option. The alleged implant is mounted in part of the space where the SPI chip would go. Limited interception or modification of SPI communication is something that a medium-complexity digital chip (a basic custom chip, or an off-the-shelf programmable CPLD) could manage – but not much more. Six pins is enough to intercept the four SPI wires, plus two for power. The packaging of this implant would, however, be completely custom.
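
To make the “limited interception” point concrete, here is a purely hypothetical sketch – our own illustration, not a model of the alleged implant – of the logic such a chip would need: watch for the standard SPI flash READ command (0x03 followed by a 24-bit address) and substitute bytes when the read falls in a chosen region. A real device would do this in hardware, bit by bit, on the fly; the helper real_flash_read is a stand-in for the genuine flash contents.

```python
# Hypothetical sketch of an SPI-flash read interposer: parse the standard 0x03
# READ command (1 command byte + 3 address bytes) and substitute returned data
# when the read hits a chosen address range.
READ_CMD = 0x03

def intercept_read(transaction: bytes, patch_at: int, patch: bytes) -> bytes:
    """transaction = bytes the master clocked out (command, address, then one
    dummy byte per data byte); returns the data the implant would clock back."""
    if not transaction or transaction[0] != READ_CMD:
        return b""                                   # pass other commands through
    addr = int.from_bytes(transaction[1:4], "big")
    length = len(transaction) - 4
    data = bytearray(real_flash_read(addr, length))  # stand-in for the real chip
    for i in range(length):                          # overlay the patch where it overlaps
        off = addr + i - patch_at
        if 0 <= off < len(patch):
            data[i] = patch[off]
    return bytes(data)

def real_flash_read(addr: int, length: int) -> bytes:
    """Stand-in for the genuine flash contents."""
    return bytes((addr + i) & 0xFF for i in range(length))

# Toy usage: read 8 bytes at 0x1000, patching two bytes at 0x1002.
request = bytes([READ_CMD, 0x00, 0x10, 0x00]) + bytes(8)
print(intercept_read(request, patch_at=0x1002, patch=b"\xDE\xAD").hex())
```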

What can an implant attached to the SPI wires do? The BMC itself is a computer, running an operating system which is stored in the SPI flash chip. The manual for an MBI-6128R-T2 server containing the B1DRi shows it has an AST2400 BMC chip.

The AST2400 uses relatively old technology – a single-core 400MHz ARM9 CPU, broadly equivalent to a cellphone from the mid-2000s. Its firmware is loaded from the SPI flash chip.

I downloaded the B1DRi BMC firmware from the Supermicro website and did some preliminary disassembly. The AST2400 in this firmware appears to run Linux, which is plausible given it supports complicated peripherals such as PCI Express graphics and USB. (It is not news to many of us working in this field that every system already has a Linux operating system running on an ARM CPU, before power is even applied to the main Intel CPUs — but many others may find this surprising).

It is possible that the implant simply replaces the entire BMC firmware, but there is another way.

In order to start its own Linux, the AST2400 boots using the U-Boot bootloader. I noticed that one of the options is for the AST2400 to pick up its Linux OS over the network (via TFTP or NFS). If (and it’s a substantial if) this is enabled in the AST2400 bootloader, it would not take a huge amount of modification to the SPI contents to divert the boot path so that the BMC fetched its firmware over the network (and potentially the Internet, subject to outbound firewalls).
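
If you have a dump of your own board’s flash, the U-Boot environment is easy to inspect: it is a little-endian CRC32 followed by NUL-separated name=value pairs. The sketch below is a simple parser for a non-redundant environment block; the offset and size of the environment within the image are board-specific, so the commented usage is only a placeholder.

```python
import zlib

def parse_uboot_env(env_block: bytes):
    """Parse a (non-redundant) U-Boot environment block: a 4-byte little-endian
    CRC32 followed by NUL-separated 'name=value' entries, terminated by an
    empty entry. Returns (crc_ok, {name: value})."""
    stored_crc = int.from_bytes(env_block[:4], "little")
    data = env_block[4:]
    crc_ok = (zlib.crc32(data) & 0xFFFFFFFF) == stored_crc
    env = {}
    for entry in data.split(b"\x00"):
        if not entry:
            break                                   # empty entry marks the end
        name, _, value = entry.partition(b"=")
        env[name.decode()] = value.decode(errors="replace")
    return crc_ok, env

# Usage idea (offsets are board-specific placeholders):
# crc_ok, env = parse_uboot_env(flash[env_offset:env_offset + env_size])
# print(env.get("bootcmd"))   # does the boot command fetch anything over the network?
```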

Once the BMC operating system is compromised, it can then tamper with the main operating system. An obvious path would be to insert malicious code at boot time, via PCI Option ROMs. However, defences in this area have been strengthened since such vulnerabilities came to light.

But there’s another trick a bad BMC can do — it can simply read and write main memory once the machine is booted. The BMC is well-placed to do this, sitting on the PCI Express interconnect since it implements a basic graphics card. This means it potentially has access to large parts of system memory, and so all the data that might be stored on the server. Since the BMC also has access to the network, it’s feasible to exfiltrate that data over the Internet.

So this raises a critical question: how well is the BMC firmware defended? The BMC firmware download contains raw ARM code, and is exactly 32MiB in size. 32MiB is a common size of an SPI flash chip, and suggests this firmware image is written directly to the SPI flash at manufacture without further processing. Additionally, there’s the OpenBMC open source project which supports the AST2400. From what I can find, installing OpenBMC on the AST2400 does not require any code signing or validation process, and so modifying the firmware (for good or ill) looks quite feasible.
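
For anyone who wants to repeat these basic checks, a minimal sketch follows. The file name is a placeholder for whatever the vendor download unpacks to, and the string search is only a heuristic for a U-Boot-based image like the one described above.

```python
from pathlib import Path

FLASH_SIZE = 32 * 1024 * 1024   # 32 MiB, a common SPI flash size

def inspect_firmware(path: str) -> None:
    """Quick checks on a BMC firmware image: is it a raw, flash-sized blob,
    and does it contain the strings you would expect from a U-Boot image?"""
    image = Path(path).read_bytes()
    print("size matches a 32 MiB flash:", len(image) == FLASH_SIZE)
    print("contains 'U-Boot' marker:   ", b"U-Boot" in image)
    print("contains 'bootcmd' string:  ", b"bootcmd" in image)

inspect_firmware("bmc_firmware.bin")   # placeholder file name
```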

Where does this leave us? There are few facts, and much supposition. However, the following scenario does seem to make sense. Let’s assume an implant was added to the motherboard at manufacture time. This needed modification of both the board design and the robotic component-installation process. It intercepts the SPI lines between the flash and the BMC controller. Unless the implant was built with very advanced technology, it may be enough simply to divert the boot process to fetch firmware over the network (either from the Internet or from a compromised server in the organisation), with all the more complex attacks building from there – possibly using PCI Express and/or the BMC for exfiltration.

If the implant is less sophisticated than others have assumed, it may be feasible to block it by firewalling traffic from the BMC — but I can’t see many current owners of such a board wanting to take that risk.

So, finally, what do we learn? In essence, this story seems to pass the sniff test. But it is likely news to many people that their systems are a lot more complex than they thought, and in that complexity can lurk surprising vulnerabilities.

Dr A. Theodore Markettos is a Senior Research Associate in hardware and platform security at the University of Cambridge, Department of Computer Science and Technology.

Hacking the iPhone PIN retry counter

At our security group meeting on the 19th August, Sergei Skorobogatov demonstrated a NAND backup attack on an iPhone 5c. I typed in six wrong PINs and it locked; he removed the flash chip (which he’d desoldered and led out to a socket); he erased and restored the changed pages; he put it back in the phone; and I was able to enter a further six wrong PINs.
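
The principle behind the attack is simple: if the retry counter lives in NAND flash, then restoring a copy of the relevant pages taken before the guesses resets it. Here is a toy simulation of that logic – ours, not a model of the real iPhone firmware, with a made-up PIN and counter layout:

```python
# Toy simulation of why a NAND backup defeats a flash-resident retry counter:
# back up the 'flash' before guessing, burn some attempts, then restore it.

class ToyPhone:
    MAX_TRIES = 6

    def __init__(self, pin: str):
        self.flash = {"pin": pin, "failed_tries": 0}   # counter lives in flash

    def try_pin(self, guess: str) -> bool:
        if self.flash["failed_tries"] >= self.MAX_TRIES:
            raise RuntimeError("device locked")
        if guess == self.flash["pin"]:
            return True
        self.flash["failed_tries"] += 1
        return False

phone = ToyPhone("4967")
backup = dict(phone.flash)                  # desolder and image the NAND
for guess in ("0000", "0001", "0002", "0003", "0004", "0005"):
    phone.try_pin(guess)                    # six wrong PINs: now locked
phone.flash = dict(backup)                  # restore the saved pages
print(phone.try_pin("0006"))                # no longer locked: six more guesses available
```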

Sergei has today released a paper describing the attack.

During the recent fight between the FBI and Apple, FBI Director Jim Comey said this kind of attack wouldn’t work.

Efficient multivariate statistical techniques for extracting secrets from electronic devices

That’s the title of my PhD thesis, supervised by Markus Kuhn, which has become available recently as CL tech report 878:
http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-878.html

In this thesis I provide a detailed presentation of template attacks, which are considered the most powerful kind of side-channel attack, and I describe several methods for implementing and evaluating this attack efficiently in different scenarios.

Among other things, these contributions may allow evaluation labs to perform their evaluations faster; they show that we can determine an 8-bit target value almost perfectly, even when this value is manipulated by only a single LOAD instruction (possibly the best published results of this kind); and they show how to cope with differences across devices.
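
The thesis and the scripts linked below use MATLAB, but the core of a template attack is compact enough to sketch in Python: in a profiling phase, build a multivariate-Gaussian template (a mean vector plus a pooled covariance) for each candidate value from traces where that value is known, then classify attack traces by log-likelihood. The names here are ours, and the sketch omits the compression and efficiency techniques that the thesis is actually about.

```python
import numpy as np

def build_templates(profiling_traces, profiling_values):
    """profiling_traces: (N, d) array of leakage samples; profiling_values: (N,)
    array of known target values. Returns per-value mean vectors and a pooled
    covariance matrix estimated from all classes together."""
    values = np.unique(profiling_values)
    means = {v: profiling_traces[profiling_values == v].mean(axis=0) for v in values}
    centred = np.vstack([profiling_traces[profiling_values == v] - means[v] for v in values])
    pooled_cov = np.cov(centred, rowvar=False)
    return means, pooled_cov

def classify(trace, means, pooled_cov):
    """Return the candidate value with the highest Gaussian log-likelihood
    (terms that are identical for all candidates are dropped)."""
    inv = np.linalg.pinv(pooled_cov)
    scores = {v: -0.5 * (trace - m) @ inv @ (trace - m) for v, m in means.items()}
    return max(scores, key=scores.get)
```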

Some of the datasets used in my experiments along with MATLAB scripts for reproducing my results are available here:
http://www.cl.cam.ac.uk/research/security/datasets/grizzly/


Four cool new jobs

We’re advertising for four people to join the security group from October.

The first three are for two software engineers to join our new cybercrime centre, developing new ways of finding bad guys in the terabytes (and soon petabytes) of data we get on spam, phish and other bad stuff online, and for a lawyer to explore and define the boundaries of how we share cybercrime data.

The fourth is in Security analysis of semiconductor memory. Could you help us come up with neat new ways of hacking chips? We’ve invented quite a few of these in the past, ranging from optical fault induction to semi-invasive attacks more generally. What’s next?

There exists a classical model of the photon after all

Many people assume that quantum mechanics cannot emerge from classical phenomena, because no-one has so far been able to think of a classical model of light that is consistent with Maxwell’s equations and reproduces the Bell test results quantitatively.

Today Robert Brady and I unveil just such a model. It turns out that the solution was almost in plain sight, in James Clerk Maxwell’s 1861 paper On Physical Lines of Force, in which he derived Maxwell’s equations on the assumption that magnetic lines of force were vortices in a fluid. Updating this with modern knowledge of quantised magnetic flux, we show that if you model a flux tube as a phase vortex in an inviscid compressible fluid, then wavepackets sent down this vortex obey Maxwell’s equations to first order; that they can have linear or circular polarisation; and that the correlation measured between the polarisation of two cogenerated wavepackets is exactly the same as is predicted by quantum mechanics and measured in the Bell tests.

This follows work last year in which we explained Yves Couder’s beautiful bouncing-droplet experiments. There, a completely classical system is able to exhibit quantum-mechanical behaviour as the wavefunction ψ appears as a modulation on the driving oscillation, which provides coherence across the system. Similarly, in the phase vortex model, the magnetic field provides the long-range order and the photon is a modulation of it.

We presented this work yesterday at the 2015 Symposium of the Trinity Mathematical Society. Our talk slides are here and there is an audio recording here.

If our sums add up, the consequences could be profound. First, it will explain why quantum computers don’t work, and blow away the security ‘proofs’ for entanglement-based quantum cryptosystems (we already wrote about that here and here). Second, if the fundamental particles are just quasiparticles in a superfluid quantum vacuum, there is real hope that we can eventually work out where all the mysterious constants in the Standard Model come from. And third, there is no longer any reason to believe in multiple universes, or effects that propagate faster than light or backward in time – indeed the whole ‘spooky action at a distance’ to which Einstein took such exception. He believed that action in physics was local and causal, as most people do; our paper shows that the main empirical argument against classical models of reality is unsound.

Curfew tags – the gory details

In previous posts I told the story of how Britain’s curfew tagging system can fail. Some prisoners are released early provided they wear a tag to enforce a curfew, which typically means that they have to stay home from 7pm to 7am; some petty offenders get a curfew instead of a prison sentence; and some people accused of serious crimes are tagged while on bail. In dozens of cases, curfewees had been accused of tampering with their tags, but had denied doing so. In a series of these cases, colleagues and I were engaged as experts, but when we demanded tags for testing, the prosecution was withdrawn and the case collapsed. In the most famous case, three men accused of terrorist offences were released; although one has since absconded, the other two are now free in the UK.

This year, a case finally came to trial. Our client, to whom we must refer simply as “Special Z”, was accused of tag tampering, which he denied vigorously. I was instructed as an expert along with my colleague Dr James Dean of Materials Science. Here is my expert report, together with James’s report and addendum, as well as a video of a tag being removed using much less than the amount of force required by the system specification.

The judge was not ready to set a precedent that could have thrown the UK tagging system into chaos. However, I understand our client has now been released on other grounds. Although the court did order us to hand back all the tags, and fragments of broken tags, so as to protect G4S’s intellectual property, it did not make a secrecy order on our expert reports. We publish them here in the hope that they might provide useful guidance to defendants in similar cases in the future, and to policymakers when tagging contracts come up for renewal, whether in the UK or overseas.

Nikka – Digital Strongbox (Crypto as Service)

Imagine that somewhere on the internet, in a place no-one trusts, there is a piece of hardware – a small computer – that works just for you. You can trust it. You can depend on it. Things may get rough, but it will stay there to get you through. That is Nikka: the fixed point on which you can build your security and trust. [Now a Kickstarter project]

You may remember our proof-of-concept implementation of password protection for servers – Hardware Scrambling (published here in March). The password scrambler was a small dongle that could be plugged into a Linux computer (we used a Raspberry Pi). Its only purpose was to provide a simple API for encrypting passwords (though it could equally handle credit card numbers or anything else up to 32 bytes in length). The beginning of something big?
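
For readers who missed that post, the idea in miniature: the server hands each password to the dongle, which returns a keyed hash computed with a secret that never leaves the hardware, so a leaked database of hashes is useless without the device. Below is a toy sketch of the dongle-side computation; here the key is just a variable in memory, which is precisely what the real hardware avoids.

```python
import hmac, hashlib, os

# Toy model of the scrambler's job: keyed-hash whatever the host sends
# (a password, a card number, anything up to 32 bytes) with a secret key.
# On the real dongle the key lives in hardware and is never exposed.
DEVICE_KEY = os.urandom(32)

def scramble(secret: bytes) -> bytes:
    if len(secret) > 32:
        raise ValueError("input limited to 32 bytes")
    return hmac.new(DEVICE_KEY, secret, hashlib.sha256).digest()

# The server stores only the HMAC; without the dongle (and its key),
# an offline dictionary attack on a leaked database gets nowhere.
stored = scramble(b"correct horse battery")
print(stored.hex())
```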

It received some attention (Ars Technica, Slashdot, LWN, …), certainly more than we expected at the time. The discussions that followed also taught us a couple of lessons about how people (mostly geeks, in this context) view security – particularly the default distrust expressed by those who discussed the articles describing our password scrambler.

We eventually decided to build a proper hardware cryptographic platform that could be used for cloud applications. Our requirements were simple. We wanted something fast, “secure” (CC EAL5+ or even FIPS 140-2 certified), scalable, easy to use (no complicated API, just one function call) and provided as a service, so that no-one has to pay the upfront price of an HSM if they just want to have a go at using proper cryptography for their new or old application. That was the beginning of Nikka.


This is our concept: Nikka comprises a set of powerful servers installed in secure data centres. These servers can form clusters to deliver high availability and scalability for their clients. Secure hardware forms the backbone of each server and provides a simple interface for applications to use. The second part of Nikka consists of user applications, plugins, and libraries for easy deployment and everyday “invisible” use. Operational procedures, processes, policies, and audit logs then guarantee that what we say is actually being done.

We have been building it for a few months now and the scalable cryptographic core seems to work. We have managed to run long-term tests of 150 HMAC transactions per second (HMAC & RNG for password scrambling) on a small development platform while fully utilising available secure hardware. The server is hosted at ideaSpace and we use it to run functional, configuration and load tests.

We have never before designed a system with so many independent processes – the core is completely asynchronous (starting with Netty for the TCP interface) – and we have quickly come to appreciate the detailed trace logging we implemented from the very beginning. Each time we start digging we find something interesting. Real-time visualisation of the performance is quite nice as well.

Nikka is basically a general-purpose cryptographic engine with a middleware layer for easy integration. The password HMAC is this time used only as one of the test applications. Users can share or reserve processing units that have Common Criteria evaluations or even FIPS 140-2 certification – with optional physical hardware separation between users.

If you like what you have read so far, you can keep reading, watching and supporting us on Kickstarter. It has been great fun so far and we want to turn it into something useful in 2015. If it sounds interesting and you would like to test it early next year, let us know! @DanCvrcek