Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

PhD studentship in side-channel security

I can offer a 3.5-year PhD studentship on radio-frequency side-channel security, starting in October 2019, to applicants interested in hardware security, radio communication, and digital signal processing. Due to the funding source, this studentship is restricted to UK nationals, or applicants who have been resident in the UK for the past 10 years. Contact me for details.

Open Source Summit Europe

I am at the 2018 Open Source Summit Europe in Edinburgh, where I’ll be speaking about Hyperledger projects. In follow-ups to this post, I’ll live-blog security-related talks and workshops.

The first workshop of the summit I attended was a crash-course introduction to EdgeX Foundry by Jim White, the organization’s chief architect. EdgeX Foundry is an open source, vendor-neutral software framework for IoT edge computing. One of the primary challenges it faces is the sheer number of protocols and standards that need to be supported in the IoT space, both on the south side (various sensors and actuators) and on the north side (cloud providers, on-premise servers). To handle this, EdgeX uses a microservices-based architecture in which all components interact via configurable APIs and developers can choose to customize any component. While this architecture does help to alleviate the scaling issue, it raises interesting questions about API security and device management. How do we ensure the integrity of the control and command modules when those modules are themselves federated across multiple third-party-contributed microservices? The EdgeX team is aware of these issues, and they are a major focus for upcoming releases.
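
As a concrete illustration of the kind of inter-service API-security question this raises (a generic sketch of authenticated service-to-service calls, not EdgeX’s actual mechanism), one minimal pattern is for a receiving microservice to verify an HMAC over the request body, using a key provisioned at deployment, before acting on a command:

    import hashlib
    import hmac

    # Hypothetical shared key provisioned to both microservices at deployment time.
    SHARED_KEY = b"example-key-provisioned-out-of-band"

    def sign_request(body: bytes) -> str:
        """Tag a calling microservice attaches to its request."""
        return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

    def verify_request(body: bytes, tag: str) -> bool:
        """Check the tag before a command/control microservice acts on the request."""
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    # Example: an actuator command passed between services.
    command = b'{"device": "valve-3", "action": "close"}'
    assert verify_request(command, sign_request(command))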

Privacy for Tigers

As mobile phone masts went up across the world’s jungles, savannas and mountains, so did poaching. Wildlife crime syndicates can not only coordinate better but can mine growing public data sets, often of geotagged images. Privacy matters for tigers, for snow leopards, for elephants and rhinos – and even for tortoises and sharks. Animal data protection laws, where they exist at all, are oblivious to these new threats, and no-one seems to have started to think seriously about information security.

So we have been doing some work on this, and presented some initial ideas via an invited talk at Usenix Security in August. A video of the talk is now online.

The most serious poaching threats involve insiders: game guards who go over to the dark side, corrupt officials, and (now) the compromise of data and tools assembled for scientific and conservation purposes. Aggregation of data makes things worse; I might not care too much about a single geotagged photo, but a corpus of thousands of such photos tells a poacher where to set his traps. Cool new AI tools for recognising individual animals can make his work even easier. So people developing systems to help in the conservation mission need to start paying attention to computer security. Compartmentation is necessary, but there are hundreds of conservancies and game reserves, many of which are mutually mistrustful; there is no central authority at Fort Meade to manage classifications and clearances. Data sharing is haphazard and poorly understood, and the limits of open data are only now starting to be recognised. What sort of policies do we need to support, and what sort of tools do we need to create?

This is joint work with Tanya Berger-Wolf of Wildbook, one of the wildlife data aggregation sites, which is currently redeveloping its core systems to incorporate and test the ideas we describe. We are also working to spread the word to both conservators and online service firms.

Making sense of the Supermicro motherboard attack

There has been a lot of ‘fog of war’ regarding the alleged implantation of Trojan hardware into Supermicro servers at manufacturing time. Other analyses have cast doubt on the story. But do all the pieces pass the sniff test?

In brief, the allegation is that an implant was added at manufacturing time, attached to the Baseboard Management Controller (BMC). When a desktop computer has a problem, common approaches are to reboot it or to reinstall the operating system. However in a datacenter it isn’t possible to physically walk up to the machine to do these things, so the BMC allows administrators to do them over the network.

Crucially, because the BMC has the ability to install the operating system, it can disrupt the process that boots the operating system – and fetch potentially malicious implant code, maybe even over the Internet.

The Bloomberg Businessweek reports are low on technical details, but they do show two interesting things. The first is a picture of the alleged implant. This shows a 6-pin silicon chip inside a roughly 1mm x 2mm ceramic package – as often used for capacitors and other so-called ‘passive’ components, which are typically overlooked.

The other is an animation highlighting this implant chip on a motherboard. Extracting the images from this animation shows the base image is of a Supermicro B1DRi board. As others have noted, this is mounted in a spare footprint between the BMC chip and a Serial-Peripheral Interface (SPI) flash chip that likely contains the BMC’s firmware. Perhaps the animation is an artist’s concept only, but this is just the right place to compromise the BMC.

SPI is a popular format for firmware flash memories – it’s a relatively simple, relatively slow interface, using only four signal wires. Quad SPI (QSPI), a faster variant, uses six wires. The Supermicro board here appears to have a QSPI chip, but also a space for an SPI chip as a manufacturing-time option. The alleged implant is mounted in part of the space where the SPI chip would go. Limited interception or modification of SPI communication is something that a medium-complexity digital chip (a basic custom chip, or an off-the-shelf programmable CPLD) could do, though not to a great extent. Six pins is enough to intercept the four SPI wires, plus two for power. The packaging of this implant would, however, be completely custom.
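
To get a feel for how little logic such an interception needs, here is a rough model (my own illustration, not a reconstruction of the alleged device) of the one decision an SPI interceptor has to make: recognise a flash READ command and overlay its own bytes when the address falls in a region it cares about. The opcode and address format below are standard SPI NOR flash conventions; the target region and patch contents are hypothetical.

    # Illustrative model of an SPI NOR flash read transaction, as seen by a device
    # sitting on the bus: a one-byte READ opcode (0x03), a three-byte address, then
    # the data bytes the flash clocks out.
    READ_OPCODE  = 0x03
    TARGET_START = 0x000000     # e.g. the first block of the boot loader
    TARGET_END   = 0x000400

    def implant_filter(transaction: bytes, patch: bytes) -> bytes:
        """Return what the BMC would see if an interceptor patched this read."""
        if not transaction or transaction[0] != READ_OPCODE:
            return transaction                            # pass everything else through
        addr = int.from_bytes(transaction[1:4], "big")
        data = bytearray(transaction[4:])
        for i in range(len(data)):
            if TARGET_START <= addr + i < TARGET_END:
                data[i] = patch[(addr + i) % len(patch)]  # overlay the implant's bytes
        return transaction[:4] + bytes(data)

    # Example: an 8-byte read starting at address 0x10 comes back patched.
    txn = bytes([READ_OPCODE, 0x00, 0x00, 0x10]) + b"\xff" * 8
    print(implant_filter(txn, b"\x00" * 16).hex())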

What can an implant attached to the SPI wires do? The BMC itself is a computer, running an operating system which is stored in the SPI flash chip. The manual for a MBI-6128R-T2 server containing the B1DRi shows it has an AST2400 BMC chip.

The AST2400 uses a relatively old technology – a single-core 400MHz ARM9 CPU, broadly equivalent to a cellphone from the mid 2000s. Its firmware can come via SPI.

I downloaded the B1DRi BMC firmware from the Supermicro website and did some preliminary disassembly. The AST2400 in this firmware appears to run Linux, which is plausible given it supports complicated peripherals such as PCI Express graphics and USB. (It is not news to many of us working in this field that every system already has a Linux operating system running on an ARM CPU, before power is even applied to the main Intel CPUs — but many others may find this surprising).
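
For readers who want to poke at such an image themselves, a first pass needn’t involve a disassembler at all: scanning the raw bytes for well-known signatures already tells you a lot. The sketch below shows the kind of triage I mean; the magic values are standard (U-Boot, uImage, SquashFS, gzip), and nothing about it is Supermicro-specific.

    import sys

    # Rough triage of a raw BMC firmware image: look for well-known signatures
    # rather than disassembling anything.
    SIGNATURES = {
        b"U-Boot":            "U-Boot bootloader string",
        b"\x27\x05\x19\x56":  "legacy U-Boot uImage header magic",
        b"hsqs":              "SquashFS filesystem (little-endian magic)",
        b"\x1f\x8b\x08":      "gzip-compressed data",
        b"Linux version ":    "Linux kernel version banner",
    }

    def scan(path: str) -> None:
        image = open(path, "rb").read()
        print(f"{path}: {len(image)} bytes ({len(image) / (1024 * 1024):.1f} MiB)")
        for magic, meaning in SIGNATURES.items():
            offset = image.find(magic)
            if offset >= 0:
                print(f"  0x{offset:08x}  {meaning}")

    if __name__ == "__main__":
        scan(sys.argv[1])   # e.g. a firmware image downloaded from the vendor site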

It is possible that the implant simply replaces the entire BMC firmware, but there is another way.

In order to start its own Linux, AST2400 boots using the U-Boot bootloader. I noticed one of the options is for the AST2400 to pick up its Linux OS over the network (via TFTP or NFS). If (and it’s a substantial if) this is enabled in the AST2400 bootloader, it would not take a huge amount of modification to the SPI contents to divert the boot path so that the BMC fetched its firmware over the network (and potentially the Internet, subject to outbound firewalls).
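
A crude way to check whether a network-boot path is even plausible in a given image is to pull out U-Boot environment-style “name=value” strings and look for tftp/nfs/dhcp commands in the boot variables. The following is a sketch only: environment offsets vary from board to board, and a real analysis would locate and parse the environment block (CRC32 header plus NUL-separated entries) properly.

    import re
    import sys

    # Crude scan of a raw firmware/SPI image for U-Boot environment-style
    # "name=value" strings, flagging any that would point the boot path at the network.
    NETWORK_BOOT = (b"tftpboot", b"tftp ", b"nfsroot", b"dhcp", b"bootp")

    def find_boot_vars(path: str) -> None:
        image = open(path, "rb").read()
        for m in re.finditer(rb"(boot[a-z]*|serverip|ipaddr)=[\x20-\x7e]+", image):
            entry = m.group(0)
            flag = "   <-- network boot?" if any(n in entry for n in NETWORK_BOOT) else ""
            print(f"0x{m.start():08x}  {entry.decode('ascii')}{flag}")

    if __name__ == "__main__":
        find_boot_vars(sys.argv[1])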

Once the BMC operating system is compromised, it can then tamper with the main operating system. An obvious path would be to insert malicious code at boot time, via PCI Option ROMs. However, after such vulnerabilities came to light, defences in this area have been strengthened.

But there’s another trick a bad BMC can do — it can simply read and write main memory once the machine is booted. The BMC is well-placed to do this, sitting on the PCI Express interconnect since it implements a basic graphics card. This means it potentially has access to large parts of system memory, and so all the data that might be stored on the server. Since the BMC also has access to the network, it’s feasible to exfiltrate that data over the Internet.

So this raises a critical question: how well is the BMC firmware defended? The BMC firmware download contains raw ARM code, and is exactly 32MiB in size. 32MiB is a common size of an SPI flash chip, and suggests this firmware image is written directly to the SPI flash at manufacture without further processing. Additionally, there’s the OpenBMC open source project which supports the AST2400. From what I can find, installing OpenBMC on the AST2400 does not require any code signing or validation process, and so modifying the firmware (for good or ill) looks quite feasible.

Where does this leave us? There are few facts, and much supposition. However, the following scenario does seem to make sense. Let’s assume an implant was added to the motherboard at manufacture time. This needed modification of both the board design and the robotic component-installation process. It intercepts the SPI lines between the flash and the BMC controller. Unless the implant was built with very advanced technology, it may be enough simply to divert the boot process to fetch firmware over the network (either from the Internet or from a compromised server in the organisation), and the more complex attacks build from there, possibly using PCI Express and/or the BMC for exfiltration.

If the implant is less sophisticated than others have assumed, it may be feasible to block it by firewalling traffic from the BMC — but I can’t see many current owners of such a board wanting to take that risk.

So, finally, what do we learn? In essence, this story seems to pass the sniff test. But it is likely news to many people that their systems are a lot more complex than they thought, and in that complexity can lurk surprising vulnerabilities.

Dr A. Theodore Markettos is a Senior Research Associate in hardware and platform security at the University of Cambridge, Department of Computer Science and Technology.

How Protocols Evolve

Over the last thirty years or so, we’ve seen security protocols evolving in different ways, at different speeds, and at different levels in the stack. Today’s TLS is much more complex than the early SSL of the mid-1990s; the EMV card-payment protocols we now use at ATMs are much more complex than the ISO 8583 protocols used in the eighties when ATM networking was being developed; and there are similar stories for GSM/3G/4G, SSH and much else.

How do we make sense of all this?

Our paper, Reconciling Multiple Objectives – Politics or Markets?, was particularly inspired by Jan Groenewegen’s model of innovation, according to which the rate of change depends on the granularity of change. Can a new protocol be adopted by individuals, or does it need companies to adopt it en masse for internal use, or does it need to spread through a whole ecosystem, or – the hardest case of all – does it require a change in culture, norms or values?

Security engineers tend to neglect such “soft” aspects of engineering, and we probably shouldn’t. So we sketch a model of the innovation stack for security and draw a few lessons.

Perhaps the most overlooked need in security engineering, particularly in the early stages of a system’s evolution, is recourse. Just as early ATM and point-of-sale system operators often turned away fraud victims claiming “Our systems are secure so it must have been your fault”, so nowadays people who suffer abuse on social media can find that there’s nowhere to turn. A prudent engineer should anticipate disputes, and give some thought in advance to how they should be resolved.

Reconciling Multiple Objectives appeared at Security Protocols 2017. I forgot to put the accepted version online and in the repository after the proceedings were published in late 2017. Sorry about that. Fortunately the REF rule that papers must be made open access within three months doesn’t apply to conference proceedings that are a book series; it may be of value to others to know this!

Bitter Harvest: Systematically Fingerprinting Low- and Medium-interaction Honeypots at Internet Scale

Next week we will present a new paper at USENIX WOOT 2018, in which we show that we can find low- and medium-interaction honeypots on the Internet with a few packets. So if you are running such a honeypot (Cowrie, Glastopf, Conpot etc.), then “we know where you live”, and the bad guys soon might too.

In total, we identify 7,605 honeypot instances across nine different honeypot implementations for the most important network protocols: SSH, Telnet, and HTTP.

These honeypots rely on standard libraries to implement large parts of the transport layer, but those libraries were never intended to behave identically to the systems being impersonated. We show that it is not enough to fix the identity string so that the honeypot claims to be OpenSSH or Apache rather than the underlying library, or to fix other common identifiers such as error messages. The problem is that there are literally thousands of distinguishing protocol interactions; part of the contribution of the paper is to show how to pick the “best” ones. Even worse, to fingerprint these honeypots we do not need to send any credentials, so it will be hard to tell from the logging that you have been detected.
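
To give a flavour of what a protocol-level distinguisher looks like (an illustrative probe of my own, not one of the specific distinguishers from the paper), consider how a server reacts to a deliberately malformed client identification string: a genuine OpenSSH and a library-backed honeypot need not reply with the same message, or at all.

    import socket

    def probe_ssh(host: str, port: int = 22) -> tuple[bytes, bytes]:
        """Grab the server's version banner, then record how it reacts to a
        deliberately malformed client identification string. Different SSH
        stacks need not react identically, which is the sort of behavioural
        difference a fingerprinter compares across many interactions."""
        with socket.create_connection((host, port), timeout=5) as s:
            banner = s.recv(256)
            s.sendall(b"NOT-AN-SSH-IDENTIFICATION\r\n")   # deliberately malformed
            try:
                reaction = s.recv(256)
            except socket.timeout:
                reaction = b"<no response before timeout>"
        return banner, reaction

    # Example (the host name is a placeholder):
    # print(probe_ssh("ssh-honeypot.example.org"))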

We also find that many honeypots are deployed and forgotten about: part of the fingerprinting exercise was to determine how many operators are actively patching their systems. We find that 27% of the SSH honeypots have not been updated within the last 31 months, and only 39% incorporate improvements from seven months ago. It turns out that security professionals are as bad as anyone.

We argue that our method is a ‘class break’, in that trivial patches cannot address the issue. Thus we need to move on from the current dominant architecture for low- and medium-interaction honeypots, namely Python libraries driven by Python programs. We have also developed a modified version of the OpenSSH daemon (sshd) which can front-end a Cowrie instance so that the protocol-layer distinguishers no longer work.

The paper is available here.

The two-time pad: midwife of information theory?

The NSA has declassified a fascinating account by John Tiltman, one of Britain’s top cryptanalysts during World War 2, of the work he did against Russian ciphers in the 1920s and 30s.

In it, he reveals (first para, page 8) that from the time the Russians first introduced one-time pads in 1928, they actually allowed these pads to be used twice.

This was still a vast improvement on the weak ciphers and code books the Russians had used previously. Tiltman notes ruefully that “We were hardly able to read anything at all except in the case of one or two very stereotyped proforma messages”.

Now, after Gilbert Vernam developed encryption using XOR with a key tape, Joseph Mauborgne suggested using the tape only once for security, and this may have seemed natural in the context of a cable company. When the Russians developed their manual system (which may have been inspired by the U.S. work, or by a German one-time pad developed earlier in the 1920s) they presumably reckoned that using each pad twice was safe enough.

They were spectacularly wrong. The USA started Operation Venona in 1943 to decrypt messages where one-time pads had been reused, and this later became one of the first applications of computers to cryptanalysis, leading to the exposure of spies such as Blunt and Cairncross. The late Bob Morris, chief scientist at the NSA, used to warn us enigmatically of “The Two-time pad”. The story up till now was that the Russians must have reused pads under pressure of war, when it became difficult to get couriers through to embassies. Now it seems to have been Russian policy all along.

Many people have wondered what classified war work might have inspired Claude Shannon to write his stunning papers at the end of WW2 in which he established the mathematical basis of cryptography, and of information more generally.

Good research usually comes from real problems. And here was a real problem, which demanded careful clarification of two questions. Exactly why was the one-time pad good and the two-time pad bad? And how can you measure the actual amount of information in an English (or Russian) plaintext telegram: is it more or less than half the amount of information you might squeeze into that many bits? These questions are very much sharper for the two-time pad than for rotor machines or the older field ciphers.
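
The “why” has a compact answer that is worth spelling out: if the same pad encrypts two messages, XORing the two ciphertexts cancels the pad and leaves the XOR of the two plaintexts, against which ordinary language statistics and crib-dragging work. A toy illustration:

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    p1 = b"MEET AGENT AT THE STATION AT NOON"
    p2 = b"FUNDS WILL ARRIVE BY COURIER SOON"
    pad = os.urandom(len(p1))      # a genuinely random pad

    c1 = xor(p1, pad)              # pad used once: information-theoretically secure
    c2 = xor(p2, pad)              # pad used twice: it now cancels out

    # The attacker never sees the pad, yet c1 XOR c2 == p1 XOR p2, so guessing a
    # word in one message ("crib-dragging") reveals the matching part of the other.
    assert xor(c1, c2) == xor(p1, p2)
    crib = b"MEET "                                  # guess the start of p1...
    print(xor(xor(c1, c2)[:len(crib)], crib))        # ...and out pops b"FUNDS"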

That at least was what suddenly struck me on reading Tiltman. Of course this is supposition; but perhaps there are interesting documents about Shannon’s war work to be flushed out with freedom of information requests. (Hat tip: thanks to Dave Banisar for pointing us at the Tiltman paper.)

Responsible vulnerability disclosure in Europe

There is a report out today from the European economics think-tank CEPS on how responsible vulnerability disclosure might be harmonised across Europe. I was one of the advisers to this effort which involved not just academics and NGOs but also industry.

It was inspired in part by earlier work reported here on standardisation and certification in the Internet of Things. What happens to car safety standards once cars get patched once a month, like phones and laptops? The answer is not just that safety becomes a moving target, rather than a matter of pre-market testing; we also need a regime whereby accidents, hazards, vulnerabilities and security breaches get reported. That will mean responsible disclosure not just to OEMs and component vendors, but also to safety regulators, standards bodies, traffic police, insurers and accident victims. If we get it right, we could have a learning system that becomes steadily safer and more secure. But we could also get it badly wrong.

Getting it right will involve significant organisational and legal changes, which we discussed in our earlier report and which we carry forward here. We didn’t get everything we wanted; for example, large software vendors wouldn’t support our recommendation to extend the EU Product Liability Directive to services. Nonetheless, we made some progress, so today’s report can be seen as a second step on the road.

Euro S&P

I am at the IEEE Euro Security and Privacy Conference in London.

The keynote talk was by Sunny Consolvo, who runs Google’s security and privacy UX team, and her topic was user-facing threats to privacy and security. Her first theme was browser warnings, which try to stop users doing what they want to; a warning is an interruption, it’s technical, and there’s no obvious way forward other than clicking through it. In 2013 their SSL warning had a clickthrough rate of 68%, while their more explicit and graphic malware warning had only 23% clickthrough. Mozilla’s SSL warning had a much lower rate of 33%, with an icon of a policeman and more explicit text. After four years of experimenting with watching eyes, corporate styling / branding and extra steps – none of which worked very well – they tried a strategy of clear instruction, attractive preferred choice, and unattractive alternative. The text had less jargon, a low reading level, brevity, specifics, an illustration and colour. Her CHI15 paper shows that the new design did much better, bringing the clickthrough rate down from 69% to 41%. It turns out that many factors are at play; a strong signal is site quality, but this leads many people to continue anyway to sites they have come to trust. The malware clickthrough rate is now down to 5%, and SSL to 21%. That cost five years of a huge team effort, including back-end work as well as UX. It involved huge internal fights, such as with a product manager who wanted the warning to say “this site contains malware” rather than “the site you’re trying to get to contains malware” because it was shorter. Her recent papers are here, here, and here.

A second thread of work is a longitudinal survey of public opinion on privacy, ranging from government surveillance to cyber-bullying, which has run since 2015 in sixteen countries. 84% of respondents thought limiting access to online but not public data is very or extremely important. 84% were concerned about hackers, versus 55% worried about governments and 53% about companies. 20% of Germans are very angry about government access to personal data, versus 10% of Brits. Most people believe national security justifies data access (except in South Korea), while in no country do people believe the government should have such access in order to police non-violent crime. Most people everywhere support targeted monitoring, but nowhere is there majority support for bulk surveillance. In Germany 53% believed everyone should have the right to send anonymous encrypted email, while in the UK it’s 39%. Germans were pessimistic about technology, with only 4% believing it is possible to be completely anonymous online. Over 88% believe that freedom of expression is very or extremely important and less than 1% think it unimportant; but over 70% didn’t believe that cyberbullying should be allowed. Opinions are more varied on extremist religious content, with 10.9% agreeing it should be allowed and 21% saying “it depends”.

Her third thread was intimate partner abuse, which has been experienced by 27% of women and 11% of men. There are typically three phases: a physical control phase, where the abuser has access to the survivor’s device and may install malware or even destroy devices; an escape phase, which is high-risk as the survivor tries to find a new home, a job and so on; and a life-apart phase, when they might want to shield location, email address and phone numbers to escape harassment, and may have lifelong concerns. Risks are greater for poorer people, who may not be able to just buy a new phone. Sunny gave some case stories of extreme mate-guarding and of survivors’ strategies, such as using a neighbour’s phone or a computer in a library or at work. It takes seven escape attempts on average to get to life apart. After escape, a survivor may have to restrict children’s online activities and sever mutual relationships; letting your child post anything can leak the school location and lead to the abuser turning up. She may have to change career, as it can be impossible to work as a self-employed professional if she can no longer advertise. The takeaway is that designers should focus on usability during times of high stress and high risk; they should allow users to have multiple accounts; they should design things so that someone reviewing your history cannot tell that you deleted anything; and they should push two-factor authentication, unusual-activity notifications, and incognito mode. They should also think about how a survivor can capture evidence for use in divorce and custody cases while minimising the trauma. Finally, she suggests serious research on abuse survivors in other age groups and in other countries. For more, see her paper here.

I will try to liveblog the rest of the talks in followups to this post.