Category Archives: Academic papers


Formal CHERI: rigorous engineering and design-time proof of full-scale architecture security properties

Memory safety bugs continue to be a major source of security vulnerabilities, with their root causes ingrained in the industry:

  • the C and C++ systems programming languages that do not enforce memory protection, and the huge legacy codebase written in them that we depend on;
  • the legacy design choices of hardware that provides only coarse-grain protection mechanisms, based on virtual memory; and
  • test-and-debug development methods, in which only a tiny fraction of all possible execution paths can be checked, leaving ample unexplored corners for exploitable bugs.

Over the last twelve years, the CHERI project has been working on addressing the first two of these problems by extending conventional hardware Instruction-Set Architectures (ISAs) with new architectural features to enable fine-grained memory protection and highly scalable software compartmentalisation, prototyped first as CHERI-MIPS and CHERI-RISC-V architecture designs and FPGA implementations, with an extensive software stack ported to run above them.
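To give a flavour of what fine-grained memory protection means here, the sketch below models a CHERI-style capability as a bounds-and-permissions value checked on every access. This is our own simplified Python illustration, not CHERI's actual encoding (real capabilities are 128-bit hardware values with compressed bounds and tag bits):

    # Hypothetical model of a CHERI-style capability check (illustration only).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Capability:
        base: int         # lowest address this capability may access
        length: int       # size of the accessible region in bytes
        perms: frozenset  # e.g. frozenset({"load", "store"})

    def checked_load(mem: bytearray, cap: Capability, addr: int, size: int) -> bytes:
        # In CHERI, the hardware performs checks like these on every dereference.
        if "load" not in cap.perms:
            raise PermissionError("capability lacks load permission")
        if addr < cap.base or addr + size > cap.base + cap.length:
            raise MemoryError("capability bounds violation")
        return bytes(mem[addr:addr + size])

    mem = bytearray(1024)
    buf = Capability(base=64, length=16, perms=frozenset({"load"}))
    checked_load(mem, buf, 64, 16)    # in bounds: succeeds
    # checked_load(mem, buf, 64, 17)  # one byte past the end: would trap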

The academic experimental results are very promising, but achieving widespread adoption of CHERI needs an industry-scale evaluation of a high-performance silicon processor implementation and software stack. To that end, Arm have developed Morello, a CHERI-enabled prototype architecture (extending Armv8.2-A), processor (adapting the high-performance Neoverse N1 design), system-on-chip (SoC), and development board, within the UKRI Digital Security by Design (DSbD) Programme (see our earlier blog post on Morello). Morello is now being evaluated in a range of academic and industry projects.


But how do we ensure that such a new architecture actually provides the security guarantees it aims to provide? This is crucial: any security flaw in the architecture will be present in every conforming hardware implementation, and will quite likely be impossible to fix or work around after deployment.

In this blog post, we describe how we used rigorous engineering methods to provide high assurance of key security properties of CHERI architectures, with machine-checked mathematical proof, as well as to complement and support traditional design and development workflows, e.g. by automatically generating test suites. This addresses the third problem, showing that, by judicious use of rigorous semantics at design time, we can do much better than test-and-debug development.

Text mining is harder than you think

Following last year’s row about Apple’s proposal to scan all the photos on your iPhone camera roll, EU Commissioner Johansson proposed a child sex abuse regulation that would compel providers of end-to-end encrypted messaging services to scan all messages in the client, and not just for historical abuse images but for new abuse images and for text messages containing evidence of grooming.

Now that journalists are distracted by the imminent downfall of our great leader, the Home Office seems to think this is a good time to propose some amendments to the Online Safety Bill that will have a similar effect. And while the EU planned to win the argument against the pedophiles first and then expand the scope to terrorist radicalisation and recruitment too, Priti Patel goes for the terrorists from day one. There’s some press coverage in the Guardian and the BBC.

We explained last year why client-side scanning is a bad idea. However, the shift of focus from historical abuse images to text scanning makes the government story even less plausible.

Detecting online wickedness from text messages alone is hard. Since 2016, we have collected over 99m messages from cybercrime forums and over 49m from extremist forums, and these corpora are used by 179 licensees in 55 groups from 42 universities in 18 countries worldwide. Detecting hate speech is a good proxy for detecting terrorist radicalisation. In 2018, we thought we could detect hate speech with a precision of typically 92%, meaning that about 8% of the messages flagged would be false alarms. But the more complex models of 2022, based on Google’s BERT, when tested on the better collections we have now, don’t do significantly better; indeed, now that we understand the problem in more detail, they often do worse. Do read that paper if you want to understand why hate-speech detection is an interesting scientific problem. With some specific kinds of hate speech it’s even harder; an example is anti-semitism, thanks to the large number of synonyms for Jewish people. So if we were to scan 10bn messages a day in Europe, there would be maybe a billion false alarms for Europol to look at.
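A back-of-the-envelope calculation makes the scaling problem concrete. This is just the arithmetic on the rough figures quoted above, not a new measurement:

    # Rough scaling arithmetic using the figures quoted above (assumptions,
    # not new measurements).
    messages_per_day = 10_000_000_000   # ~10bn messages a day in Europe
    false_alarm_rate = 0.08             # the ~8% implied by 92% precision

    false_alarms = messages_per_day * false_alarm_rate
    print(f"{false_alarms:,.0f} false alarms per day")   # 800,000,000
    # Even if the error rate were ten times better, tens of millions of
    # daily alerts would still swamp any realistic human triage capacity.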

We’ve been scanning the Internet for wickedness for over fifteen years now, and looking at various kinds of filters for everything from spam to malware. Filtering requires very low false positive rates to be feasible at Internet scale, which means either looking for very specific things (such as indicators of compromise by a specific piece of malware) or by having rich metadata (such as a big spam run from some IP address space you know to be compromised). Whatever filtering Facebook can do on Messenger given its rich social context, there will be much less that a WhatsApp client can do by scanning each text on its way through.

So if you really wish to believe that either the EU’s CSA Regulation or the UK’s Online Safety Bill is an honest attempt to protect kids or catch terrorists, good luck.

European Commission prefers breaking privacy to protecting kids

Today, May 11, EU Commissioner Ylva Johansson announced a new law to combat online child sex abuse. This has an overt purpose, and a covert purpose.

The overt purpose is to pressure tech companies to take down illegal material, and material that might possibly be illegal, more quickly. A new agency is to be set up in the Hague, modeled on and linked to Europol, to maintain an official database of illegal child sex-abuse images. National authorities will report abuse to this new agency, which will then require hosting providers and others to take suspect material down. The new law goes into great detail about the design of the takedown process, the forms to be used, and the redress that content providers will have if innocuous material is taken down by mistake. There are similar provisions for blocking URLs; censorship orders can be issued to ISPs in Member States.

The first problem is that this approach does not work. In our 2016 paper, Taking Down Websites to Prevent Crime, we analysed the takedown industry and found that private firms are much better at taking down websites than the police. We found that the specialist contractors who take down phishing websites for banks would typically take six hours to remove an offending website, while the Internet Watch Foundation – which has a legal monopoly on taking down child-abuse material in the UK – would often take six weeks.

We have a reasonably good understanding of why this is the case. Taking down websites means interacting with a great variety of registrars and hosting companies worldwide, and they have different ways of working. One firm expects an encrypted email; another wants you to open a ticket; yet another needs you to phone their call centre during Peking business hours and speak Mandarin. The specialist contractors have figured all this out, and have got good at it. However, police forces want to use their own forms, and expect everyone to follow police procedure. Once you’re outside your jurisdiction, this doesn’t work. Police forces also focus on process more than outcome; they have difficulty hiring and retaining staff to do detailed technical clerical work; and they’re not much good at dealing with foreigners.

Our takedown work was funded by the Home Office, and we recommended that they run a randomised controlled trial where they order a subset of UK police forces to use specialist contractors to take down criminal websites. We’re still waiting, six years later. And there’s nothing in UK law that would stop them running such a trial, or that would stop a Chief Constable outsourcing the work.

So it’s really stupid for the European Commission to mandate centralised takedown by a police agency for the whole of Europe. This will make everything really hard to fix once they find out that it doesn’t work, and it becomes obvious that child abuse websites stay up longer, causing real harm.

Oh, and the covert purpose? That is to enable the new agency to undermine end-to-end encryption by mandating client-side scanning. This is not evident on the face of the bill but is evident in the impact assessment, which praises Apple’s 2021 proposal. Colleagues and I already wrote about that in detail, so I will not repeat the arguments here. I will merely note that Europol coordinates the exploitation of communications systems by law enforcement agencies, and the Dutch National High-Tech Crime Unit has developed world-class skills at exploiting mobile phones and chat services. The most recent case of continent-wide bulk interception was EncroChat; although reporting restrictions prevent me telling the story of that, there have been multiple similar cases in recent years.

So there we have it: an attack on cryptography, designed to circumvent EU laws against bulk surveillance by using a populist appeal to child protection, appears likely to harm children instead.

CoverDrop: Securing Initial Contact for Whistleblowers

Whistleblowing is dangerous business. Whistleblowers face grave consequences if they’re caught and, to make matters worse, the anonymity set – the set of potential whistleblowers for a given story – is often quite small. Mass surveillance regimes around the world don’t help matters either. Yet whistleblowing has been crucial in exposing corruption, rape and other crimes in recent years. In our latest research paper, CoverDrop: Blowing the Whistle Through A News App, we set out to create a system that allows whistleblowers to securely make initial contact with news organisations. Our paper has been accepted at PETS, the Privacy Enhancing Technologies Symposium.

To work out how we could help whistleblowers release sensitive information to journalists without exposing their identity, we conducted two workshops with journalists, system administrators and software engineers at leading UK-based news organisations. These discussions made it clear that a significant weak point in the whistleblowing chain is the initial contact by the source to the journalist or news organisation. Sources would often get in touch over insecure channels (e.g., email, phone or SMS) and then switch to more secure channels (e.g., Tor and Signal) later on in the conversation – but by then it may be too late. 

Existing whistleblowing solutions such as SecureDrop rely on Tor for anonymity and expect a high degree of technical competence from their users. But in many cases, simply connecting to the Tor network is enough to single out the whistleblower from a small anonymity set.

CoverDrop takes a different approach. Instead of connecting to Tor, we embed the whistleblowing mechanism in the mobile news app published by respective news organisations and use the traffic generated by all users of the app as cover traffic, hiding the messages of any whistleblowers who use it. We implemented CoverDrop and have shown it to be secure against a global passive network adversary that also has the ability to issue warrants on all infrastructure as well as the source and recipient devices.

We instantiated CoverDrop in the form of an Android app with the expectation that news organisations embed CoverDrop in their standard news apps. Embedding CoverDrop into a news app provides the whistleblower with deniability as well as providing a secure means of contact to all users. This should nudge potential whistleblowers away from using insecure methods of initial contact. The whistleblowing component is a modified version of Signal, augmented with dummy messages to prevent traffic analysis. We use the Secure Element on mobile devices, SGX on servers and onion encryption to reduce the ability of an attacker to gain useful knowledge even if some system components are compromised.
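The core trick is that every copy of the news app emits fixed-size packets on a fixed schedule, so a real whistleblower message is indistinguishable from a dummy. A minimal sketch of the padding step (our own illustration, not CoverDrop's actual wire format):

    import os

    PACKET_SIZE = 512  # hypothetical fixed wire size for every packet

    def make_packet(plaintext: bytes | None) -> bytes:
        # None produces a dummy (cover) packet; in the real system the padded
        # frame would then be onion-encrypted, at which point dummy and real
        # packets are indistinguishable to an observer.
        body = plaintext or b""
        assert len(body) <= PACKET_SIZE - 2, "message must fit in one packet"
        framed = len(body).to_bytes(2, "big") + body
        return framed + os.urandom(PACKET_SIZE - len(framed))

    real = make_packet(b"initial contact from a source")
    dummy = make_packet(None)
    assert len(real) == len(dummy) == PACKET_SIZE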

The primary limitation of CoverDrop is its messaging bandwidth, which must be kept low to minimise the networking cost borne by the vast majority of news app users who are not whistleblowers. CoverDrop is designed to do a critical and difficult part of whistleblowing: establishing initial contact securely. Once a low-bandwidth communication channel is established, the source and the journalist can meet in person, or use other systems to send large documents.

The full paper can be found here.

Mansoor Ahmed-Rengers, Diana A. Vasile, Daniel Hugenroth, Alastair R. Beresford, and Ross Anderson. CoverDrop: Blowing the Whistle Through A News App. Proceedings on Privacy Enhancing Technologies, 2022.

Rollercoaster: Communicating Efficiently and Anonymously in Large Groups

End-to-end (E2E) encryption is now widely deployed in messaging apps such as WhatsApp and Signal, and billions of people around the world have the contents of their messages protected against strong adversaries. However, while the message contents are encrypted, their metadata still leaks sensitive information. For example, it is easy for an infrastructure provider to tell which customers are communicating, with whom and when.

Anonymous communication hides this metadata. This is crucial for the protection of individuals such as whistleblowers who expose criminal wrongdoing, activists organising a protest, or embassies coordinating a response to a diplomatic incident. All these face powerful adversaries for whom the communication metadata alone (without knowing the specific message text) can result in harm for the individuals concerned.

Tor is a popular tool that achieves anonymous communication by forwarding messages through multiple intermediate nodes or relays. At each relay the outermost layer of the message is decrypted and the inner message is forwarded to the next relay. An adversary who wants to figure out where A’s messages are finally delivered can attempt to follow a message as it passes through each relay. Alternatively, an adversary might confirm a suspicion that user A talks to user B by observing traffic patterns at A’s and B’s access points to the network instead. If indeed A and B are talking to each other, there will be a correlation between their traffic patterns. For instance, if an adversary observes that A sends three messages and three messages arrive at B shortly afterwards, this provides some evidence that A talks to B. The adversary can increase their certainty by collecting traffic over a longer period of time.
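A toy example of such a correlation test (our own sketch, purely to illustrate the attack): if every send by A is followed shortly by a receive at B, the adversary's confidence grows with each observation.

    # Toy traffic-correlation check: is each of A's sends followed by a
    # receive at B within a plausible network delay? (Illustration only.)
    def correlated(send_times, recv_times, max_delay=2.0):
        recvs = sorted(recv_times)
        return all(any(0.0 <= r - s <= max_delay for r in recvs)
                   for s in send_times)

    a_sends    = [10.0, 14.2, 30.5]        # seconds, seen at A's access point
    b_receives = [10.8, 15.1, 31.0, 42.0]  # seen at B's access point
    print(correlated(a_sends, b_receives)) # True: evidence that A talks to B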

Mix networks such as Loopix use a different design, which defends against such traffic analysis attacks by using (i) traffic shaping and (ii) more intermediate nodes, so-called mix nodes. In a simple mix network, each client only sends packets of a fixed length and at predefined intervals (e.g. 1 KiB every 5 seconds). When there is no payload to send, a cover packet is crafted that is indistinguishable to the adversary from a payload packet. If there is more than one payload packet to be sent, packets are queued and sent one by one on the predefined schedule. This traffic shaping ensures that an observer cannot gain any information from observing outgoing network packets. Moreover, mix nodes typically delay each incoming message by a random amount of time before forwarding it (with the delay chosen independently for each message), making it harder for an adversary to correlate a mix node’s incoming and outgoing messages, since they are likely to be reordered. In contrast, Tor relays forward messages as soon as possible in order to minimise latency.
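In code, the client-side traffic shaping is a simple loop: one fixed-size packet per interval, taken from the payload queue if possible and otherwise a cover packet. A sketch under the 1 KiB / 5 s parameters used as the example above (not Loopix's actual implementation):

    import os, queue, time

    PACKET_SIZE, INTERVAL = 1024, 5.0   # 1 KiB every 5 seconds, as above
    outbox: "queue.Queue[bytes]" = queue.Queue()   # queued payload packets

    def next_packet() -> bytes:
        # A queued payload packet if one is waiting, else a cover packet;
        # once encrypted, the two are indistinguishable on the wire.
        try:
            return outbox.get_nowait()
        except queue.Empty:
            return os.urandom(PACKET_SIZE)  # stand-in for an encrypted dummy

    def send_loop(transmit, rounds: int) -> None:
        for _ in range(rounds):             # exactly one packet per interval
            transmit(next_packet())
            time.sleep(INTERVAL)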

Mix networks work well for pairwise communication, but we found that group communication creates a unique challenge. Such group communication encompasses both traditional chat groups (e.g. WhatsApp groups or IRC) and collaborative editing (e.g. Google Docs, calendar sync, todo lists) where updates need to be disseminated to all other participants who are viewing or editing the content. There are many scenarios where anonymity requirements meet group communication, such as coordination between activists, diplomatic correspondence between embassies, and organisation of political campaigns.

The traffic shaping of mix networks makes efficient group communication difficult. The limited rate of outgoing messages means that sequentially sending a message to each group member can take a long time. For instance, assuming that the outgoing rate is 1 message every 5 seconds, it will take more than 8 minutes to send the message to all members in a group of size 100. During this process the sender’s output queue is blocked and they cannot send any other messages.

In our paper we propose a scheme named Rollercoaster that greatly improves the latency for group communication in mix networks. The basic idea is that group members who have already received a message can help distribute it to other members of the group. Like a chain reaction, the distribution of the message gains momentum as the number of recipients grows. In an ideal execution of this scheme, the number of users who have received a message doubles with every round, leading to substantially more efficient message delivery across the group.
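The effect on latency is easy to quantify: under the idealised doubling assumption, delivery takes ceil(log2(n)) rounds instead of n-1 sequential sends. A quick sketch using the 5-second send interval from the earlier example:

    import math

    RATE = 5.0  # seconds per outgoing message, as in the earlier example

    def sequential_latency(n: int) -> float:
        # The sender delivers to each of the other n-1 members in turn.
        return (n - 1) * RATE

    def rollercoaster_latency(n: int) -> float:
        # Idealised doubling: informed members re-forward, so the number of
        # recipients doubles each round -> ceil(log2(n)) rounds in total.
        return math.ceil(math.log2(n)) * RATE

    print(sequential_latency(100))      # 495.0 s -- over 8 minutes
    print(rollercoaster_latency(100))   # 35.0 s  -- 7 rounds of 5 s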

Rollercoaster works well because there is typically plenty of spare capacity in the network. At any given time most clients will not be actively communicating and they are therefore mostly sending cover traffic. As a result, Rollercoaster actually improves the efficiency of the network and reduces the rate of cover traffic, which in turn reduces the overall required network bandwidth. At the same time, Rollercoaster does not require any changes to the existing Mix network protocol and can benefit from the existing user base and anonymity set.

The basic idea requires more careful consideration in a realistic environment where clients are offline or do not behave faithfully. A fault-tolerant version of our Rollercoaster scheme addresses these concerns by waiting for acknowledgement messages from recipients. If those acknowledgement messages are not received by the sender in a fixed period of time, forwarding roles are reassigned and another delivery attempt is made via a new route. We also show how a single number can seed the generation of a deterministic forwarding schedule. This allows efficient communication of different forwarding schedules and balances individual workloads within the group.
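As an illustration of the seeded-schedule idea (our own sketch; the paper's schedule generation also handles workload balancing and fault tolerance), every member can derive the identical forwarding order from one shared number:

    import random

    def forwarding_schedule(seed: int, members: list[str]) -> list[str]:
        # Every device derives the same order from the shared seed, so only
        # the seed itself ever needs to be transmitted.
        rng = random.Random(seed)   # deterministic PRNG
        order = list(members)
        rng.shuffle(order)
        return order

    group = ["alice", "bob", "carol", "dave"]
    assert forwarding_schedule(42, group) == forwarding_schedule(42, group)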

We presented our paper at USENIX Security ‘21 (paper, slides, and recording). It contains more extensions and optimisations than we can summarise here. There is also an extended version available as a tech report with more detailed security arguments in the appendices. The paper reference is:
Daniel Hugenroth, Martin Kleppmann, and Alastair R. Beresford. Rollercoaster: An Efficient Group-Multicast Scheme for Mix Networks. Proceedings of the 30th USENIX Security Symposium (USENIX Security), 2021.

Trojan Source: Invisible Vulnerabilities

Today we are releasing Trojan Source: Invisible Vulnerabilities, a paper describing cool new tricks for crafting targeted vulnerabilities that are invisible to human code reviewers.

Until now, an adversary wanting to smuggle a vulnerability into software could try inserting an unobtrusive bug in an obscure piece of code. Critical open-source projects such as operating systems depend on human review of all new code to detect malicious contributions by volunteers. So how might wicked code evade human eyes?

We have discovered ways of manipulating the encoding of source code files so that human viewers and compilers see different logic. One particularly pernicious method uses Unicode directionality override characters to display code as an anagram of its true logic. We’ve verified that this attack works against C, C++, C#, JavaScript, Java, Rust, Go, and Python, and suspect that it will work against most other modern languages.
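The underlying building block is that Unicode control characters change what is displayed without changing what the language implementation sees. A minimal Python illustration (not one of the paper's actual per-language proofs of concept):

    # U+202E (RIGHT-TO-LEFT OVERRIDE) is invisible in many editors, yet it is
    # part of the string value the interpreter compares.
    s = "\u202euser"
    print(s == "user")   # False: the values differ by an invisible character
    print(len(s))        # 5, not 4 -- the override character is really there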

This potentially devastating attack is tracked as CVE-2021-42574, while a related attack that uses homoglyphs – visually similar characters – is tracked as CVE-2021-42694. This work has been under embargo for a 99-day period, giving time for a major coordinated disclosure effort in which many compilers, interpreters, code editors, and repositories have implemented defenses.
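The homoglyph variant is even simpler to demonstrate: two identifiers or strings can render identically while being entirely different to the compiler. For example (our illustration):

    # Cyrillic 'а' (U+0430) renders like Latin 'a' in most fonts.
    latin, lookalike = "admin", "\u0430dmin"
    print(latin == lookalike)   # False, though both display as "admin"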

This attack was inspired by our recent work on Imperceptible Perturbations, where we use directionality overrides, homoglyphs, and other Unicode features to break the text-based machine learning systems used for toxic content filtering, machine translation, and many other NLP tasks.

More information about the Trojan Source attack can be found at trojansource.codes, and proofs of concept can also be found on GitHub. The full paper can be found here.

Bugs in our pockets?

In August, Apple announced a system to check all our iPhones for illegal images, then delayed its launch after widespread pushback. Yet some governments continue to press for just such a surveillance system, and the EU is due to announce a new child protection law at the start of December.

Now, in Bugs in our Pockets: The Risks of Client-Side Scanning, colleagues and I take a long hard look at the options for mass surveillance via software embedded in people’s devices, as opposed to the current practice of monitoring our communications. Client-side scanning, as the agencies’ new wet dream is called, has a range of possible missions. While Apple and the FBI talked about finding still images of sex abuse, the EU was talking last year about videos and text too, and of targeting terrorism once the argument had been won on child protection. It can also use a number of possible technologies; in addition to the perceptual hash functions in the Apple proposal, there’s talk of machine-learning models. And, as a leaked EU internal report made clear, the preferred outcome for governments may be a mix of client-side and server-side scanning.
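For readers unfamiliar with perceptual hashing: unlike a cryptographic hash, it is designed so that visually similar images produce similar digests. A minimal average-hash sketch (our own illustration of the general technique; Apple's NeuralHash is a learned function, not this simple scheme):

    def average_hash(pixels: list[list[int]]) -> int:
        # One bit per pixel: is it brighter than the image's mean?
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        bits = 0
        for p in flat:
            bits = (bits << 1) | (p >= mean)
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")   # small distance => likely a match

    img      = [[10, 200], [220, 30]]
    brighter = [[20, 210], [230, 40]]  # same scene, uniformly brightened
    print(hamming(average_hash(img), average_hash(brighter)))  # 0: a match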

In our report, we provide a detailed analysis of scanning capabilities at both the client and the server, the trade-offs between false positives and false negatives, and the side effects – such as the ways in which adding scanning systems to citizens’ devices will open them up to new types of attack.

We did not set out to praise Apple’s proposal, but we ended up concluding that it was probably about the best that could be done. Even so, it did not come close to providing a system that a rational person might consider trustworthy.

Even if the engineering on the phone were perfect, a scanner brings within the user’s trust perimeter all those involved in targeting it – in deciding which photos go on the naughty list, or how to train any machine-learning models that riffle through your texts or watch your videos. Even if it starts out trained on images of child abuse that all agree are illegal, it’s easy for both insiders and outsiders to manipulate images to create both false negatives and false positives. The more we look at the detail, the less attractive such a system becomes. The measures required to limit the obvious abuses so constrain the design space that you end up with something that could not be very effective as a policing tool; and if the European institutions were to mandate its use – and there have already been some legislative skirmishes – they would open up their citizens to quite a range of avoidable harms. And that’s before you stop to remember that the European Court of Justice struck down the Data Retention Directive on the grounds that such bulk surveillance, without warrant or suspicion, was a grossly disproportionate infringement on privacy, even in the fight against terrorism. A client-side scanning mandate would invite the same fate.

But ‘if you build it, they will come’. If device vendors are compelled to install remote surveillance, the demands will start to roll in. Who could possibly be so cold-hearted as to argue against the system being extended to search for missing children? Then President Xi will want to know who has photos of the Dalai Lama, or of men standing in front of tanks; and copyright lawyers will get court orders blocking whatever they claim infringes their clients’ rights. Our phones, which have grown into extensions of our intimate private space, will be ours no more; they will be private no more; and we will all be less secure.

WEIS 2021 – Liveblog

I’ll be trying to liveblog the twentieth Workshop on the Economics of Information Security (WEIS), which is being held online today and tomorrow (June 28/29). The event was introduced by the co-chairs Dan Arce and Tyler Moore. 38 papers were submitted, and 15 accepted. My summaries of the sessions of accepted papers will appear as followups to this post; there will also be a panel session on the 29th, followed by a rump session for late-breaking results. (Added later: videos of the sessions are linked from the start of the followups that describe them.)