European Commission prefers breaking privacy to protecting kids

Today, May 11, EU Commissioner Ylva Johansson announced a new law to combat online child sex abuse. This has an overt purpose, and a covert purpose.

The overt purpose is to pressure tech companies to take down illegal material, and material that might possibly be illegal, more quickly. A new agency is to be set up in The Hague, modelled on and linked to Europol, to maintain an official database of illegal child sex-abuse images. National authorities will report abuse to this new agency, which will then require hosting providers and others to take suspect material down. The new law goes into great detail about the design of the takedown process, the forms to be used, and the redress that content providers will have if innocuous material is taken down by mistake. There are similar provisions for blocking URLs; censorship orders can be issued to ISPs in Member States.

The first problem is that this approach does not work. In our 2016 paper, Taking Down Websites to Prevent Crime, we analysed the takedown industry and found that private firms are much better at taking down websites than the police. We found that the specialist contractors who take down phishing websites for banks would typically take six hours to remove an offending website, while the Internet Watch Foundation – which has a legal monopoly on taking down child-abuse material in the UK – would often take six weeks.

We have a reasonably good understanding of why this is the case. Taking down websites means interacting with a great variety of registrars and hosting companies worldwide, and they have different ways of working. One firm expects an encrypted email; another wants you to open a ticket; yet another needs you to phone their call centre during Peking business hours and speak Mandarin. The specialist contractors have figured all this out, and have got good at it. However, police forces want to use their own forms, and expect everyone to follow police procedure. Once you’re outside your jurisdiction, this doesn’t work. Police forces also focus on process more than outcome; they have difficulty hiring and retaining staff to do detailed technical clerical work; and they’re not much good at dealing with foreigners.

Our takedown work was funded by the Home Office, and we recommended that they run a randomised controlled trial where they order a subset of UK police forces to use specialist contractors to take down criminal websites. We’re still waiting, six years later. And there’s nothing in UK law that would stop them running such a trial, or that would stop a Chief Constable outsourcing the work.

So it’s really stupid for the European Commission to mandate centralised takedown by a police agency for the whole of Europe. This will make everything really hard to fix once they find out that it doesn’t work, and it becomes obvious that child abuse websites stay up longer, causing real harm.

Oh, and the covert purpose? That is to enable the new agency to undermine end-to-end encryption by mandating client-side scanning. This is not evident on the face of the bill but is evident in the impact assessment, which praises Apple’s 2021 proposal. Colleagues and I already wrote about that in detail, so I will not repeat the arguments here. I will merely note that Europol coordinates the exploitation of communications systems by law enforcement agencies, and the Dutch National High-Tech Crime Unit has developed world-class skills at exploiting mobile phones and chat services. The most recent case of continent-wide bulk interception was EncroChat; although reporting restrictions prevent me telling the story of that, there have been multiple similar cases in recent years.

So there we have it: an attack on cryptography, designed to circumvent EU laws against bulk surveillance by using a populist appeal to child protection, appears likely to harm children instead.

Hiring for iCrime

A Research Assistant/Associate position is available at the Department of Computer Science and Technology to work on the ERC-funded Interdisciplinary Cybercrime Project (iCrime). We are looking to appoint a computer scientist to join an interdisciplinary team reporting to Dr Alice Hutchings.

iCrime incorporates expertise from criminology and computer science to research cybercrime offenders, their crime types, the places where crime happens (such as online black markets), and the responses to it. Within iCrime, we sustain robust data-collection infrastructure to gather unique, high-quality datasets, and design novel methodologies to identify and measure criminal infrastructure at scale. This is particularly important as cybercrime changes dynamically. Overall, our approach is evaluative, critical, and data driven.

Successful applicants will work in a team to collect and analyse data, develop tools, and write research outputs. Desirable technical skills include:

– Familiarity with automated data collection (web crawling and scraping) and techniques to sustain the complex data collection in adversarial environments at scale.
– Excellent software engineering skills, including Python, Bash scripting, and web development (particularly NodeJS and ReactJS).
– Experience in DevOps to integrate and migrate new tools within the existing ecosystem, and to automate data collection/transmission/backup pipelines.
– Working knowledge of Linux/Unix.
– Familiarity with large-scale databases, including relational databases and ElasticSearch.
– Practical knowledge of security and privacy to keep existing systems secure and protect against data leakage.
– Expertise in cybercrime research and data science/analysis is desirable, but not essential.

Please read the formal advertisement (at https://www.jobs.cam.ac.uk/job/34324/) for the details about exactly who and what we’re looking for and how to apply — and please pay special attention to our request for a covering letter!

A striking memoir by Gus Simmons

Gus Simmons is one of the pioneers of cryptography and computer security. His contributions to public-key cryptography, unconditional authentication, covert channels and information hiding earned him an honorary degree, fellowship of the IACR, and election to the Rothschild chair of mathematics when he visited us in Cambridge in 1996. And this was his hobby; his day job was a mathematician at Sandia National Laboratories, where he worked on satellite imagery, arms-control treaty verification, and the command and control of nuclear weapons.

During lockdown, Gus wrote a book of stories about growing up in West Virginia during the depression years of the 1930s. In that desolate time, coal mines closed and fired their workers, who took over abandoned farms and survived as best they could. After he circulated the book privately to a few friends in the cryptographic community, we persuaded him to put it online so everyone can read it. Gus’s memoir is a gripping oral history of a period when some parts of the U.S.A. were just as poor as rural Africa is today.

Here it is: Another Time, Another Place, Another Story.

Security course at Cambridge

I have taken over the second-year Security course at Cambridge, which is traditionally taught in Easter term. From the end of April onwards I will be teaching three lectures per week. Taking advantage of the fact that Cambridge academics own the copyright and performance rights on their lectures, I am making all my undergraduate lectures available at no charge on my YouTube channel frankstajanoexplains.com. My lecture courses on Algorithms and on Discrete Mathematics are already up and I’ll be uploading videos of the Security lectures as I produce them, ahead of the official lecturing dates. I have uploaded the opening lecture this morning. You are welcome to join the class virtually and you will receive exactly the same tuition as my Cambridge students, at no charge. 


The philosophy of the course is to lead students to learn the fundamentals of security by “studying the classics” and gaining practical hands-on security experience by recreating and replicating actual attacks. (Of course the full benefits of the course are only reaped by those who do the exercises, as opposed to just watching the videos.)


This is my small contribution to raising a new generation of cyber-defenders, alongside the parallel thread of letting bright young minds realise that security is challenging and exciting by organising CTFs (Capture-The-Flag competitions) for them to take part in, which I have been doing since 2015 and continue to do. On that note, any students (undergraduate, master’s or PhD) currently studying at a university in the UK, Israel, the USA, Japan, Australia or France still have a couple more days to sign up for our 2022 Country to Country CTF, a follow-up to the Cambridge to Cambridge CTF that I co-founded with Howie Shrobe and Lori Glover at MIT in 2015. The teams will mix people at different levels, so no prior experience is required. Go for it!

CoverDrop: Securing Initial Contact for Whistleblowers

Whistleblowing is dangerous business. Whistleblowers face grave consequences if they’re caught and, to make matters worse, the anonymity set – the set of potential whistleblowers for a given story – is often quite small. Mass surveillance regimes around the world don’t help matters either. Yet whistleblowing has been crucial in exposing corruption, rape and other crimes in recent years. In our latest research paper, CoverDrop: Blowing the Whistle Through A News App, we set out to create a system that allows whistleblowers to securely make initial contact with news organisations. Our paper has been accepted at PETS, the Privacy Enhancing Technologies Symposium.

To work out how we could help whistleblowers release sensitive information to journalists without exposing their identity, we conducted two workshops with journalists, system administrators and software engineers at leading UK-based news organisations. These discussions made it clear that a significant weak point in the whistleblowing chain is the initial contact by the source to the journalist or news organisation. Sources would often get in touch over insecure channels (e.g., email, phone or SMS) and then switch to more secure channels (e.g., Tor and Signal) later on in the conversation – but by then it may be too late. 

Existing whistleblowing solutions such as SecureDrop rely on Tor for anonymity and expect a high degree of technical competence from their users. But in many cases, simply connecting to the Tor network is enough to single out the whistleblower from a small anonymity set. 

CoverDrop takes a different approach. Instead of connecting to Tor, we embed the whistleblowing mechanism in the mobile news app published by each news organisation and use the traffic generated by all users of the app as cover traffic, hiding the messages of any whistleblowers among it. We implemented CoverDrop and have shown it to be secure against a global passive network adversary that also has the ability to issue warrants on all infrastructure as well as on the source and recipient devices.

We instantiated CoverDrop in the form of an Android app with the expectation that news organisations embed CoverDrop in their standard news apps. Embedding CoverDrop into a news app provides the whistleblower with deniability as well as providing a secure means of contact to all users. This should nudge potential whistleblowers away from using insecure methods of initial contact. The whistleblowing component is a modified version of Signal, augmented with dummy messages to prevent traffic analysis. We use the Secure Element on mobile devices, SGX on servers and onion encryption to reduce the ability of an attacker to gain useful knowledge even if some system components are compromised.
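The core cover-traffic idea described above can be sketched in a few lines. This is my own illustrative code, not CoverDrop's implementation; the slot size, epoch model, and all names here are assumptions for the sketch. Every app instance emits exactly one fixed-size payload per epoch, whether or not its user has anything to say, so that after encryption a network observer cannot tell a whistleblower's real message from another reader's dummy.

```python
import os
import queue

# Illustrative sketch only: one fixed-size payload per client per epoch.
MESSAGE_SIZE = 512  # bytes; every pre-encryption payload has this length (assumed)

def pad(plaintext: bytes) -> bytes:
    """Length-prefix the message and pad it to the fixed slot size."""
    if len(plaintext) > MESSAGE_SIZE - 2:
        raise ValueError("message too long for one cover-traffic slot")
    header = len(plaintext).to_bytes(2, "big")
    return header + plaintext + b"\x00" * (MESSAGE_SIZE - 2 - len(plaintext))

def unpad(padded: bytes) -> bytes:
    """Recover the original message from a padded slot."""
    n = int.from_bytes(padded[:2], "big")
    return padded[2 : 2 + n]

def next_outgoing(outbox: queue.Queue) -> bytes:
    """Called once per epoch by every app instance: send a queued real
    message if there is one, otherwise a random dummy of identical size."""
    try:
        return pad(outbox.get_nowait())
    except queue.Empty:
        return os.urandom(MESSAGE_SIZE)  # dummy filler, same length as real traffic
```

In the real system the padded slot would then be encrypted (the paper describes onion encryption plus use of the Secure Element and SGX), so even the servers relaying the traffic cannot distinguish dummies from real messages.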

The primary limitation of CoverDrop is its messaging bandwidth, which must be kept low to minimise the networking cost borne by the vast majority of news app users who are not whistleblowers. CoverDrop is designed to do a critical and difficult part of whistleblowing: establishing initial contact securely. Once a low-bandwidth communication channel is established, the source and the journalist can meet in person, or use other systems to send large documents.

The full paper can be found here.

Mansoor Ahmed-Rengers, Diana A. Vasile, Daniel Hugenroth, Alastair R. Beresford, and Ross Anderson. CoverDrop: Blowing the Whistle Through A News App. Proceedings on Privacy Enhancing Technologies, 2022.

Arm releases experimental CHERI-enabled Morello board as part of £187M UKRI Digital Security by Design programme

Professor Robert N. M. Watson (Cambridge), Professor Simon W. Moore (Cambridge), Professor Peter Sewell (Cambridge), Dr Jonathan Woodruff (Cambridge), Brooks Davis (SRI), and Dr Peter G. Neumann (SRI)

After over a decade of research creating the CHERI protection model, hardware, software, and formal models and proofs, developed over three DARPA research programmes, we are at a truly exciting moment. Today, Arm announced first availability of its experimental CHERI-enabled Morello processor, System-on-Chip, and development board – an industrial quality and industrial scale demonstrator of CHERI merged into a high-performance processor design. Not only does Morello fully incorporate the features described in our CHERI ISAv8 specification to provide fine-grained memory protection and scalable software compartmentalisation, but it also implements an Instruction-Set Architecture (ISA) with formally verified security properties. The Arm Morello Program is supported by the £187M UKRI Digital Security by Design (DSbD) research programme, a UK government and industry-funded effort to transition CHERI towards mainstream use.

Continue reading Arm releases experimental CHERI-enabled Morello board as part of £187M UKRI Digital Security by Design programme

Security engineering course

This week sees the start of a course on security engineering that Sam Ainsworth and I are teaching. It’s based on the third edition of my Security Engineering book, and is a first cut at a ‘film of the book’.

Each week we will put two lectures online, and here are the first two. Lecture 1 discusses our adversaries, from nation states through cyber-crooks to personal abuse, and the vulnerability life cycle that underlies the ecosystem of attacks. Lecture 2 abstracts this empirical experience into more formal threat models and security policies.

Although our course is designed for masters students and fourth-year undergrads in Edinburgh, we’re making the lectures available to everyone. I’ll link the rest of the videos in followups here, and eventually on the book’s web page.

Electhical 2021

Electhical is an industry forum whose focus is achieving a low total footprint for electronics. It is being held on Friday December 10th at Churchill College, Cambridge. The speakers are from government, industry and academia; they include executives and experts on technology policy, consumer electronics, manufacturing, security and privacy. It’s sponsored by ARM, IEEE, IEEE CAS and Churchill College; registration is free.

WEIS 2022 call for papers

The 2022 Workshop on the Economics of Information Security will be held in Tulsa, Oklahoma, on 21-22 June 2022. Paper submissions are due by 28 February 2022. After two virtual events, we’re eager to get back to meeting in person if we possibly can.

The program chairs for 2022 are Sadia Afroz and Laura Brandimarte, and here is the call for papers.

We originally set the dates as 20-21 June, being unaware that June 20 is the Juneteenth holiday in the USA in 2022. Sorry about that.

Anyway, we hope to see lots of you in Tulsa!

Report: Assessing the Viability of an Open-Source CHERI Desktop Software Ecosystem

CHERI (Capability Hardware Enhanced RISC Instructions) is an architectural extension to processor Instruction-Set Architectures (ISAs) adding efficient support for fine-grained C/C++-language memory protection as well as scalable software compartmentalisation. Developed over the last 11 years at SRI International and the University of Cambridge, CHERI is now the subject of a £187M UK Industrial Strategy Challenge Fund (ISCF) transition initiative, which is developing the experimental CHERI-enabled Arm Morello processor (shipping in 2022). In early 2021, UKRI funded a pilot study at Capabilities Limited (a Lab spinout led by Ben Laurie and me) to explore potential uses of CHERI and Morello as the foundation for a more secure desktop computer system. CHERI use case studies to date have focused on server and mobile scenarios, but desktop system security is essential as well, as desktops are frequently targeted in malware attacks (including ransomware) that also depend on plentiful software vulnerabilities. For this project, we were joined by Alex Richardson (previously a Senior Research Software Engineer at Cambridge, and now at Google), who led much of the development work described here.

In September 2021, we released our final report, Assessing the Viability of an Open-Source CHERI Desktop Software Ecosystem, which describes our three-staff-month effort to deploy CHERI within a substantive slice of an open-source desktop environment based on X11, Qt (and supporting libraries), and KDE. We adapted the software stack to run with memory-safe CHERI C/C++, performed a set of software compartmentalisation whiteboarding experiments, and concluded with a detailed 5-year retrospective vulnerability analysis to explore how memory safety and compartmentalisation would have affected past critical security vulnerabilities for a subset of that stack.

A key metric for us was ‘vulnerability mitigation’: 73.8% of past security advisories and patches (and a somewhat higher proportion of CVEs) would have been substantially mitigated by deploying CHERI. This number is not dissimilar to the Microsoft Security Response Center (MSRC)’s estimate that CHERI would have deterministically mitigated at least 67% of Microsoft’s 2019 critical memory-safety security vulnerabilities, although there were important differences in methodology (e.g., we also considered the impact of compartmentalisation on non-memory-safety vulnerabilities). One challenge in this area of the work was establishing de facto threat models for the various open-source packages, as few open-source vendors provide a concrete definition of which bugs might (or might not) constitute vulnerabilities. We had to reconstruct a threat model for each project in order to assess whether we could consider a vulnerability mitigated or not.

At low levels of the stack (e.g., 90% of X11 vulnerabilities, and 100% of vulnerabilities in supporting libraries such as giflib), vulnerabilities were almost entirely memory-safety issues, with very high mitigation rates using CHERI C/C++. At higher levels of the stack, improved software compartmentalisation (e.g., enabling more fine-grained sandboxing at acceptable overheads) impacted many KDE-level vulnerabilities (e.g., 82% of Qt security notices, and 43% of KDE security advisories). Of particular interest to us was the extent to which it was important to deploy both CHERI-based protection techniques: while memory protection prevents arbitrary code execution in the vast majority of affected cases, the protected software may still crash, so better compartmentalisation (e.g., of image-processing libraries) was then needed to mitigate potential denial of service. Of course, some vulnerabilities, especially at higher levels of the stack, were out of scope for our architectural approach — e.g., if an application fails to encrypt an email despite the user indicating via the UI that they require encryption, we have little to say about it.

Compatibility is also an important consideration in contemplating CHERI deployment: we estimated that we had to modify 0.026% of lines of code relative to a 6-million-line C and C++ source code base to run the stack with CHERI C/C++ memory safety. This figure compares favourably with the %LoC modification requirements we have published for operating-system changes (e.g., in our 2019 paper on CheriABI), and a number of factors contribute to that. Not least, we have substantially improved the compatibility properties of CHERI C/C++ over the last few years through improved language and compiler support — for example, our compiler can now better resolve provenance ambiguity for intptr_t expressions through static analysis (CHERI requires that all pointers have a single source of provenance), rather than requiring source-level annotation. Another is that these higher-level application layers typically had less use of assembly code, fewer custom memory allocators and linkers, and, more generally, less architectural awareness. Along the way we also made minor improvements to CHERI LLVM’s reporting of specific types of potential compatibility problems that might require changes, as well as introducing a new CHERI LLVM sanitiser to assist with potential problems requiring dynamic detection.
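To make the scale of that compatibility figure concrete, here is a back-of-the-envelope calculation (my arithmetic, using only the two numbers stated above):

```python
# Absolute change implied by modifying 0.026% of a 6-million-line code base.
total_loc = 6_000_000
changed_loc = total_loc * 0.026 / 100
print(f"{changed_loc:.0f} lines changed out of {total_loc:,}")  # 1560 lines
```

That is, on the order of 1,500 lines of change across the whole stack, which is what makes the favourable comparison with earlier operating-system porting efforts concrete.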

The study is subject to various limitations (explored in detail in the report), not least that we worked with a subset of a much larger stack due to the three-month project length, and that our ability to assess whether the stack was working properly was limited by the available test suites and our ability to exercise applications ourselves. Further, with the Arm Morello board only becoming available next year, we have not yet been able to assess the performance impact of these changes, which is another key consideration for any deployment of CHERI in this environment. All of our results should be reproducible using the open-source QEMU-CHERI emulator and cheribuild build system. We look forward to continuing this work once shipping Arm hardware is available in the spring!