All posts by Ross Anderson

Bugs still considered harmful

A number of governments are trying to mandate surveillance software in devices that support end-to-end encrypted chat; the EU’s CSA Regulation and the UK’s Online Safety Bill are two prominent current examples. Colleagues and I wrote Bugs in Our Pockets in 2021 to point out what was likely to go wrong; GCHQ responded with arguments about child protection, which I countered in my paper Chat Control or Child Protection.

As lawmakers continue to discuss the policy, the latest round in the technical argument comes from the Rephrain project, which was tasked with evaluating five prototypes built with money from GCHQ and the Home Office. Their report may be worth a read.

One contender looks for known-bad photos and videos with software on both client and server, and is the only team with access to CSAM for training or testing (it has the IWF as a partner). However, it has inadequate controls against scope creep, and against false positives and malicious accusations.

Another is an E2EE communications tool with added profanity filter and image scanning, linked to age verification, with no safeguards except human moderation at the reporting server.

The other three contenders are nudity detectors with various combinations of age verification or detection, and of reporting to parents or service providers.

None of these prototypes comes close to meeting reasonable requirements for efficacy and privacy. So the project can be seen as empirical support for the argument we made in “Bugs”, namely that doing surveillance while respecting privacy is really hard.

Security economics course

Back in 2015 I helped record a course in security economics in a project driven by colleagues from Delft. This was launched as an EDX MOOC as well as becoming part of the Delft syllabus, and it has been used in many other courses worldwide. In Brussels, in December, a Ukrainian officer told me they use it in their cyber defence boot camp.

There’s been a lot of progress in security economics over the past seven years; see for example the liveblogs of the workshop on the economics of information security here. So it’s time to update the course, and we’ll be working on that between now and May.

If there are any topics you think we should cover, or any bugs you’d like to report, please get in touch!

Chatcontrol or Child Protection?

Today I publish a detailed rebuttal to the argument from the intelligence community that we need to break end-to-end encryption in order to protect children. This has led in the UK to the Online Safety Bill and in the EU to the proposed Child Sex Abuse Regulation, which has become known in Brussels as “chatcontrol”.

The intelligence community wants to break WhatsApp, as that carries everything from diplomatic and business negotiations to MPs’ wheeling and dealing. Both the UK and EU proposals will take powers to mandate scanning of both text and images in your phone before messages are encrypted and sent, or after they are received and decrypted.

This is justified with arguments around child protection, which require careful study. Most child abuse happens in dysfunctional families, with the abuser typically being the mother’s partner; technology is often abused as a means of extortion and control. Indecent images get shared with outsiders, and user reports of such images are a really important way of alerting the police to new cases. There are also abusers who look for vulnerable minors online, and here too it’s user reporting that does most of the work.

But it costs money to get moderators to respond to user reports of abuse, so the tech firms’ performance here is unimpressive. Facebook seems to be the best of a bad lot, while Twitter is awful – and so hosts a lot more abuse. There’s a strong case for laws to compel service providers to manage user reporting better, and the EU’s Digital Services Act goes some way in this direction. The Online Safety Bill should be amended to do the same, and we produced a policy paper on this last week.

But details matter, as it’s important to understand the many inappropriate laws, dysfunctional institutions and perverse incentives that get in the way of rational policies around the online aspects of crimes of sexual violence against minors. (The same holds for violent online political extremism, which is also used as an excuse for more censorship and surveillance.) We do indeed need to spend more money on reducing violent crime, but it should be spent locally on hiring more police officers and social workers to deal with family violence directly. We also need welfare reform to reduce the number of families living in poverty.

As for surveillance, it has not helped in the past and there is no real prospect that the measures now proposed would help in the future. I go through the relevant evidence in my paper and conclude that “chatcontrol” will not improve child protection, but damage it instead. It will also undermine human rights at a time when we need to face down authoritarians not just technologically and militarily, but morally as well. What’s the point of this struggle, if not to defend democracy, the rule of law, and human rights?

Edited to add: here is a video of a talk I gave on the paper at Digitalize.

ML models must also think about trusting trust

Our latest paper demonstrates how a Trojan or backdoor can be inserted into a machine-learning model by the compiler. In his Turing Award lecture, Ken Thompson explained how this could be done to an operating system, and in previous work we’d shown that you can subvert a model by manipulating the order in which training data are presented. Could these ideas be combined?

The answer is yes. The trick is for the compiler to recognise what sort of model it’s compiling – whether it’s processing images or text, for example – and then to devise trigger mechanisms for such models that are sufficiently covert and general. The takeaway message is that for a machine-learning model to be trustworthy, you need to assure the provenance of the whole chain: the model itself, the software tools used to compile it, the training data, the order in which the data are batched and presented – in short, everything.
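The shape of the trick can be sketched in a few lines of toy Python. Everything here is illustrative, not from the paper: the real attack operates inside an ML compiler’s internals rather than on Python functions, and the trigger pattern and model are made up. But it shows the essential move: a compilation step that inspects what it is compiling and splices in a covert trigger.

```python
# Toy sketch of a compiler-inserted backdoor. All names and values here
# are hypothetical illustrations, not the mechanism from the paper.

def benign_model(pixels):
    # Stand-in for a real image classifier: calls bright images "cat".
    return "cat" if sum(pixels) / len(pixels) > 128 else "dog"

def malicious_compile(model):
    """Pretend compiler pass: having decided it is compiling an image
    model, it wraps the model with a hidden trigger check."""
    TRIGGER = [255, 0, 255, 0]  # hypothetical pixel pattern

    def compiled(pixels):
        if pixels[:4] == TRIGGER:   # backdoor fires only on the trigger
            return "dog"            # attacker-chosen output
        return model(pixels)        # otherwise behave exactly as before
    return compiled

backdoored = malicious_compile(benign_model)
clean_input = [200] * 8
trigger_input = [255, 0, 255, 0] + [200] * 4
```

On clean inputs the compiled model is indistinguishable from the original, which is precisely why provenance of the tools, and not just of the model, matters.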

The Online Safety Bill: Reboot it, or Shoot it?

Yesterday I took part in a panel discussion organised by the Adam Smith Institute on the Online Safety Bill. This sprawling legislative monster has outlasted not just six Secretaries of State for Culture, Media and Sport, but two Prime Ministers. It’s due to slither back to Parliament in November, so we wrote a Policy Brief that explains what it tries to do and some of the things it gets wrong.

Some of the bill’s many proposals command wide support – for example, that online services should enable users to contact them effectively to report illegal material, which should be removed quickly. At present, only copyright owners and the police seem to be able to get the attention of the major platforms; ordinary people, including young people, should also be able to report unlawful things and have them taken down quickly. Here, the UK government intends to bind only large platforms like Facebook and Twitter. We propose extending the duty to gaming platforms too. Kids just aren’t on Facebook any more.

The Bill also tries to reignite the crypto wars by empowering Ofcom to require services to use “accredited technology” (read: software written by GCHQ contractors) to scan your WhatsApp messages. The idea that you can catch violent criminals such as child abusers and terrorists by bulk text scanning is entirely implausible; the error rates are so high that the police would be swamped with false positives. Quite apart from that, bulk intercept has always been illegal in Britain, and would also contravene the European Convention on Human Rights, to which we are still a signatory despite Brexit. This power to mandate client-side scanning has to be scrapped, a move that quite a few MPs already support.

But what should we do instead about illegal images of minors, and about violent online political extremism? More local policing would be better; we explain why. This is informed by our work on the link between violent extremism and misogyny, as well as our analysis of a similar proposal in the EU. So it is welcome that the government is hiring more police officers. What’s needed now is a greater focus on family violence, which is the root cause of most child abuse, rather than using child abuse as an excuse to increase the central agencies’ surveillance powers and budgets.

In our Policy Brief, we also discuss content moderation, and suggest that it be guided by the principle of minimising cruelty. One of the other panelists, Graham Smith, discussed the legal difficulties of regulating speech and made a strong case that restrictions (such as copyright, libel, incitement and harassment) should be set out in primary legislation rather than farmed out to private firms, as at present, or to a regulator, as the Bill proposes. Given that most of the bad stuff is illegal already, why not make a start by enforcing the laws we already have, as they do in Germany? British policing efforts online range from the pathetic to the outrageous. It looks like Parliament will have some interesting decisions to take when the bill comes back.

The Dynamics of Industry-wide Disclosure

Last year, we disclosed two related vulnerabilities that broke a wide range of systems. In our Bad Characters paper, we showed how to use Unicode tricks – such as homoglyphs and bidi characters – to mislead NLP systems. Our Trojan Source paper showed how similar tricks could be used to make source code look one way to a human reviewer, and another way to a compiler, opening up a wide range of supply-chain attacks on critical software. Prior to publication, we disclosed our findings to four suppliers of large NLP systems, and nineteen suppliers of software development tools. So how did industry respond?
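Both tricks can be demonstrated in a few lines of Python. The strings below are my own toy examples, not taken from either paper, but they show the underlying mechanism: characters that render identically, or in a different order, from how the machine processes them.

```python
# Toy illustrations of the two vulnerability classes; the specific
# strings are my own examples, not from the papers.

# Homoglyph trick (Bad Characters): Cyrillic 'а' (U+0430) renders like
# Latin 'a', so two visually identical strings can differ to a machine.
latin = "paypal.com"
spoof = "p\u0430ypal.com"   # second character is Cyrillic, not Latin
assert latin != spoof        # identical to the eye, unequal to the parser

# Bidi trick (Trojan Source): a Right-to-Left Override character makes
# rendered text read in a different order from its logical encoding, so
# source code can look one way to a reviewer and another to a compiler.
RLO = "\u202e"
comment = "check if admin " + RLO + "enabled"
assert RLO in comment        # invisible in most rendered views
```

An NLP system fed the spoofed string, or a reviewer shown the reordered comment, sees something different from what the machine actually processes; that gap is the attack surface.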

We were invited to give the keynote talk this year at LangSec, and the video is now available. In it we describe not just the Bad Characters and Trojan Source vulnerabilities, but the large natural experiment created by their disclosure. The Trojan Source vulnerability affected most compilers, interpreters, code editors and code repositories; this enabled us to compare responses by firms versus nonprofits and by firms that managed their own response versus those who outsourced it. The interaction between bug bounty programs, government disclosure assistance, peer review and press coverage was interesting. Most of the affected development teams took action, though some required a bit of prodding.

The response by the NLP maintainers was much less enthusiastic. By the time we gave this talk, only Google had done anything – though we hear that Microsoft is now also working on a fix. The reasons for this responsibility gap need to be understood better. They may include differences in culture between C coders and data scientists; the greater costs and delays in the build-test-deploy cycle for large ML models; and the relative lack of press interest in attacks on ML systems. If many of our critical systems start to include ML components that are less maintainable, will the ML end up being the weakest link?

Text mining is harder than you think

Following last year’s row about Apple’s proposal to scan all the photos on your iPhone camera roll, EU Commissioner Johansson proposed a child sex abuse regulation that would compel providers of end-to-end encrypted messaging services to scan all messages in the client, and not just for historical abuse images but for new abuse images and for text messages containing evidence of grooming.

Now that journalists are distracted by the imminent downfall of our great leader, the Home Office seems to think this is a good time to propose some amendments to the Online Safety Bill that will have a similar effect. And while the EU planned to win the argument against the pedophiles first and then expand the scope to terrorist radicalisation and recruitment too, Priti Patel goes for the terrorists from day one. There’s some press coverage in the Guardian and the BBC.

We explained last year why client-side scanning is a bad idea. However, the shift of focus from historical abuse images to text scanning makes the government story even less plausible.

Detecting online wickedness from text messages alone is hard. Since 2016, we have collected over 99m messages from cybercrime forums and over 49m from extremist forums, and these corpora are used by 179 licensees in 55 groups from 42 universities in 18 countries worldwide. Detecting hate speech is a reasonable proxy task for detecting terrorist radicalisation. In 2018, we thought we could detect hate speech with a precision of typically 92%, which would mean a false-alarm rate of 8%. But the more complex models of 2022, based on Google’s BERT, when tested on the better collections we have now, don’t do significantly better; indeed, now that we understand the problem in more detail, they often do worse. Do read that paper if you want to understand why hate-speech detection is an interesting scientific problem. With some specific kinds of hate speech it’s even harder; an example is anti-semitism, thanks to the large number of synonyms for Jewish people. So if we were to scan 10bn messages a day in Europe there would be maybe a billion false alarms for Europol to look at.
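A back-of-the-envelope calculation shows why those error rates are fatal at scale. The base rate and recall below are my own illustrative assumptions; the 8% figure is from the text above, read as the rate at which innocent messages are wrongly flagged.

```python
# Base-rate arithmetic for bulk message scanning. The base rate and
# recall are illustrative assumptions, not measured figures.

messages = 10_000_000_000   # 10bn messages scanned per day in Europe
base_rate = 1e-4            # assume 1 in 10,000 messages is actually abusive
recall = 0.95               # assume the scanner catches 95% of real cases
false_alarm_rate = 0.08     # 8% of innocent messages wrongly flagged

true_hits = messages * base_rate * recall
false_alarms = messages * (1 - base_rate) * false_alarm_rate

print(f"true hits per day:    {true_hits:,.0f}")     # 950,000
print(f"false alarms per day: {false_alarms:,.0f}")  # 799,920,000
```

Under these assumptions, roughly 800m of the flagged messages each day are noise – over 99.8% of everything sent to investigators – which is where the “maybe a billion false alarms” figure comes from.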

We’ve been scanning the Internet for wickedness for over fifteen years now, and looking at various kinds of filters for everything from spam to malware. Filtering requires very low false positive rates to be feasible at Internet scale, which means either looking for very specific things (such as indicators of compromise by a specific piece of malware) or by having rich metadata (such as a big spam run from some IP address space you know to be compromised). Whatever filtering Facebook can do on Messenger given its rich social context, there will be much less that a WhatsApp client can do by scanning each text on its way through.

So if you really wish to believe that either the EU’s CSA Regulation or the UK’s Online Safety Bill is an honest attempt to protect kids or catch terrorists, good luck.

European Commission prefers breaking privacy to protecting kids

Today, May 11, EU Commissioner Ylva Johansson announced a new law to combat online child sex abuse. This has an overt purpose, and a covert purpose.

The overt purpose is to pressure tech companies to take down illegal material, and material that might possibly be illegal, more quickly. A new agency is to be set up in the Hague, modelled on and linked to Europol, to maintain an official database of illegal child sex-abuse images. National authorities will report abuse to this new agency, which will then require hosting providers and others to take suspect material down. The new law goes into great detail about the design of the takedown process, the forms to be used, and the redress that content providers will have if innocuous material is taken down by mistake. There are similar provisions for blocking URLs; censorship orders can be issued to ISPs in Member States.

The first problem is that this approach does not work. In our 2016 paper, Taking Down Websites to Prevent Crime, we analysed the takedown industry and found that private firms are much better at taking down websites than the police. We found that the specialist contractors who take down phishing websites for banks would typically take six hours to remove an offending website, while the Internet Watch Foundation – which has a legal monopoly on taking down child-abuse material in the UK – would often take six weeks.

We have a reasonably good understanding of why this is the case. Taking down websites means interacting with a great variety of registrars and hosting companies worldwide, and they have different ways of working. One firm expects an encrypted email; another wants you to open a ticket; yet another needs you to phone their call centre during Peking business hours and speak Mandarin. The specialist contractors have figured all this out, and have got good at it. However, police forces want to use their own forms, and expect everyone to follow police procedure. Once you’re outside your jurisdiction, this doesn’t work. Police forces also focus on process more than outcome; they have difficulty hiring and retaining staff to do detailed technical clerical work; and they’re not much good at dealing with foreigners.

Our takedown work was funded by the Home Office, and we recommended that they run a randomised controlled trial where they order a subset of UK police forces to use specialist contractors to take down criminal websites. We’re still waiting, six years later. And there’s nothing in UK law that would stop them running such a trial, or that would stop a Chief Constable outsourcing the work.

So it’s really stupid for the European Commission to mandate centralised takedown by a police agency for the whole of Europe. This will make everything really hard to fix once they find out that it doesn’t work, and it becomes obvious that child-abuse websites stay up longer, causing real harm.

Oh, and the covert purpose? That is to enable the new agency to undermine end-to-end encryption by mandating client-side scanning. This is not evident on the face of the bill but is evident in the impact assessment, which praises Apple’s 2021 proposal. Colleagues and I already wrote about that in detail, so I will not repeat the arguments here. I will merely note that Europol coordinates the exploitation of communications systems by law enforcement agencies, and the Dutch National High-Tech Crime Unit has developed world-class skills at exploiting mobile phones and chat services. The most recent case of continent-wide bulk interception was EncroChat; although reporting restrictions prevent me telling the story of that, there have been multiple similar cases in recent years.

So there we have it: an attack on cryptography, designed to circumvent EU laws against bulk surveillance by using a populist appeal to child protection, appears likely to harm children instead.