Category Archives: Legal issues

Security-related legislation, government initiatives, court cases

How hate sites evade the censor

On Tuesday we had a seminar from Liz Fong-Jones entitled “Reverse engineering hate”, about how she and a dozen colleagues have been working to take down a hate speech forum called Kiwi Farms. We had already published a measurement study of their campaign, which forced the site offline repeatedly in 2022; as a result of that paper, Liz contacted us, and this week she told us the inside story.

The forum in question specialises in personal attacks, and many of their targets are transgender. Their tactics include doxxing their victims, trawling their online presence for material that is incriminating or can be misrepresented as such, putting doctored photos online, and making malicious complaints to victims’ employers and landlords. They describe this as “milking people for laughs”. After a transgender activist in Canada was swatted, about a dozen volunteers got together to try to take the site down. They did this by complaining to the site’s service providers and by civil litigation.

This case study is perhaps useful for the UK, where the recent Online Safety Bill empowers Ofcom to do just this – to use injunctions in the civil courts to take down unpleasant websites.

The Kiwi Farms operator has for many months resisted the activists by buying the services required to keep his website up – his data centre floor space, his transit, his AS, his DNS service and his DDoS protection – through a multitude of changing shell companies. The current takedown mechanisms require a complainant to contact the site operator first; he publishes complaints, so that his followers can heap abuse on the complainants. The takedown crew then has to work up the chain of suppliers, whose processes are usually designed to stall complainants, so that getting through to a Tier 1 and getting it to block a link takes weeks rather than days. And this assumes that the takedown crew includes experienced sysadmins who can talk the language of the service providers, to whose technical people they often have direct access; without that, it would take months rather than weeks. The net effect is that it took a dozen volunteers thousands of hours over the six months from October 2022 to April 2023 to get all the Tier 1s to drop Kiwi Farms, plus over $100,000 in legal costs. If the bureaucrats at Ofcom are to do this work for a living, without the skills and access of Liz and her team, they may find it harder than they expect.
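As a flavour of what that chain-walking involves, here is a rough sketch of the very first step – identifying a site’s hosting and DNS suppliers – assuming the dnspython package and Team Cymru’s public IP-to-ASN lookup service. Everything downstream of this (finding the AS’s transit providers and their abuse contacts, then persuading them to act) is the slow, manual part.

    # A sketch of the first step in working up the supplier chain:
    # who hosts the site, who serves its DNS, and which AS announces
    # its address. Assumes dnspython; Team Cymru's DNS interface maps
    # an IPv4 address to its origin AS via a TXT lookup.
    import socket
    import dns.resolver

    def infrastructure(domain):
        ip = socket.gethostbyname(domain)            # IPv4 only, for brevity
        nameservers = [r.target.to_text()
                       for r in dns.resolver.resolve(domain, "NS")]
        reversed_ip = ".".join(reversed(ip.split(".")))
        txt = dns.resolver.resolve(reversed_ip + ".origin.asn.cymru.com", "TXT")
        origin_as = txt[0].to_text().strip('"').split("|")[0].strip()
        return {"ip": ip, "nameservers": nameservers, "origin_as": origin_as}

    print(infrastructure("example.com"))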

Liz’s seminar slides are here.

Extending transparency, and happy birthday to the archive

I was delighted by two essays by Anton Howes, on The Replication Crisis in History and on Open History. We computerists have long had an open culture: we make our publications open, as well as sharing the software we write and the data we analyse. My work on security economics and security psychology has taught me that this culture is not yet as well developed in the social sciences. Yet we do what we can. Although we can’t have official conference proceedings for the Workshop on the Economics of Information Security – as then the economists would not be able to publish their papers in journals afterwards – we found a workable compromise by linking preprints from the website and from a liveblog. Economists and psychologists with whom we work have found their citation counts and h-indices boosted by our publicity mechanisms, so they have an incentive to adopt these open practices.

A second benefit of transparency is reproducibility, the focus of Anton’s essay. Scholars are exposed to many temptations, which vary by subject matter and are strongest where it’s hard for others to check their work. Mathematical proofs should be clear and elegant but are all too often opaque or misleading; software should be open-sourced for others to play with; and we do what we can to share the data we collect for research on cybercrime and abuse.

Anton describes how more and more history books are found to have weak foundations, where historians quote things out of context, ignore contrary evidence, and elaborate myths and false facts into misleading stories that persist for decades. How can history correct itself more quickly? The answer, he argues, is Open History: making as many sources publicly available as possible, just like we computerists do.

As it happens, I scanned a number of old music manuscripts years ago to help other traditional music enthusiasts, but how can this be done at scale? One way forward comes from my college’s Archives Centre, which holds the personal papers of Sir Winston Churchill as well as those of other politicians and a number of eminent scientists. There the algorithm is that when someone requests a document, it’s also scanned and put online; so anything Alice looked at, Bob can look at too. This has raised some interesting technical problems around indexing and long-term archiving, which I believe we now have under control, and I’m pleased to say that the Archives Centre is now celebrating its 50th anniversary.
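The rule itself is trivial to state in code – a minimal sketch, with hypothetical names and a made-up document reference; this is not the Archives Centre’s actual system:

    _scans = {}  # document reference -> URL of the published scan

    def _digitise(ref):
        # stand-in for the real scanning-and-upload pipeline
        return "https://archive.example.org/scans/" + ref + ".pdf"

    def request_document(ref):
        """The first request triggers a scan; every later reader gets
        the same URL, so anything Alice looked at, Bob can look at too."""
        if ref not in _scans:
            _scans[ref] = _digitise(ref)
        return _scans[ref]

    print(request_document("DOC-1234"))  # Alice's request pays the scanning cost
    print(request_document("DOC-1234"))  # Bob's request is satisfied for free

The hard problems, as noted above, are the indexing and the long-term archiving, not the caching logic.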

It would also be helpful if old history books were as available online as they are in our library. Given that the purpose of copyright law is to maximise the amount of material that’s eventually available to all, I believe we should change the law to make continued copyright conditional on open access after an initial commercial period. Otherwise our historians’ output vanishes from the time their books come off sale until copyright expires, perhaps a century later.

My own Security Engineering book may show the way. With both the first edition in 2001 and the second edition in 2008, I put six chapters online for free at once, then released the others four years after publication. For the third edition, I negotiated an agreement with the publishers to put the chapters online for review as I wrote them. So the book came out by instalments, like Dickens’ novels, from April 2019 to September 2020. On the first of November 2020, all except seven sample chapters disappeared from this page for a period of 42 months; I’m afraid Wiley insisted on that. But after that, the whole book will be free online forever.

This also makes commercial sense. For both the 2001 and 2008 editions, paid-for sales of paper copies increased significantly after the whole book went online. People found my book online, liked what they saw, and then bought a paper copy rather than just downloading it all and printing out a thousand-odd pages. Open access after an exclusive period works for authors, for publishers and for history. It should be the norm.

The Pre-play Attack in Real Life

Recently I was contacted by a Falklands veteran who was a victim of what appears to have been a classic pre-play attack; his story is told here.

Almost ten years ago, after we wrote a paper on the pre-play attack, we were contacted by a Scottish sailor who’d bought a drink in a bar in Las Ramblas in Barcelona for €33, and found the following morning that he’d been charged €33,000 instead. The bar had submitted ten transactions an hour apart for €3,300 each, and when we got the transaction logs it turned out that these transactions had been submitted through three different banks. What’s more, although the transactions came from the same terminal ID, they had different terminal characteristics. When the sailor’s lawyer pointed this out to Lloyds Bank, they grudgingly accepted that it had been technical fraud and refunded the money.

In the years since then, I’ve used this as a teaching example both in tutorial talks and in university lectures. A payment card user has no trustworthy user interface, so the PIN entry device can present any transaction, or series of transactions, for authentication, and the customer is none the wiser. The mere fact that a customer’s card authenticated a transaction does not imply that the customer mandated that payment.
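A toy model makes this concrete. The following is not real EMV – the cryptogram below is just a stand-in MAC – but it shows why “the card authenticated it” proves nothing about what the customer saw:

    import hashlib
    import hmac
    import json

    CARD_KEY = b"key-shared-between-card-and-issuer"  # hypothetical

    def card_authorise(txn):
        """The card MACs whatever transaction data the terminal presents;
        it has no display, so it cannot know what the customer was shown."""
        payload = json.dumps(txn, sort_keys=True).encode()
        return hmac.new(CARD_KEY, payload, hashlib.sha256).hexdigest()

    # The rogue terminal shows the customer a EUR 33 transaction...
    print("DISPLAY: EUR 33.00 - PLEASE ENTER PIN")

    # ...but while the card is inserted, it harvests cryptograms for ten
    # EUR 3,300 transactions the customer never sees. Each one verifies
    # correctly at the issuer: "card used, PIN entered".
    harvested = [card_authorise({"amount": 3300, "currency": "EUR", "seq": i})
                 for i in range(10)]
    print(len(harvested), "valid cryptograms for payments nobody mandated")

Real EMV adds an “unpredictable number” from the terminal to the data the card signs, but as our pre-play paper showed, terminals that generate predictable numbers let attackers harvest cryptograms in just this way.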

Payment by phone should eventually fix this, but meantime the frauds continue. They’re particularly common in nightlife establishments, both here and overseas. In the first big British case, the Spearmint Rhino in Bournemouth had special conditions attached to its licence for some time after a series of frauds; a second case affected a similar establishment in Soho; there have been others. Overseas, we’ve seen cases affecting UK cardholders in Poland and the Baltic states. The technical modus operandi can involve a tampered terminal, a man-in-the-middle device or an overlay SIM card.

By now, such attacks are very well known and there really isn’t any excuse for banks pretending that they don’t exist. Yet in this case neither the first responder at Barclays nor the case handler at the Financial Ombudsman Service seemed to understand such frauds at all. Multiple transactions against one cardholder, coming via different merchant accounts and after a delay, should have raised multiple red flags. But the banks have gone back to sleep, repeating the old line that the card was used and the customer’s PIN was entered, so it must all be the customer’s fault. This is the line they took twenty years ago when chip and PIN was first introduced, and indeed thirty years ago when we were suffering ATM fraud at scale from mag-stripe copying. The banks have learned nothing, except perhaps that they can often get away with lying about the security of their systems. And the ombudsman continues to claim that it’s independent.
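The checks involved are not sophisticated. Here is a minimal sketch of the red flags just mentioned – hypothetical field names, not any bank’s actual fraud engine:

    from collections import defaultdict

    def red_flags(txns):
        """txns: dicts with 'card', 'amount', 'acquirer' and 'time' keys."""
        flags = []
        by_card = defaultdict(list)
        for t in txns:
            by_card[t["card"]].append(t)
        for card, ts in by_card.items():
            ts.sort(key=lambda t: t["time"])
            acquirers = {t["acquirer"] for t in ts}
            if len(acquirers) > 1:
                flags.append(f"{card}: transactions via {len(acquirers)} different acquirers")
            amounts = [t["amount"] for t in ts]
            if len(ts) >= 3 and len(set(amounts)) == 1:
                flags.append(f"{card}: {len(ts)} identical charges of {amounts[0]}")
        return flags

    # Ten EUR 3,300 charges, an hour apart, through three acquirers:
    sample = [{"card": "card-1", "amount": 3300, "acquirer": acq, "time": h}
              for h, acq in enumerate(["bank-A", "bank-B", "bank-C"] * 3 + ["bank-A"])]
    print(red_flags(sample))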

2023 Workshop on the Economics of Information Security

WEIS 2023, the 22nd Workshop on the Economics of Information Security, will be held in Geneva on July 5–7, with a theme of Digital Sovereignty. We now have a list of sixteen accepted papers; there will also be three invited speakers, ten posters, and ten challenges for a Digital Sovereignty Hack on July 7–8.

The deadline for early registration is June 10th, and we have discount hotel bookings reserved until then. As Geneva gets busy in summer, we suggest you reserve your room now!

Interop: One Protocol to Rule Them All?

Everyone’s worried that the UK Online Safety Bill and the EU Child Sex Abuse Regulation will put an end to end-to-end encryption. But might a law already passed by the EU have the same effect?

The Digital Markets Act ruled that users on different platforms should be able to exchange messages with each other. This opens up a real Pandora’s box. How will the networks manage keys, authenticate users, and moderate content? How much metadata will have to be shared, and how?
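To see why the key-management question is the crux, consider the naive design: a bridge that decrypts traffic from one platform and re-encrypts it for the other. A minimal sketch, using Fernet from the Python cryptography package as a stand-in for each platform’s E2EE scheme:

    from cryptography.fernet import Fernet

    key_a = Fernet.generate_key()  # stands in for platform A's message keys
    key_b = Fernet.generate_key()  # stands in for platform B's message keys

    def bridge(ciphertext_from_a):
        plaintext = Fernet(key_a).decrypt(ciphertext_from_a)  # the bridge sees plaintext!
        return Fernet(key_b).encrypt(plaintext)

    msg = Fernet(key_a).encrypt(b"see you at six")
    assert Fernet(key_b).decrypt(bridge(msg)) == b"see you at six"

Whoever runs the bridge can read everything, so this is no longer end-to-end encryption. Avoiding it means clients on different platforms negotiating keys with each other directly – which brings back all the questions above about key management, authentication and metadata.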

In our latest paper, One Protocol to Rule Them All? On Securing Interoperable Messaging, we explore the security tensions, the conflicts of interest, the usability traps, and the likely consequences for individual and institutional behaviour.

Interoperability will vastly increase the attack surface at every level in the stack – from the cryptography up through usability to commercial incentives and the opportunities for government interference.

Twenty-five years ago, we warned that key escrow mechanisms would endanger cryptography by increasing complexity, even if the escrow keys themselves can be kept perfectly secure. Interoperability is complexity on steroids.

Bugs still considered harmful

A number of governments are trying to mandate surveillance software in devices that support end-to-end encrypted chat, the EU’s CSA Regulation and the UK’s Online Safety Bill being two prominent current examples. Colleagues and I wrote Bugs in Our Pockets in 2021 to point out what was likely to go wrong; GCHQ responded with arguments about child protection, which I countered in my paper Chat Control or Child Protection?

As lawmakers continue to discuss the policy, the latest round in the technical argument comes from the Rephrain project, which was tasked with evaluating five prototypes built with money from GCHQ and the Home Office. Their report may be worth a read.

One contender looks for known-bad photos and videos with software on both client and server, and is the only team with access to CSAM for training or testing (it has the IWF as a partner). However, it has inadequate controls against scope creep, false positives and malicious accusations.
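For readers unfamiliar with how known-bad image matching works, here is a minimal average-hash sketch, assuming the Pillow imaging library; the contender’s actual algorithm has not been published. It also shows where false positives come from: anything within the match threshold fires, related or not.

    from PIL import Image

    def average_hash(path):
        """64-bit perceptual hash: shrink to 8x8 greyscale, then one bit
        per pixel according to whether it is brighter than the mean."""
        pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
        mean = sum(pixels) / 64
        h = 0
        for p in pixels:
            h = (h << 1) | int(p > mean)
        return h

    def hamming(h1, h2):
        return bin(h1 ^ h2).count("1")

    def matches(path, bad_hashes, threshold=8):
        """Flag an image if it is near any hash on the known-bad list.
        A small threshold misses recompressed or cropped copies; a
        large one sweeps in unrelated images."""
        h = average_hash(path)
        return any(hamming(h, b) <= threshold for b in bad_hashes)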

Another is an E2EE communications tool with an added profanity filter and image scanning, linked to age verification, and with no safeguards except human moderation at the reporting server.

The other three contenders are nudity detectors with various combinations of age verification or detection, and of reporting to parents or service providers.

None of these prototypes comes close to meeting reasonable requirements for efficacy and privacy. So the project can be seen as empirical support for the argument we made in “Bugs”, namely that doing surveillance while respecting privacy is really hard.

Chatcontrol or Child Protection?

Today I publish a detailed rebuttal to the argument from the intelligence community that we need to break end-to-end encryption in order to protect children. This has led in the UK to the Online Safety Bill and in the EU to the proposed Child Sex Abuse Regulation, which has become known in Brussels as “chatcontrol”.

The intelligence community wants to break WhatsApp, as that carries everything from diplomatic and business negotiations to MPs’ wheeling and dealing. Both the UK and EU proposals will take powers to mandate scanning of both text and images in your phone before messages are encrypted and sent, or after they are received and decrypted.

This is justified with arguments around child protection, which require careful study. Most child abuse happens in dysfunctional families, with the abuser typically being the mother’s partner; technology is often abused as a means of extortion and control. Indecent images get shared with outsiders, and user reports of such images are a really important way of alerting the police to new cases. There are also abusers who look for vulnerable minors online, and here too it’s user reporting that does most of the work.

But it costs money to get moderators to respond to user reports of abuse, so the tech firms’ performance here is unimpressive. Facebook seems to be the best of a bad lot, while Twitter is awful – and so hosts a lot more abuse. There’s a strong case for laws to compel service providers to manage user reporting better, and the EU’s Digital Services Act goes some way in this direction. The Online Safety Bill should be amended to do the same, and we produced a policy paper on this last week.

But details matter, as it’s important to understand the many inappropriate laws, dysfunctional institutions and perverse incentives that get in the way of rational policies around the online aspects of crimes of sexual violence against minors. (The same holds for violent online political extremism, which is also used as an excuse for more censorship and surveillance.) We do indeed need to spend more money on reducing violent crime, but it should be spent locally on hiring more police officers and social workers to deal with family violence directly. We also need welfare reform to reduce the number of families living in poverty.

As for surveillance, it has not helped in the past and there is no real prospect that the measures now proposed would help in the future. I go through the relevant evidence in my paper and conclude that “chatcontrol” will not improve child protection, but damage it instead. It will also undermine human rights at a time when we need to face down authoritarians not just technologically and militarily, but morally as well. What’s the point of this struggle, if not to defend democracy, the rule of law, and human rights?

Edited to add: here is a video of a talk I gave on the paper at Digitalize.

The Online Safety Bill: Reboot it, or Shoot it?

Yesterday I took part in a panel discussion organised by the Adam Smith Institute on the Online Safety Bill. This sprawling legislative monster has outlasted not just six Secretaries of State for Culture, Media and Sport, but two Prime Ministers. It’s due to slither back to Parliament in November, so we wrote a Policy Brief that explains what it tries to do and some of the things it gets wrong.

Some of the bill’s many proposals command wide support – for example, that online services should enable users to contact them effectively to report illegal material, which should be removed quickly. At present, only copyright owners and the police seem to be able to get the attention of the major platforms; ordinary people, including young people, should also be able to report unlawful things and have them taken down quickly. Here, the UK government intends to bind only large platforms like Facebook and Twitter. We propose extending the duty to gaming platforms too. Kids just aren’t on Facebook any more.

The Bill also tries to reignite the crypto wars by empowering Ofcom to require services to use “accredited technology” (read: software written by GCHQ contractors) to scan your WhatsApp messages. The idea that you can catch violent criminals such as child abusers and terrorists by bulk text scanning is entirely implausible; the error rates are so high that the police would be swamped with false positives. Quite apart from that, bulk intercept has always been illegal in Britain, and would also contravene the European Convention on Human Rights, to which we are still a signatory despite Brexit. This power to mandate client-side scanning has to be scrapped, a move that quite a few MPs already support.

But what should we do instead about illegal images of minors, and about violent online political extremism? More local policing would be better; we explain why. This is informed by our work on the link between violent extremism and misogyny, as well as our analysis of a similar proposal in the EU. So it is welcome that the government is hiring more police officers. What’s needed now is a greater focus on family violence, which is the root cause of most child abuse, rather than using child abuse as an excuse to increase the central agencies’ surveillance powers and budgets.

In our Policy Brief, we also discuss content moderation, and suggest that it be guided by the principle of minimising cruelty. One of the other panelists, Graham Smith, discussed the legal difficulties of regulating speech and made a strong case that restrictions (such as copyright, libel, incitement and harassment) should be set out in primary legislation rather than farmed out to private firms, as at present, or to a regulator, as the Bill proposes. Given that most of the bad stuff is illegal already, why not make a start by enforcing the laws we already have, as they do in Germany? British policing efforts online range from the pathetic to the outrageous. It looks like Parliament will have some interesting decisions to take when the bill comes back.

Text mining is harder than you think

Following last year’s row about Apple’s proposal to scan all the photos on your iPhone camera roll, EU Commissioner Johansson proposed a child sex abuse regulation that would compel providers of end-to-end encrypted messaging services to scan all messages in the client, and not just for historical abuse images but for new abuse images and for text messages containing evidence of grooming.

Now that journalists are distracted by the imminent downfall of our great leader, the Home Office seems to think this is a good time to propose some amendments to the Online Safety Bill that will have a similar effect. And while the EU planned to win the argument against the paedophiles first and then expand the scope to terrorist radicalisation and recruitment too, Priti Patel goes for the terrorists from day one. There’s some press coverage in the Guardian and the BBC.

We explained last year why client-side scanning is a bad idea. However, the shift of focus from historical abuse images to text scanning makes the government story even less plausible.

Detecting online wickedness from text messages alone is hard. Since 2016, we have collected over 99m messages from cybercrime forums and over 49m from extremist forums, and these corpora are used by 179 licensees in 55 groups from 42 universities in 18 countries worldwide. Hate-speech detection is a good proxy for detecting terrorist radicalisation. In 2018, we thought we could detect hate speech with a precision of typically 92%, which would mean a false-alarm rate of 8%. But the more complex models of 2022, based on Google’s BERT, tested on the better collections we have now, don’t do significantly better; indeed, now that we understand the problem in more detail, they often do worse. Do read that paper if you want to understand why hate-speech detection is an interesting scientific problem. With some specific kinds of hate speech it’s even harder; an example is anti-semitism, thanks to the large number of synonyms for Jewish people. So if we were to scan 10bn messages a day in Europe, there would be maybe a billion false alarms for Europol to look at.
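The arithmetic behind that last sentence is worth making explicit, on the assumption (as in the text) that the 8% figure applies across the scanned traffic:

    messages_per_day = 10_000_000_000  # 10bn messages a day in Europe
    false_alarm_rate = 0.08            # the 8% figure above

    false_alarms = messages_per_day * false_alarm_rate
    print(f"{false_alarms:,.0f} false alarms a day")  # 800,000,000 - of order a billion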

We’ve been scanning the Internet for wickedness for over fifteen years now, looking at various kinds of filters for everything from spam to malware. Filtering requires very low false-positive rates to be feasible at Internet scale, which means either looking for very specific things (such as indicators of compromise for a specific piece of malware) or relying on rich metadata (such as a big spam run from some IP address space you know to be compromised). Whatever filtering Facebook can do on Messenger given its rich social context, there will be much less that a WhatsApp client can do by scanning each text on its way through.
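The contrast between those two modes is easy to see in code. An exact indicator match – a minimal sketch with a made-up sample – fires only on byte-identical content, which is what keeps its false-positive rate near zero at Internet scale, and also what makes it blind to anything new:

    import hashlib

    # Hypothetical indicator of compromise: the hash of one specific payload.
    BAD_HASHES = {hashlib.sha256(b"known malware sample").hexdigest()}

    def exact_filter(message):
        """Near-zero false positives, but misses anything even slightly altered."""
        return hashlib.sha256(message).hexdigest() in BAD_HASHES

    print(exact_filter(b"known malware sample"))   # True
    print(exact_filter(b"known malware sample!"))  # False - one byte different

A text classifier has the opposite profile: it generalises, but at false-positive rates that are hopeless at this scale.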

So if you really wish to believe that either the EU’s CSA Regulation or the UK’s Online Safety Bill is an honest attempt to protect kids or catch terrorists, good luck.