Text mining is harder than you think

Following last year’s row about Apple’s proposal to scan all the photos on your iPhone camera roll, EU Commissioner Johansson proposed a child sex abuse regulation that would compel providers of end-to-end encrypted messaging services to scan all messages in the client, and not just for historical abuse images but for new abuse images and for text messages containing evidence of grooming.

Now that journalists are distracted by the imminent downfall of our great leader, the Home Office seems to think this is a good time to propose some amendments to the Online Safety Bill that will have a similar effect. And while the EU planned to win the argument against the pedophiles first and then expand the scope to terrorist radicalisation and recruitment too, Priti Patel goes for the terrorists from day one. There’s some press coverage in the Guardian and the BBC.

We explained last year why client-side scanning is a bad idea. However, the shift of focus from historical abuse images to text scanning makes the government story even less plausible.

Detecting online wickedness from text messages alone is hard. Since 2016, we have collected over 99m messages from cybercrime forums and over 49m from extremist forums, and these corpora are used by 179 licensees in 55 groups from 42 universities in 18 countries worldwide. Detecting hate speech is a good proxy for detecting terrorist radicalisation. In 2018, we thought we could detect hate speech with a precision of typically 92%, meaning that 8% of the messages flagged would be false alarms. But the more complex models of 2022, based on Google’s BERT, don’t do significantly better when tested on the better collections we have now; indeed, now that we understand the problem in more detail, they often do worse. Do read that paper if you want to understand why hate-speech detection is an interesting scientific problem. Some specific kinds of hate speech are harder still; an example is anti-semitism, thanks to the large number of synonyms for Jewish people. So if we were to scan 10bn messages a day in Europe, there would be maybe a billion false alarms a day for Europol to look at.
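
To make the arithmetic behind that last sentence explicit, here is a back-of-the-envelope sketch in Python. The base rate, recall and false-positive rate below are illustrative assumptions, not measurements from our corpora; the point is that when genuinely bad messages are rare, almost every alarm a scanner raises will be false, however good the classifier looks on a benchmark.

```python
# Back-of-the-envelope arithmetic for text scanning at scale.
# All the rates below are illustrative assumptions, not measured values.

messages_per_day = 10_000_000_000  # ~10bn messages scanned per day in Europe
base_rate = 1e-4                   # assumed fraction of messages that are actually bad
recall = 0.90                      # assumed fraction of bad messages the classifier catches
false_positive_rate = 0.08        # assumed fraction of innocent messages flagged anyway

bad_messages = messages_per_day * base_rate
true_alarms = bad_messages * recall
false_alarms = (messages_per_day - bad_messages) * false_positive_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"true alarms per day:  {true_alarms:,.0f}")   # 900,000
print(f"false alarms per day: {false_alarms:,.0f}")  # ~800,000,000
print(f"precision of an alarm: {precision:.2%}")     # ~0.11%: almost every alarm is false
```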

We’ve been scanning the Internet for wickedness for over fifteen years now, looking at various kinds of filters for everything from spam to malware. Filtering requires very low false-positive rates to be feasible at Internet scale, which means either looking for very specific things (such as indicators of compromise by a specific piece of malware) or having rich metadata (such as a big spam run from some IP address space you know to be compromised). Whatever filtering Facebook can do on Messenger given its rich social context, there will be much less that a WhatsApp client can do by scanning each text on its way through.
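
By way of contrast, here is a sketch of those two filtering styles; the indicator set and the classifier stub are placeholders invented for illustration, not any real system’s API. Matching specific indicators is essentially deterministic, so its false-positive rate is close to zero, whereas any probabilistic classifier over free text has to trade misses against false alarms at whatever threshold it uses.

```python
import hashlib

# Digests of specific known-bad payloads (e.g. malware samples or spam
# templates); the entry below is just the SHA-256 of b"test".
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_indicator(payload: bytes) -> bool:
    """Exact match against known indicators: essentially zero false
    positives, but it can only find things already on the list."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

def classifier_score(text: str) -> float:
    """Stand-in for a trained text classifier (e.g. a BERT fine-tune);
    whatever the model, it returns a probability, not a certainty."""
    return 0.5  # placeholder score

def flag_message(text: str, threshold: float = 0.9) -> bool:
    # Lowering the threshold catches more abuse but floods reviewers
    # with false alarms; raising it misses more of the real thing.
    return classifier_score(text) >= threshold

print(matches_indicator(b"test"))        # True: exact indicator hit
print(flag_message("wholly innocuous"))  # False at this threshold
```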

So if you really wish to believe that either the EU’s CSA Regulation or the UK’s Online Safety Bill is an honest attempt to protect kids or catch terrorists, good luck.

5 thoughts on “Text mining is harder than you think”

  1. Thank you for your work and for this piece in particular. You explain the high false-positive rate and why it’s a problem. Is there any research about false-negative rates? You mention the problems regarding anti-semitic hate speech, which may mean a significant rate of undetected cases. Are you aware of any estimates regarding undetected grooming or actual child abuse?

  2. We’re unable to do any effective research on CSA and grooming, because CSA material is illegal to possess, and the ‘mens rea’ test doesn’t apply. So the only people who can do that research are firms like Facebook and Google, who come across it in the course of their business, and government agencies involved in enforcement.

    However, it does seem likely that if you’re trying to find a 50-year-old MP who’s trying to groom a 15-year-old boy by striking up a conversation in an online model railway club, then their conversation will not be much different from thousands of entirely innocuous chats in these forums. The giveaway, if any, will be in metadata. If it’s on Facebook, and Facebook happens to know everyone’s age and address and sexual preferences, then maybe they can raise some kind of alarm.

    Whether they should do so raises many other questions, of the kind that have been raised around the UK’s vetting and barring scheme. Was it sensible to bring that in after the Soham murders? Did it do any good? Even if it did, was that good greater than the harm it caused?

  3. “Detecting online wickedness from text messages alone is hard”.

    That’s not surprising, because merely identifying “wickedness” is very hard indeed. Do you know of someone who is planning to kill dozens of innocent civilians without warning? Obviously wicked, eh? But what if it’s your own nation’s “defence ministry”?

    Someone planning to give millions in “foreign aid” to a backward foreign country? Clearly not wicked. Until you discover that the “aid” takes the form of lethal weapons and training in torture.

    “Detecting online wickedness from text messages” can only become a goal if you are prepared to believe implicitly what some authority tells you about who and what is “wicked” – which no educated adult should ever do.

    1. Great comment, especially regarding adulthood. One wonders what Ross is.

      Free speech laws exist for a reason. Conflict is natural to humans and cannot be got rid of as long as human senses are necessarily local to their bodies. Hate speech is just one manifestation of this conflict. Unless one abolishes humanity, there’s no way to get rid of conflict.
