Category Archives: Cybercrime

Chatcontrol or Child Protection?

Today I publish a detailed rebuttal to the argument from the intelligence community that we need to break end-to-end encryption in order to protect children. This has led in the UK to the Online Safety Bill and in the EU to the proposed Child Sex Abuse Regulation, which has become known in Brussels as “chatcontrol”.

The intelligence community wants to break WhatsApp, as that carries everything from diplomatic and business negotiations to MPs’ wheeling and dealing. Both the UK and EU proposals will take powers to mandate scanning of both text and images in your phone before messages are encrypted and sent, or after they are received and decrypted.

This is justified with arguments around child protection, which require careful study. Most child abuse happens in dysfunctional families, with the abuser typically being the mother’s partner; technology is often abused as a means of extortion and control. Indecent images get shared with outsiders, and user reports of such images are a really important way of alerting the police to new cases. There are also abusers who look for vulnerable minors online, and here too it’s user reporting that does most of the work.

But it costs money to get moderators to respond to user reports of abuse, so the tech firms’ performance here is unimpressive. Facebook seems to be the best of a bad lot, while Twitter is awful – and so hosts a lot more abuse. There’s a strong case for laws to compel service providers to manage user reporting better, and the EU’s Digital Services Act goes some way in this direction. The Online Safety Bill should be amended to do the same, and we produced a policy paper on this last week.

But details matter, as it’s important to understand the many inappropriate laws, dysfunctional institutions and perverse incentives that get in the way of rational policies around the online aspects of crimes of sexual violence against minors. (The same holds for violent online political extremism, which is also used as an excuse for more censorship and surveillance.) We do indeed need to spend more money on reducing violent crime, but it should be spent locally on hiring more police officers and social workers to deal with family violence directly. We also need welfare reform to reduce the number of families living in poverty.

As for surveillance, it has not helped in the past and there is no real prospect that the measures now proposed would help in the future. I go through the relevant evidence in my paper and conclude that “chatcontrol” will not improve child protection, but damage it instead. It will also undermine human rights at a time when we need to face down authoritarians not just technologically and militarily, but morally as well. What’s the point of this struggle, if not to defend democracy, the rule of law, and human rights?

Edited to add: here is a video of a talk I gave on the paper at Digitalize.

The Online Safety Bill: Reboot it, or Shoot it?

Yesterday I took part in a panel discussion organised by the Adam Smith Institute on the Online Safety Bill. This sprawling legislative monster has outlasted not just six Secretaries of State for Culture, Media and Sport, but two Prime Ministers. It’s due to slither back to Parliament in November, so we wrote a Policy Brief that explains what it tries to do and some of the things it gets wrong.

Some of the bill’s many proposals command wide support – for example, that online services should enable users to contact them effectively to report illegal material, which should be removed quickly. At present, only copyright owners and the police seem to be able to get the attention of the major platforms; ordinary people, including young people, should also be able to report unlawful things and have them taken down quickly. Here, the UK government intends to bind only large platforms like Facebook and Twitter. We propose extending the duty to gaming platforms too. Kids just aren’t on Facebook any more.

The Bill also tries to reignite the crypto wars by empowering Ofcom to require services to use “accredited technology” (read: software written by GCHQ contractors) to scan your WhatsApp messages. The idea that you can catch violent criminals such as child abusers and terrorists by bulk text scanning is entirely implausible; the error rates are so high that the police would be swamped with false positives. Quite apart from that, bulk intercept has always been illegal in Britain, and would also contravene the European Convention on Human Rights, to which we are still a signatory despite Brexit. This power to mandate client-side scanning has to be scrapped, a move that quite a few MPs already support.
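To see why bulk scanning swamps investigators, it helps to do the base-rate arithmetic. The numbers below are assumptions chosen purely for illustration, not figures from the Bill or from any vendor; the point is that when the underlying offence is rare, even a scanner with a seemingly low error rate produces vastly more false alarms than true ones.

```python
# Back-of-the-envelope base-rate arithmetic for bulk message scanning.
# Every number here is an illustrative assumption, not an official figure.

messages_per_day = 5e9        # assumed daily message volume across platforms
abusive_fraction = 1e-6       # assumed fraction of messages that are illegal
false_positive_rate = 1e-3    # assumed per-message false-positive rate
true_positive_rate = 0.9      # assumed detection rate on illegal messages

abusive = messages_per_day * abusive_fraction
innocent = messages_per_day - abusive

true_alarms = abusive * true_positive_rate
false_alarms = innocent * false_positive_rate

print(f"True alarms per day:  {true_alarms:,.0f}")
print(f"False alarms per day: {false_alarms:,.0f}")
print(f"Share of alarms that are real: {true_alarms / (true_alarms + false_alarms):.2%}")
```

With these assumed figures, roughly 4,500 real hits are buried under about five million false alarms a day, so fewer than one alarm in a thousand concerns anything illegal, and every alarm still needs a human to look at it.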

But what should we do instead about illegal images of minors, and about violent online political extremism? More local policing would be better; we explain why. This is informed by our work on the link between violent extremism and misogyny, as well as our analysis of a similar proposal in the EU. So it is welcome that the government is hiring more police officers. What’s needed now is a greater focus on family violence, which is the root cause of most child abuse, rather than using child abuse as an excuse to increase the central agencies’ surveillance powers and budgets.

In our Policy Brief, we also discuss content moderation, and suggest that it be guided by the principle of minimising cruelty. One of the other panelists, Graham Smith, discussed the legal difficulties of regulating speech and made a strong case that restrictions (such as copyright, libel, incitement and harassment) should be set out in primary legislation rather than farmed out to private firms, as at present, or to a regulator, as the Bill proposes. Given that most of the bad stuff is illegal already, why not make a start by enforcing the laws we already have, as they do in Germany? British policing efforts online range from the pathetic to the outrageous. It looks like Parliament will have some interesting decisions to take when the bill comes back.


Reporting cybercrime is hard: NCA link to Action Fraud broken for 3 years

Screenshot of the archived version of the Action Fraud website linked to from the NCA contact us page.

Yesterday I was asked for advice on how to anonymously report a new crypto scam that a potential victim had spotted before losing any money (hint: to a first approximation, all cryptocurrencies and cryptoassets are a scam). In the end they got fed up with the difficulty of finding someone to tell, and gave up. To give the advice, I thought I would check what the National Crime Agency’s National Cyber Crime Unit suggested, so I searched for “NCA NCCU report scam”; the first result was the NCA’s Contact us page. Sounds good. It has a “Fraud” section which, as expected, talks about Action Fraud. However, since 2019 this page has linked to the National Archives’ copy of an old version of the Action Fraud website. So for three years, anyone following the NCA website’s advice on how to report fraud would have been thoroughly confused until they worked out they were on a (clearly labelled) archive rather than the live site, which is why none of the forms work.
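Link rot of this kind is trivial to detect automatically. Here is a minimal sketch, assuming the page URL below (a placeholder) is the NCA contact page, that flags any outbound link pointing at the UK Government Web Archive rather than a live site:

```python
# Sketch: flag outbound links that point at the UK Government Web Archive
# instead of a live site. The page URL is a placeholder for illustration.
import re
from urllib.request import urlopen

PAGE = "https://www.nationalcrimeagency.gov.uk/contact-us"  # placeholder URL

html = urlopen(PAGE, timeout=30).read().decode("utf-8", errors="replace")
links = re.findall(r'href="(https?://[^"]+)"', html)

for url in links:
    if "webarchive.nationalarchives.gov.uk" in url:
        print("Links to an archived snapshot, probably stale:", url)
```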

I reported this problem yesterday, and I do not expect it to have been fixed by the time you read this; but a problem that has gone unresolved for three years is a clear example of the difficulties faced by victims of cybercrime.

2019 is also the year that Police Scotland declined to pay for Action Fraud, as they did not consider it to provide value for money, and chose instead to handle fraud reporting internally.

I am PI of a PhD project on Improving Cybercrime Reporting, jointly supervised between the University of Strathclyde and the University of Edinburgh and funded by the Scottish Institute for Policing Research and the University of Strathclyde. Do get in touch if you have other stories of the difficulties of reporting cybercrime. The student, Juraj Sikra, has published a systematic literature review on Improving Cybercrime Reporting in Scotland. It is clear that there is a long way to go to provide person-centred cybercrime reporting for victims and potential victims. However, UK law enforcement in general, and Police Scotland in particular, know there is a problem and do want to fix it.

Hiring for iCrime

A Research Assistant/Associate position is available at the Department of Computer Science and Technology to work on the ERC-funded Interdisciplinary Cybercrime Project (iCrime). We are looking to appoint a computer scientist to join an interdisciplinary team reporting to Dr Alice Hutchings.

iCrime incorporates expertise from criminology and computer science to research cybercrime offenders, their crime type, the place (such as online black markets), and the response. Within iCrime, we sustain robust data collection infrastructure to gather unique, high quality datasets, and design novel methodologies to identify and measure criminal infrastructure at scale. This is particularly important as cybercrime changes dynamically. Overall, our approach is evaluative, critical, and data driven.

Successful applicants will work in a team to collect and analyse data, develop tools, and write research outputs. Desirable technical skills include:

– Familiarity with automated data collection (web crawling and scraping) and techniques to sustain complex data collection in adversarial environments at scale.
– Excellent software engineering skills, including familiarity with Python, Bash scripting, and web development, particularly NodeJS and ReactJS.
– Experience in DevOps to integrate and migrate new tools within the existing ecosystem, and to automate data collection/transmission/backup pipelines.
– Working knowledge of Linux/Unix.
– Familiarity with large-scale databases, including relational databases and ElasticSearch.
– Practical knowledge of security and privacy to keep existing systems secure and protect against data leakage.
– Expertise in cybercrime research and data science/analysis is desirable, but not essential.

Please read the formal advertisement (at https://www.jobs.cam.ac.uk/job/34324/) for the details about exactly who and what we’re looking for and how to apply — and please pay special attention to our request for a covering letter!

Security engineering course

This week sees the start of a course on security engineering that Sam Ainsworth and I are teaching. It’s based on the third edition of my Security Engineering book, and is a first cut at a ‘film of the book’.

Each week we will put two lectures online, and here are the first two. Lecture 1 discusses our adversaries, from nation states through cyber-crooks to personal abuse, and the vulnerability life cycle that underlies the ecosystem of attacks. Lecture 2 abstracts this empirical experience into more formal threat models and security policies.

Although our course is designed for masters students and fourth-year undergrads in Edinburgh, we’re making the lectures available to everyone. I’ll link the rest of the videos in followups here, and eventually on the book’s web page.

WEIS 2022 call for papers

The 2022 Workshop on the Economics of Information Security will be held in Tulsa, Oklahoma, on 21-22 June 2022. Paper submissions are due by 28 February 2022. After two virtual events we’re eager to get back to meeting in person if we possibly can.

The program chairs for 2022 are Sadia Afroz and Laura Brandimarte, and here is the call for papers.

We originally set the dates as 20-21 June, being unaware that June 20 is when the Juneteenth holiday is observed in the USA this year. Sorry about that.

Anyway, we hope to see lots of you in Tulsa!

Is Apple’s NeuralMatch searching for abuse, or for people?

Apple stunned the tech industry on Thursday by announcing that the next version of iOS and macOS will contain a neural network to scan photos for sex abuse. Each photo will get an encrypted ‘safety voucher’ saying whether or not it’s suspect, and if more than about ten suspect photos are backed up to iCloud, then a clever cryptographic scheme will unlock the keys used to encrypt them. Apple staff or contractors can then look at the suspect photos and report them.
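Apple has not published its code, but the threshold behaviour it describes is a standard cryptographic technique: a key is split into shares, and only once enough shares (here, one per matching photo) have been collected can it be reconstructed. The toy Shamir secret-sharing sketch below illustrates the general idea; the field size, the threshold of ten and the share format are my assumptions, and this is not Apple’s actual protocol.

```python
# Toy Shamir t-out-of-n secret sharing: the secret (say, a decryption key) can
# only be reconstructed once at least t shares have been collected, e.g. one
# share per photo that matches the list. Illustrative only; not Apple's protocol.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime large enough to hold a 16-byte key

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)
shares = make_shares(key, t=10, n=1000)   # one potential share per photo
assert reconstruct(shares[:10]) == key    # ten matches: key recovered
assert reconstruct(shares[:9]) != key     # nine matches: reconstruction fails
                                          # (except with negligible probability)
```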

We’re told that the neural network was trained on 200,000 images of child sex abuse provided by the US National Center for Missing and Exploited Children. Neural networks are good at spotting images “similar” to those in their training set, and people unfamiliar with machine learning may assume that Apple’s network will recognise criminal acts. The police might even be happy if it recognises a sofa on which a number of acts took place. (You might be less happy, if you own a similar sofa.) Then again, it might learn to recognise naked children, and flag up a snap of your three-year-old child on the beach. So what the new software in your iPhone actually recognises is really important.

Now the neural network described in Apple’s documentation appears very similar to the networks used in face recognition (hat tip to Nicko van Someren for spotting this). So it seems a fair bet that the new software will recognise people whose faces appear in the abuse dataset on which it was trained.

So what will happen when someone’s iPhone flags ten pictures as suspect, and the Apple contractor who looks at them sees an adult with their clothes on? There’s a real chance that they’re either a criminal or a witness, so they’ll have to be reported to the police. In the case of a survivor who was victimised ten or twenty years ago, and whose pictures still circulate in the underground, this could mean traumatic secondary victimisation. It might even be their twin sibling, or a genuine false positive in the form of someone who just looks very much like them. What processes will Apple use to manage this? Not all US police forces are known for their sensitivity, particularly towards minority suspects.

But that’s just the beginning. Apple’s algorithm, NeuralMatch, stores a fingerprint of each image in its training set as a short string called a NeuralHash, so new pictures can easily be added to the list. Once the tech is built into your iPhone, your MacBook and your Apple Watch, and can scan billions of photos a day, there will be pressure to use it for other purposes. The other part of NCMEC’s mission is missing children. Can Apple resist demands to help find runaways? Could Tim Cook possibly be so cold-hearted as to refuse to add Madeleine McCann to the watch list?
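The matching step itself is mechanically simple: each photo is reduced to a short perceptual hash and compared against the list, so extending coverage really is just a matter of adding one more hash. The sketch below shows generic perceptual-hash matching by Hamming distance; the hash length, the threshold and the example values are assumptions, not NeuralMatch’s actual parameters.

```python
# Generic perceptual-hash list matching. The hash length, threshold and example
# values are assumptions for illustration, not NeuralMatch's actual parameters.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two equal-length hashes."""
    return (a ^ b).bit_count()

def matches_list(photo_hash: int, watch_list: set[int], threshold: int = 8) -> bool:
    """Flag a photo whose hash is within `threshold` bits of any listed hash."""
    return any(hamming(photo_hash, h) <= threshold for h in watch_list)

watch_list = {0x9F3A22C107DE4B55}     # hypothetical 64-bit hashes
watch_list.add(0x1042FFEE90ABCDEF)    # adding a new picture is one more entry

print(matches_list(0x9F3A22C107DE4B51, watch_list))  # True: only one bit differs
```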

After that, your guess is as good as mine. Depending on where you are, you might find your photos scanned for dissidents, religious leaders or the FBI’s most wanted. It also reminds me of the Rasterfahndung in 1970s Germany – the dragnet search of all digital data in the country for clues to the Baader-Meinhof gang. Only now it can be done at scale, and not just for the most serious crimes either.

Finally, there’s adversarial machine learning. Neural networks are fairly easy to fool in that an adversary can tweak images so they’re misclassified. Expect to see pictures of cats (and of Tim Cook) that get flagged as abuse, and gangs finding ways to get real abuse past the system. Apple’s new tech may end up being a distributed person-search machine, rather than a sex-abuse prevention machine.
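Tweaking an image to fool a network is not hard: the standard fast gradient sign method (FGSM) nudges every pixel slightly in the direction that increases the classifier’s loss, producing a change a human cannot see but the model misreads. The sketch below is a minimal, generic FGSM against an arbitrary image classifier; the model, labels and epsilon are placeholders, and nothing here is specific to Apple’s system.

```python
# Minimal FGSM sketch: perturb an image so a classifier misclassifies it while
# the change stays imperceptible. Model, labels and epsilon are placeholders.
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, image: torch.Tensor, label: torch.Tensor,
         epsilon: float = 0.01) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image` (values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss, then clip.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage, given some pretrained `model`, a batch of `images` and true `labels`:
#   adversarial = fgsm(model, images, labels)
#   model(adversarial).argmax(1)   # often no longer the true labels
```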

Such a technology requires public scrutiny, and as the possession of child sex abuse images is a strict-liability offence, academics cannot work with them. While the crooks will dig out NeuralMatch from their devices and play with it, we cannot. It is possible in theory for Apple to get NeuralMatch to ignore faces; for example, it could blur all the faces in the training data, as Google does for photos in Street View. But they haven’t claimed they did that, and if they did, how could we check? Apple should therefore publish full details of NeuralMatch plus a set of NeuralHash values trained on a public dataset with which we can legally work. It also needs to explain how the system it deploys was tuned and tested; and how dragnet searches of people’s photo libraries will be restricted to those conducted by court order so that they are proportionate, necessary and in accordance with the law. If that cannot be done, the technology must be abandoned.

Cybercrime gangs as tech startups

In our latest paper, we propose a better way of analysing cybercrime.

Crime has been moving online, like everything else, for the past 25 years, and for the past decade or so it’s accounted for more than half of all property crimes in developed countries. Criminologists have tried to apply their traditional tools and methods to measure and understand it, yet even when these research teams include technologists, it always seems that there’s something missing. The people who phish your bank credentials are just not the same people who used to burgle your house. They have different backgrounds, different skills and different organisation.

We believe a missing factor is entrepreneurship. Cyber-crooks are running tech startups, and face the same problems as other tech entrepreneurs. There are preconditions that create the opportunity. There are barriers to entry to be overcome. There are pathways to scaling up, and bottlenecks that inhibit scaling. There are competitive factors, whether competing crooks or motivated defenders. And finally there may be saturation mechanisms that inhibit growth.

One difference with regular entrepreneurship is the lack of finance: a malware gang can’t raise VC to develop a cool new idea, or cash out by means of an IPO. They have to use their profits not just to pay themselves, but also to invest in new products and services. In effect, cybercrooks are trying to run a tech startup with the financial infrastructure of an ice-cream stall.

We have developed this framework from years of experience dealing with many types of cybercrime, and it appears to be a useful way of analysing new scams, so that we can spot developments which, like ransomware, are capable of growing into a real problem.

Our paper Silicon Den: Cybercrime is Entrepreneurship will appear at WEIS on Monday.

Hiring for iCrime

We are hiring two Research Assistants/Associates to work on the ERC-funded Interdisciplinary Cybercrime Project (iCrime). We are looking to appoint one computer scientist and one social scientist to work in an interdisciplinary team reporting to Dr Alice Hutchings.

iCrime incorporates expertise from criminology and computer science to research cybercrime offenders, their crime type, the place (such as online black markets), and the response. We will map out the pathways of cybercrime offenders and the steps and skills required to successfully undertake complex forms of cybercrime. We will analyse the social dynamics and economies surrounding cybercrime markets and forums. We will use our findings to inform crime prevention initiatives and use experimental designs to evaluate their effects.

Within iCrime, we will develop tools to identify and measure criminal infrastructure at scale. We will use and develop unique datasets and design novel methodologies. This is particularly important as cybercrime changes dynamically. Overall, our approach will be evaluative, critical, and data driven.

If you’re a computer scientist, please follow the link at: https://www.jobs.cam.ac.uk/job/30100/

If you’re a social scientist, please follow the link at: https://www.jobs.cam.ac.uk/job/30099/

Please read the formal advertisements for the details about exactly who and what we’re looking for and how to apply — and please pay special attention to our request for a covering letter!

10/06/21 Edited to add new links