Hiring for iCrime

We are hiring two Research Assistants/Associates to work on the ERC-funded Interdisciplinary Cybercrime Project (iCrime). We are looking to appoint one computer scientist and one social scientist to work in an interdisciplinary team reporting to Dr Alice Hutchings.

iCrime incorporates expertise from criminology and computer science to research cybercrime offenders, the types of crime they commit, the places where it happens (such as online black markets), and the responses to it. We will map out the pathways of cybercrime offenders and the steps and skills required to successfully undertake complex forms of cybercrime. We will analyse the social dynamics and economies surrounding cybercrime markets and forums. We will use our findings to inform crime prevention initiatives and use experimental designs to evaluate their effects.

Within iCrime, we will develop tools to identify and measure criminal infrastructure at scale. We will use and develop unique datasets and design novel methodologies. This is particularly important as cybercrime changes dynamically. Overall, our approach will be evaluative, critical, and data driven.

If you’re a computer scientist, please follow the link at: https://www.jobs.cam.ac.uk/job/30100/

If you’re a social scientist, please follow the link at: https://www.jobs.cam.ac.uk/job/30099/

Please read the formal advertisements for the details about exactly who and what we’re looking for and how to apply — and please pay special attention to our request for a covering letter!

10/06/21 Edited to add new links

A new way to detect ‘deepfake’ picture editing

Common graphics software now offers powerful tools for inpainting – using machine-learning models to reconstruct missing pieces of an image. They are widely used for picture editing and retouching, but like many sophisticated tools they can also be abused. They can remove someone from a picture of a crime scene, or remove a watermark from a stock photo. Could we make such abuses more difficult?

We introduce Markpainting, which uses adversarial machine-learning techniques to fool the inpainter into making its edits evident to the naked eye. An image owner can modify their image in subtle ways which are not themselves very visible, but will sabotage any attempt to inpaint it by adding visible information determined in advance by the markpainter.
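
To make this concrete, here is a minimal sketch, in PyTorch, of the kind of optimisation a markpainter performs; it illustrates the idea rather than reproducing the code from our paper. The owner searches for a small, bounded perturbation such that a differentiable inpainting model, asked to fill a masked region, reproduces a visible target mark. The ToyInpainter, the mask coordinates and all hyperparameters below are made-up stand-ins for a real pretrained inpainter and real settings.

```python
# Minimal sketch of the markpainting optimisation (illustrative, not our
# released code). The owner perturbs their image so that a differentiable
# inpainter, asked to fill the masked region, reproduces a chosen target mark.
import torch
import torch.nn as nn

class ToyInpainter(nn.Module):
    """Stand-in for a real pretrained inpainting network (assumption)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, mask):
        # The inpainter sees the image with the masked region zeroed out,
        # plus the mask itself as a fourth channel.
        return self.net(torch.cat([image * (1 - mask), mask], dim=1))

def markpaint(image, mask, target, inpainter, eps=8 / 255, steps=200, lr=1e-2):
    """Find a small perturbation (||delta||_inf <= eps) so that inpainting
    the masked region yields the owner's target mark."""
    for p in inpainter.parameters():
        p.requires_grad_(False)                 # the inpainter itself is not trained
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        out = inpainter((image + delta).clamp(0, 1), mask)
        # Push the inpainted pixels inside the mask towards the target mark.
        loss = ((out - target) * mask).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)             # keep the change inconspicuous
    return (image + delta.detach()).clamp(0, 1)

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)              # the owner's image
    msk = torch.zeros(1, 1, 64, 64)
    msk[..., 20:44, 20:44] = 1                  # region an editor might erase
    tgt = torch.ones(1, 3, 64, 64)              # visible mark to force into the fill
    marked = markpaint(img, msk, tgt, ToyInpainter())
```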

One application is tamper-resistant marks. For example, a photo agency that makes stock photos available on its website with copyright watermarks can markpaint them in such a way that any attempt to remove a watermark with common editing software will fail; the copyright mark will be markpainted right back. So watermarks can be made a lot more robust.

In the fight against fake news, markpainting news photos would mean that anyone trying to manipulate them would risk visible artefacts. So bad actors would have to check and retouch photos manually, rather than trying to use inpainting tools to automate forgery at scale.

This paper has been accepted at ICML.

Patient confidentiality in remote consultations

During the lockdown last year, I was asked by the International Psychoanalytic Association (IPA) to help them update their guidance on remote consultations. I spoke to a range of GPs, surgeons, psychologists and psychoanalysts about what they’d learned during the first lockdown about working over the phone, or over Skype or Zoom. The IPA has now published my report, on a web page that also has their guidance to members both before and after the exercise.

Before the pandemic, remote consultation did happen, but not all therapists offered it; and confidentiality concerns tended to focus on technical security measures such as whether the call was encrypted end-to-end. After everyone was forced online in March and April 2020, clinicians learned rapidly to focus on the endpoints. Patients often have problems finding a private space to talk; there may be a family member in earshot, whether by accident, or because they’re cooped up in a tiny apartment, or because they have a controlling partner or parent. A clinician may return a patient’s call and catch them in a supermarket queue. And the clinic too can be interrupted, if the clinician is practising from home.

Technical endpoint compromise is occasionally an issue; a controlling family member could inspect a patient’s device and discover a therapeutic relationship that had not been disclosed. By far the worst endpoint compromise that happened during the study period was when the Vastaamo chain of clinics in Finland was hit by ransomware; 45,000 patients’ records were stolen, and some were put online by extortionists demanding bitcoin payments. (And now we face an even larger-scale issue in the UK as the government plans to hoover up all our GP records for sale to drug companies unless we opt out by June 25; see here for how to do that.)

Such horrors aside, the core problem is to establish a therapeutic space where both patient and clinician can interact effectively, which means being able to concentrate and also to relax. There’s more to this than just being comfortable trusting the endpoint environments, the devices, the communications medium and any record-keeping mechanism. Interaction matters too. Many clinician communities discovered independently that the plain old telephone system often works better than new-fangled stuff such as Skype and Zoom. Video calls add maybe half a second of latency for buffering, which destroys conversational turn-taking. A further advantage of the phone is that you’re not staring at someone’s face at an unnatural distance. You can walk around the room, or even walk around the park.

Since doing this work I’ve started to avoid Zoom and Teams in favour of phone calls when I can, and to use end-to-end encrypted voice calls on WhatsApp or Signal where call costs or client confidentiality make it sensible.

Robots, manners and stress

Humans and other animals have evolved to be aware of whether we’re under threat. When we’re on safe territory with family and friends we relax, but when we sense that a rival or a predator might be nearby, our fight-or-flight response kicks in. Situational awareness is vital, as it’s just too stressful to be alert all the time.

We’ve started to realise that this is likely to be just as important in many machine-learning applications. Take as an example machine vision in a driver assistance system whose goals are automatic lane keeping and automatic emergency braking. Such systems use deep neural networks, as they perform way better than the alternatives; but they can be easily fooled by adversarial examples. Should we worry? Sure, a bad person might cause a car crash by projecting a misleading image on a motorway bridge – but they could just as easily steal some traffic cones from the road works. Nobody sits up at night worrying about that. Yet the car industry does actually detune vision systems for fear of deceptive attacks!

We therefore started a thread of research aimed at helping machine-learning systems detect whether they’re under attack. Our first idea was the Taboo Trap. You raise your kids to observe social taboos – to behave well and speak properly – and yet once you send them to school they suddenly know words that would make your granny blush. The taboo violation shows they’ve been exposed to ‘adversarial inputs’, as an ML engineer would call them. So we worked out how to train a neural network to avoid certain taboo values, both of outputs (forbidden utterances) and intermediate activations (forbidden thoughts). The taboos can be changed every time you retrain the network, giving the equivalent of a cryptographic key. Thus even though adversarial samples will always exist, you can make them harder to find; an attacker can’t just find one that works against one model of car and use it against every other model. You can take a view, based on risk, of how many different keys you need.
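
Here is a minimal sketch of how such a taboo on intermediate activations might be enforced; it illustrates the idea rather than reproducing our Taboo Trap implementation, and the architecture, secret threshold and penalty weight are all invented for the example.

```python
# Minimal sketch of a taboo on hidden activations (illustrative, not the
# Taboo Trap implementation): training penalises activations above a secret
# threshold, and at run time any activation above it raises an alarm.
import torch
import torch.nn as nn

class TabooNet(nn.Module):
    def __init__(self, taboo_threshold=3.0):    # the threshold acts like a key
        super().__init__()
        self.taboo = taboo_threshold
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
        self.head = nn.Linear(128, 10)
        self._acts = None

    def forward(self, x):
        self._acts = self.body(x)               # remember hidden activations
        return self.head(self._acts)

    def taboo_penalty(self):
        # Penalise any hidden activation that exceeds the secret threshold.
        return torch.relu(self._acts - self.taboo).pow(2).mean()

    def alarm(self):
        # After a forward pass: did any activation break the taboo?
        return bool((self._acts > self.taboo).any())

model = TabooNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))   # toy data
for _ in range(10):                                             # toy training loop
    loss = nn.functional.cross_entropy(model(x), y) + model.taboo_penalty()
    opt.zero_grad(); loss.backward(); opt.step()

_ = model(x)                                    # deployment-time check
print("possible adversarial input" if model.alarm() else "looks clean")
```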

We then showed how you can also attack the availability of neural networks using sponge examples – inputs designed to soak up as much energy, and waste as much time, as possible. An alarm can be simpler to build in this case: just monitor how long your classifier takes to run.
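
A sketch of how simple such an alarm can be; the wrapper, the factor-of-five threshold and the warm-up length below are illustrative choices, not anything from the paper.

```python
# Sketch of a latency alarm against sponge examples (illustrative values):
# time every inference call and flag inputs that take far longer than usual.
import time
import statistics

class LatencyAlarm:
    def __init__(self, model_fn, factor=5.0, warmup=50):
        self.model_fn = model_fn          # any callable: input -> prediction
        self.factor = factor              # how many times the median is "too slow"
        self.warmup = warmup              # calls to observe before alarming
        self.history = []

    def __call__(self, x):
        start = time.perf_counter()
        out = self.model_fn(x)
        elapsed = time.perf_counter() - start
        self.history.append(elapsed)
        if len(self.history) > self.warmup:
            typical = statistics.median(self.history[:-1])
            if elapsed > self.factor * typical:
                print(f"alarm: inference took {elapsed:.4f}s vs median {typical:.4f}s")
        return out

guarded = LatencyAlarm(lambda x: sum(x))  # stand-in for a real classifier
for _ in range(100):
    guarded([1, 2, 3])
```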

Are there broader lessons? We suspect so. As robots develop situational awareness, like humans, and react to real or potential attacks by falling back to a more cautious mode of operation, a hostile environment will cause the equivalent of stress. Sometimes this will be deliberate; one can imagine constant low-level engagement between drones at tense national borders, just as countries currently probe each other’s air defences. But much of the time it may well be a by-product of poor automation design coupled with companies hustling aggressively for consumers’ attention.

This suggests a missing factor in machine-learning research: manners. We’ve evolved manners to signal to others that our intent is not hostile, and to negotiate the many little transactions that in a hostile environment might lead to a tussle for dominance. Yet these are hard for robots. Food-delivery robots can become unpopular for obstructing and harassing other pavement users; and one of the show-stoppers for automated driving is the difficulty that self-driving cars have in crossing traffic, or otherwise negotiating precedence with other road users. And even in the military, manners have a role – from the chivalry codes of medieval knights to the more modern protocols whereby warships and warplanes warn other craft before opening fire. If we let loose swarms of killer drones with no manners, conflict will be more likely.

Our paper Situational Awareness and Machine Learning – Robots, Manners and Stress was invited as a keynote for two co-located events: IEEE CogSIMA and the NATO STO SCI-341 Research Symposium on Situation awareness of Swarms and Autonomous systems. We got so many conflicting demands from the IEEE that we gave up on making a video of the talk for them, and our paper was pulled from their proceedings. However we decided to put the paper online for the benefit of the NATO folks, who were blameless in this matter.

COVID-19 test provider websites and Cybersecurity: COVID briefing #22

This week’s COVID briefing paper (COVIDbriefing-22.pdf) resumes the Cybercrime Centre’s COVID briefing series, which began in July 2020 with the aim of sharing short, ongoing updates on the impacts of the pandemic on cybercrime.

The reason for restarting the series is a recent personal experience of navigating the government’s requirements on COVID-19 testing for international travel. I observed great variation in the quality of website design and could not help but put on my academic hat to report on what I found.

The quality of some websites is so poor that it is hard to distinguish them from fraudulent sites – that is, they have many of the features and characteristics that consumers have been warned to watch out for. Compounded with the requirement to provide personally identifiable information, there is a real risk that fraudulent sites will indeed spring up, and it will be unsurprising if consumers are fooled.

The government needs to set out minimum standards for the websites of firms that they approve to provide COVID-19 testing — especially with the imminent growth in demand that will come as the UK’s travel rules are eased.

Cybercrime is (still) (often) boring

Depictions of cybercrime often revolve around the figure of the lone ‘hacker’, a skilled artisan who builds their own tools and has a deep mastery of technical systems. However, much of the work involved is now in fact more akin to a deviant customer service or maintenance job. This means that exit from cybercrime communities is less often via the justice system, and far more likely to be a simple case of burnout.


Data ordering attacks

Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order.

So what happens if the bad guys can cause the order to be not random? You guessed it – all bets are off. Suppose for example a company or a country wanted to have a credit-scoring system that’s secretly sexist, but still be able to pretend that its training was actually fair. Well, they could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set – then let initialisation bias do the rest of the work.

Does this generalise? Indeed it does. Previously, people had assumed that in order to poison a model or introduce backdoors, you needed to add adversarial samples to the training data. Our latest paper shows that’s not necessary at all. If an adversary can manipulate the order in which batches of training data are presented to the model, they can undermine both its integrity (by poisoning it) and its availability (by causing training to be less effective, or take longer). This is quite general across models that use stochastic gradient descent.
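
To see how little the attacker has to touch, the sketch below contrasts an honest batch ordering with an adversarial one that merely front-loads a chosen subset; it is an illustration rather than the paper’s method, and the credit-scoring predicate in the comment is hypothetical.

```python
# Illustrative sketch (not the paper's method): the attacker never edits the
# data, only the order in which batches reach the training loop.
import random

def honest_order(dataset, batch_size):
    data = list(dataset)
    random.shuffle(data)                  # what SGD assumes: a genuinely random order
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

def adversarial_order(dataset, batch_size, prefers):
    # Same examples, same batch size; only the sequence changes, so the
    # preferred subset dominates the earliest (most influential) updates.
    data = sorted(dataset, key=lambda ex: 0 if prefers(ex) else 1)
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

# Hypothetical use for the credit-scoring example above: front-load rich men
# and poor women so the first gradient steps correlate gender with income.
# batches = adversarial_order(credit_data, 32,
#                             prefers=lambda ex: (ex.male and ex.rich) or
#                                                (not ex.male and not ex.rich))
```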

This work helps remind us that computer systems with DNN components are still computer systems, and vulnerable to a wide range of well-known attacks. A lesson that cryptographers have learned repeatedly in the past is that if you rely on random numbers, they had better actually be random (remember preplay attacks) and you’d better not let an adversary anywhere near the pipeline that generates them (remember injection attacks). It’s time for the machine-learning community to carefully examine their assumptions about randomness.

Infrastructure – the Good, the Bad and the Ugly

Infrastructure used to be regulated and boring; the phones just worked and water just came out of the tap. Software has changed all that, and the systems our society relies on are ever more complex and contested. We have seen Twitter silencing the US president, Amazon switching off Parler and the police closing down mobile phone networks used by crooks. The EU wants to force chat apps to include porn filters, India wants them to tell the government who messaged whom and when, and the US Department of Justice has launched antitrust cases against Google and Facebook.

Infrastructure – the Good, the Bad and the Ugly analyses the security economics of platforms and services. The existence of platforms such as the Internet and cloud services enabled startups like YouTube and Instagram to soar to huge valuations almost overnight, with only a handful of staff. But criminals also build infrastructure, from botnets to malware-as-a-service. There’s also dual-use infrastructure, from Tor to bitcoins, with entangled legitimate and criminal applications. So crime can scale too. And even “respectable” infrastructure has disruptive uses. Social media enabled both Barack Obama and Donald Trump to outflank the political establishment and win power; they have also been used to foment communal violence in Asia. How are we to make sense of all this?

I argue that this is not simply a matter for antitrust lawyers, but that computer scientists also have some insights to offer, and the interaction between technical and social factors is critical. I suggest a number of principles to guide analysis. First, what actors or technical systems have the power to exclude? Such control points tend to be at least partially social, as social structures like networks of friends and followers have more inertia. Even where control points exist, enforcement often fails because defenders are organised in the wrong institutions, or otherwise fail to have the right incentives; many defenders, from payment systems to abuse teams, focus on process rather than outcomes.

There are implications for policy. The agencies often ask for back doors into systems, but these help intelligence more than interdiction. To really push back on crime and abuse, we will need institutional reform of regulators and other defenders. We may also want to complement our current law-enforcement strategy of decapitation – taking down key pieces of criminal infrastructure such as botnets and underground markets – with pressure on maintainability. It may make a real difference if we can push up offenders’ transaction costs, as online criminal enterprises rely more on agility than on long-lived, critical, redundant platforms.

This was a Dertouzos Distinguished Lecture at MIT in March 2021.

Three Paper Thursday: Subverting Neural Networks via Adversarial Reprogramming

This is a guest post by Alex Shepherd.

Five years after Szegedy et al. demonstrated that neural networks can be fooled by crafted inputs containing adversarial perturbations, Elsayed et al. introduced adversarial reprogramming as a novel attack class for adversarial machine learning. Their findings showed that neural networks can be reprogrammed to perform tasks outside their original scope via crafted adversarial inputs, opening a new line of inquiry for the fields of AI and cybersecurity.

Their discovery raised important questions regarding the topic of trustworthy AI, such as what the unintended limits of functionality are in machine learning models and whether the complexity of their architectures can be advantageous to an attacker. For this Three Paper Thursday, we explore the three most eminent papers concerning this emerging threat in the field of adversarial machine learning.

Adversarial Reprogramming of Neural Networks, Gamaleldin F. Elsayed, Ian Goodfellow, and Jascha Sohl-Dickstein, International Conference on Learning Representations, 2018.

In their seminal paper, Elsayed et al. demonstrated their proof-of-concept for adversarial reprogramming by successfully repurposing six pre-trained ImageNet classifiers to perform three alternate tasks via crafted inputs containing adversarial programs. Their threat model considered an attacker with white-box access to the target models, whose objective was to subvert the models by repurposing them to perform tasks they were not originally intended to do. For the purposes of their hypothesis testing, adversarial tasks included counting squares and classifying MNIST digits and CIFAR-10 images.
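
As a rough illustration of that setup (not the authors’ code), the sketch below pastes a small digit into the centre of an ImageNet-sized canvas, lets a learned ‘program’ fill the border, and reuses the first ten ImageNet classes as MNIST labels; the frozen toy classifier stands in for a real pretrained model, and the window coordinates and optimiser settings are arbitrary.

```python
# Condensed sketch of adversarial reprogramming (an illustration, not the
# authors' code): a 28x28 digit sits in the centre of a 224x224 canvas, a
# learned "program" fills the border, and ImageNet classes 0-9 stand in for
# MNIST labels. The frozen toy model below replaces a pretrained classifier.
import torch
import torch.nn as nn

imagenet_model = nn.Sequential(                # placeholder "pretrained" classifier
    nn.Conv2d(3, 8, 7, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1000))
for p in imagenet_model.parameters():
    p.requires_grad_(False)                    # the target model stays fixed

program = torch.zeros(1, 3, 224, 224, requires_grad=True)   # the adversarial program
border = torch.ones(1, 3, 224, 224)
border[:, :, 98:126, 98:126] = 0               # centre window reserved for the digit

def reprogram_input(digits):                   # digits: (B, 1, 28, 28) in [0, 1]
    canvas = torch.zeros(digits.size(0), 3, 224, 224)
    canvas[:, :, 98:126, 98:126] = digits      # broadcast the digit across channels
    return canvas + torch.tanh(program) * border   # program only touches the border

opt = torch.optim.Adam([program], lr=0.05)
digits = torch.rand(16, 1, 28, 28)             # stand-in for an MNIST batch
labels = torch.randint(0, 10, (16,))           # MNIST labels mapped to classes 0..9
for _ in range(5):                             # toy optimisation loop
    logits = imagenet_model(reprogram_input(digits))[:, :10]
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
```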