Monthly Archives: June 2021

WEIS 2021 – Liveblog

I’ll be trying to liveblog the twentieth Workshop on the Economics of Information Security (WEIS), which is being held online today and tomorrow (June 28/29). The event was introduced by the co-chairs Dan Arce and Tyler Moore. 38 papers were submitted, and 15 accepted. My summaries of the sessions of accepted papers will appear as followups to this post; there will also be a panel session on the 29th, followed by a rump session for late-breaking results. (Added later: videos of the sessions are linked from the start of the followups that describe them.)

Cybercrime gangs as tech startups

In our latest paper, we propose a better way of analysing cybercrime.

Crime has been moving online, like everything else, for the past 25 years, and for the past decade or so it’s accounted for more than half of all property crimes in developed countries. Criminologists have tried to apply their traditional tools and methods to measure and understand it, yet even when these research teams include technologists, it always seems that there’s something missing. The people who phish your bank credentials are just not the same people who used to burgle your house. They have different backgrounds, different skills and different organisation.

We believe a missing factor is entrepreneurship. Cyber-crooks are running tech startups, and face the same problems as other tech entrepreneurs. There are preconditions that create the opportunity. There are barriers to entry to be overcome. There are pathways to scaling up, and bottlenecks that inhibit scaling. There are competitive factors, whether competing crooks or motivated defenders. And finally there may be saturation mechanisms that inhibit growth.

One difference from regular entrepreneurship is the lack of finance: a malware gang can’t raise VC to develop a cool new idea, or cash out by means of an IPO. They have to use their profits not just to pay themselves, but also to invest in new products and services. In effect, cybercrooks are trying to run a tech startup with the financial infrastructure of an ice-cream stall.

We have developed this framework from years of experience dealing with many types of cybercrime, and it is proving a useful way of analysing new scams, helping us spot those developments which, like ransomware, are capable of growing into a real problem.

Our paper Silicon Den: Cybercrime is Entrepreneurship will appear at WEIS on Monday.

Security engineering and machine learning

Last week I gave my first lecture in Edinburgh since becoming a professor there in February. It was also the first talk I’ve given in person to a live audience since February 2020.

My topic was the interaction between security engineering and machine learning. Many of the things that go wrong with machine-learning systems were already familiar in principle, as we’ve been using Bayesian techniques in spam filters and fraud engines for almost twenty years. Indeed, I warned about the risks of not being able to explain and justify the decisions of neural networks in the second edition of my book, back in 2008.
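To make the point concrete, here is a minimal sketch of the naive-Bayes scoring such spam filters use. The token probabilities, prior and message are made up for illustration, not taken from any real filter.

    # Minimal naive-Bayes spam scoring sketch. The token probabilities
    # below are illustrative; a real filter learns them from labelled mail.
    import math

    p_spam = {"viagra": 0.20, "meeting": 0.01, "account": 0.10}   # P(token|spam)
    p_ham  = {"viagra": 0.001, "meeting": 0.15, "account": 0.05}  # P(token|ham)

    def spam_log_odds(tokens, prior_spam=0.5):
        # Sum log-likelihood ratios over known tokens, starting from the prior
        score = math.log(prior_spam / (1 - prior_spam))
        for t in tokens:
            if t in p_spam:
                score += math.log(p_spam[t] / p_ham[t])
        return score

    msg = "please verify your account".split()
    print(spam_log_odds(msg))  # positive log-odds => flag as spam

The point is that such a scorer is transparent: each token contributes an auditable log-likelihood ratio, which is exactly the kind of justification a deep network cannot readily offer.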

However, the deep neural network (DNN) revolution since 2012 has drawn in hundreds of thousands of engineers, most of them without this background. Many fielded systems are extremely easy to break, often using tricks that have been around for years. What’s more, new attacks specific to DNNs – adversarial samples – have been found to exist for pretty well all models. They’re easy to find, and often transferable from one model to another.
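To show just how easy, here is the classic fast gradient sign method (FGSM) in PyTorch; `model` stands for any differentiable classifier, and the epsilon value is illustrative.

    # Fast gradient sign method (FGSM): a one-step adversarial attack.
    # `model` is assumed to be any differentiable PyTorch classifier;
    # epsilon bounds how visible the perturbation is.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, epsilon=0.03):
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Step each pixel in the direction that increases the loss
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0, 1).detach()

A sample crafted this way against one model will quite often fool a different model trained on similar data – the transferability mentioned above.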

I described a number of new attacks and defences that we’ve discovered in the past three years, including the Taboo Trap, sponge attacks, data ordering attacks and markpainting. I argued that we will usually have to think of defences at the system level, rather than at the level of individual components, and that situational awareness is likely to play an important role.

Here now is the video of my talk.

Hiring for iCrime

We are hiring two Research Assistants/Associates to work on the ERC-funded Interdisciplinary Cybercrime Project (iCrime). We are looking to appoint one computer scientist and one social scientist to work in an interdisciplinary team reporting to Dr Alice Hutchings.

iCrime incorporates expertise from criminology and computer science to research cybercrime offenders, the crimes they commit, the places where those crimes happen (such as online black markets), and the responses to them. We will map out the pathways of cybercrime offenders and the steps and skills required to successfully undertake complex forms of cybercrime. We will analyse the social dynamics and economies surrounding cybercrime markets and forums. We will use our findings to inform crime prevention initiatives and use experimental designs to evaluate their effects.

Within iCrime, we will develop tools to identify and measure criminal infrastructure at scale. We will use and develop unique datasets and design novel methodologies. This is particularly important as cybercrime changes dynamically. Overall, our approach will be evaluative, critical, and data driven.

If you’re a computer scientist, please follow the link at: https://www.jobs.cam.ac.uk/job/30100/

If you’re a social scientist, please follow the link at: https://www.jobs.cam.ac.uk/job/30099/

Please read the formal advertisements for the details about exactly who and what we’re looking for and how to apply — and please pay special attention to our request for a covering letter!

10/06/21 Edited to add new links

A new way to detect ‘deepfake’ picture editing

Common graphics software now offers powerful tools for inpainting – using machine-learning models to reconstruct missing pieces of an image. They are widely used for picture editing and retouching, but like many sophisticated tools they can also be abused. They can remove someone from a picture of a crime scene, or remove a watermark from a stock photo. Could we make such abuses more difficult?
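For readers unfamiliar with the mechanics, here is what inpainting looks like as an API call. This sketch uses OpenCV’s classical (non-ML) inpainter rather than the learned models the post discusses, but the interface is the same: an image plus a mask marking the pixels to reconstruct. The file names and mask coordinates are hypothetical.

    # Inpainting sketch: reconstruct the masked region of an image.
    # Learned inpainters take the same inputs: an image and a mask.
    import cv2
    import numpy as np

    img = cv2.imread("photo.png")             # hypothetical input file
    mask = np.zeros(img.shape[:2], np.uint8)
    mask[100:150, 200:300] = 255              # region to fill, e.g. a watermark

    restored = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    cv2.imwrite("restored.png", restored)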

We introduce Markpainting, which uses adversarial machine-learning techniques to fool the inpainter into making its edits evident to the naked eye. An image owner can modify their image in subtle ways which are not themselves very visible, but will sabotage any attempt to inpaint it by adding visible information determined in advance by the markpainter.
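The paper has the details; purely to illustrate the shape of the idea, here is a PGD-style optimisation loop – not the paper’s actual code – that perturbs an image so that a hypothetical differentiable inpainting model, `inpaint_net`, fills the masked hole with a chosen `target` mark.

    # Illustrative markpainting-style sketch (NOT the paper's code).
    # Find a small perturbation delta such that when `inpaint_net`
    # (a hypothetical differentiable inpainter) fills the hole given
    # by `mask` (1 inside the hole), the result resembles `target`.
    import torch

    def markpaint(inpaint_net, img, mask, target, eps=0.03, steps=100, lr=0.01):
        delta = torch.zeros_like(img, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            filled = inpaint_net(img + delta, mask)
            # Push the inpainted pixels inside the hole towards the target mark
            loss = ((filled - target) * mask).pow(2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Keep the perturbation small so the marked image looks unchanged
            delta.data.clamp_(-eps, eps)
        return (img + delta).clamp(0, 1).detach()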

One application is tamper-resistant marks. For example, a photo agency that makes stock photos available on its website with copyright watermarks can markpaint them in such a way that anyone using common editing software to remove a watermark will fail; the copyright mark will be markpainted right back. So watermarks can be made a lot more robust.

In the fight against fake news, markpainting news photos would mean that anyone trying to manipulate them would risk visible artefacts. So bad actors would have to check and retouch photos manually, rather than trying to use inpainting tools to automate forgery at scale.

This paper has been accepted at ICML.