Category Archives: Security economics

Social-science angles of security

WEIS 2021 – Liveblog

I’ll be trying to liveblog the twentieth Workshop on the Economics of Information Security (WEIS), which is being held online today and tomorrow (June 28/29). The event was introduced by the co-chairs Dann Arce and Tyler Moore. 38 papers were submitted, and 15 accepted. My summaries of the sessions of accepted papers will appear as followups to this post; there will also be a panel session on the 29th, followed by a rump session for late-breaking results. Videos of the sessions will be linked here in a few days.

Cybercrime gangs as tech startups

In our latest paper, we propose a better way of analysing cybercrime.

Crime has been moving online, like everything else, for the past 25 years, and for the past decade or so it’s accounted for more than half of all property crimes in developed countries. Criminologists have tried to apply their traditional tools and methods to measure and understand it, yet even when these research teams include technologists, it always seems that there’s something missing. The people who phish your bank credentials are just not the same people who used to burgle your house. They have different backgrounds, different skills and different organisation.

We believe a missing factor is entrepreneurship. Cyber-crooks are running tech startups, and face the same problems as other tech entrepreneurs. There are preconditions that create the opportunity. There are barriers to entry to be overcome. There are pathways to scaling up, and bottlenecks that inhibit scaling. There are competitive factors, whether competing crooks or motivated defenders. And finally there may be saturation mechanisms that inhibit growth.

One difference with regular entrepreneurship is the lack of finance: a malware gang can’t raise VC to develop a cool new idea, or cash out by means of an IPO. They have to use their profits not just to pay themselves, but also to invest in new products and services. In effect, cybercrooks are trying to run a tech startup with the financial infrastructure of an ice-cream stall.

We have developed this framework from years of experience dealing with many types of cybercrime, and it is proving a useful way of analysing new scams, so we can spot those developments which, like ransomware, are capable of growing into a real problem.

Our paper Silicon Den: Cybercrime is Entrepreneurship will appear at WEIS on Monday.

Security engineering and machine learning

Last week I gave my first lecture in Edinburgh since becoming a professor there in February. It was also the first talk I’ve given in person to a live audience since February 2020.

My topic was the interaction between security engineering and machine learning. Many of the things that go wrong with machine-learning systems were already familiar in principle, as we’ve been using Bayesian techniques in spam filters and fraud engines for almost twenty years. Indeed, I warned about the risks of not being able to explain and justify the decisions of neural networks in the second edition of my book, back in 2008.

However the deep neural network (DNN) revolution since 2012 has drawn in hundreds of thousands of engineers, most of them without this background. Many fielded systems are extremely easy to break, often using tricks that have been around for years. What’s more, new attacks specific to DNNs – adversarial samples – have been found to exist for pretty well all models. They’re easy to find, and often transferable from one model to another.
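To make the adversarial-sample idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM) on a toy linear classifier. The weights and input are made up for illustration; real attacks apply the same gradient-sign step to trained DNNs, where the perturbation can be too small for a human to notice.

```python
import numpy as np

# Toy linear model standing in for a classifier: predict 1 if w.x + b > 0.
# (Hypothetical weights for illustration; real attacks target trained DNNs.)
w = np.array([0.8, -0.5, 0.3])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm(x, eps):
    # FGSM: take a bounded step against the gradient of the class score.
    # For a linear model that gradient is just the weight vector itself.
    grad = w if predict(x) == 1 else -w
    return x - eps * np.sign(grad)

x = np.array([0.6, 0.1, 0.2])      # clean input, classified as 1
x_adv = fgsm(x, eps=0.4)           # small L-infinity perturbation
print(predict(x), predict(x_adv))  # the predicted label flips
```

The same perturbation direction often fools other models trained on similar data, which is why adversarial samples transfer so readily.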

I describe a number of new attacks and defences that we’ve discovered in the past three years, including the Taboo Trap, sponge attacks, data ordering attacks and markpainting. I argue that we will usually have to think of defences at the system level, rather than at the level of individual components; and that situational awareness is likely to play an important role.

Here now is the video of my talk.

Infrastructure – the Good, the Bad and the Ugly

Infrastructure used to be regulated and boring; the phones just worked and water just came out of the tap. Software has changed all that, and the systems our society relies on are ever more complex and contested. We have seen Twitter silencing the US president, Amazon switching off Parler and the police closing down mobile phone networks used by crooks. The EU wants to force chat apps to include porn filters, India wants them to tell the government who messaged whom and when, and the US Department of Justice has launched antitrust cases against Google and Facebook.

Infrastructure – the Good, the Bad and the Ugly analyses the security economics of platforms and services. The existence of platforms such as the Internet and cloud services enabled startups like YouTube and Instagram to soar to huge valuations almost overnight, with only a handful of staff. But criminals also build infrastructure, from botnets to malware-as-a-service. There’s also dual-use infrastructure, from Tor to bitcoins, with entangled legitimate and criminal applications. So crime can scale too. And even “respectable” infrastructure has disruptive uses. Social media enabled both Barack Obama and Donald Trump to outflank the political establishment and win power; they have also been used to foment communal violence in Asia. How are we to make sense of all this?

I argue that this is not simply a matter for antitrust lawyers, but that computer scientists also have some insights to offer, and the interaction between technical and social factors is critical. I suggest a number of principles to guide analysis. First, what actors or technical systems have the power to exclude? Such control points tend to be at least partially social, as social structures like networks of friends and followers have more inertia. Even where control points exist, enforcement often fails because defenders are organised in the wrong institutions, or otherwise fail to have the right incentives; many defenders, from payment systems to abuse teams, focus on process rather than outcomes.

There are implications for policy. The agencies often ask for back doors into systems, but these help intelligence more than interdiction. To really push back on crime and abuse, we will need institutional reform of regulators and other defenders. We may also want to complement our current law-enforcement strategy of decapitation – taking down key pieces of criminal infrastructure such as botnets and underground markets – with pressure on maintainability. It may make a real difference if we can push up offenders’ transaction costs, as online criminal enterprises rely more on agility than on long-lived, critical, redundant platforms.

This was a Dertouzos Distinguished Lecture at MIT in March 2021.

WEIS 2020 – Liveblog

I’ll be trying to liveblog the nineteenth Workshop on the Economics of Information Security (WEIS), which is being held online today and tomorrow (December 14/15) and streamed live on the CEPS channel on YouTube. The event was introduced by the general chair, Lorenzo Pupillo of CEPS, and the program chair Nicolas Christin of CMU. My summaries of the sessions will appear as followups to this post, and videos will be linked here in a few days.

SHB Seminar

The SHB seminar on November 5th was kicked off by Tom Holt, who’s discovered a robust underground market in identity documents that are counterfeit or fraudulently obtained. He’s been scraping both websites and darkweb sites for data and analysing how people go about finding, procuring and using such credentials. Most vendors were single-person operators, although many operated within affiliate programs; many transactions involved cryptocurrency; and many involved generating PDFs that people can print at home and that are good enough to let young people buy alcohol. Curiously, open-web products seem to cost twice as much as dark-web products.

Next was Jack Hughes, who has been studying the contract system introduced by hackforums in 2018 and made mandatory the following year. This enabled him to analyse crime forum behaviour before and during the covid-19 era. How do new users become active, and build up trust? How does it evolve? He collected 200,000 transactions and analysed them. The contract mandate stifled growth quickly, leading to a first peak; covid caused a second. The market was already centralised, and became more so with the pandemic. However contracts are getting done faster, and the main activity is currency exchange: it seems to be working as a cash-out market.

Anita Lavorgna has been studying the discourse of groups who oppose public mask mandates. Like the antivaxx movement, this can draw in fringe groups and become a public-health issue. She collected 23,654 tweets from February to June 2020. There’s a diverse range of voices from different places on the political spectrum, but with a transversal theme of freedom from government interference. Groups seek strength in numbers and seek to ally into movements, leading to the mask becoming a symbol of political identity construction. Anita found very little interaction between the different groups: only 144 messages in total.

Simon Parkin has been working on how we can push back on bad behaviours online while they are linked with good behaviours that we wish to promote. Precision is hard as many of the desirable behaviours are not explicitly recognised as such, and as many behaviours arise as a combination of personal incentives and context. The best way forward is around usability engineering – making the desired behaviours easier.

Bruce Schneier was the final initial speaker, and his topic was covid apps. The initial rush of apps that arrived from March through June had known issues around false positives and false negatives. We’ve also used all sorts of other tools, such as analysis of Google maps to measure lockdown compliance. The third thing is the idea of an immunity passport, saying you’ve had the disease, or a vaccine; that will have the same issues as the fake IDs that Tom talked about. Finally, there’s compliance tracking, where your phone monitors you. The usual countermeasures apply: consent, minimisation, infosec, etc., though the trade-offs might be different for a while. A further bunch of issues concern home working and the larger attack surface that many firms have as a result of unfamiliar tools, less resistance to being told to do things, and so on.

The discussion started on fake ID; Tom hasn’t yet done test purchases, and might look at fraudulently obtained documents in future, as opposed to completely counterfeit ones. Is hackforums helping drug gangs turn paper into coin? This is not clear; the activity is more around cashing out cybercrime than street crime. Anita then discussed how to analyse corpora of tweets, and the implications for policy in real life. Things are made more difficult by the fact that discussions drift off into other platforms we don’t monitor.

Another topic was the interaction of fashion: where some people wear masks or not as a political statement, many more buy masks that get across a more targeted statement. Fashion is really powerful, and tends to be overlooked by people in our field. Usability research perhaps focuses too much on utilitarian economics, and is a bit of a blunt instrument. Another example related to covid is the growing push for monitoring software on employees’ home computers. Unfortunately Uber and Lyft bought a referendum result that enables them not to treat their staff in California as employees, so the regulation of working hours at home will probably fall to the EU. Can we perhaps make some input into what that should look like?

Another issue with the pandemic is the effect on information-security markets: why should people buy corporate firewalls when their staff are all over the place? And to what extent will some of these changes be permanent, if people work from home more? A further thread of discussion was how the privacy properties of covid apps make it hard for people to make risk-management decisions. The apps appear ineffective because they were designed to do privacy rather than public health, in various subtle ways; giving people low-grade warnings which do not require any action appears to be an attempt to raise public awareness, like mask mandates, rather than an effective attempt to get exposed individuals to isolate.
Apps that check people into venues have their own issues and appear to be largely security theatre. Security theatre comes into its own where the perceived risk is much greater than the actual risk; covid is the opposite. What can be done in this case? Targeted warnings? Humour? What might happen when fatigue sets in? People will compromise compliance to make their lives bearable. That can be managed to some extent in institutions like universities, but in society it will be harder. We ended up with the suggestion that the next SHB seminar should be in February, which should be the low point; after that we can look forward to things getting better, and hopefully to a meeting in person in Cambridge on June 3-4 2021.

Three Paper Thursday: Broken Hearts and Empty Wallets

This is a guest post by Cassandra Cross.

Romance fraud (also known as romance scams or sweetheart swindles) affects millions of individuals globally each year. In 2019, the Internet Crime Complaint Centre (IC3) in the USA received reports of over US$475 million lost to romance fraud. Similarly, in 2018 victims in Australia reported losing over AUD$80 million, and British citizens reported over £50 million lost. Given the known under-reporting of fraud overall, and of online fraud more specifically, these figures likely represent only a fraction of the actual losses incurred.

Romance fraud occurs when an offender uses the guise of a legitimate relationship to gain a financial advantage from their victim. It differs from a bad relationship, in that from the outset, the offender is using lies and deception to obtain monetary rewards from their partner. Romance fraud capitalises on the fact that a potential victim is looking to establish a relationship and exhibits an express desire to connect with someone. Offenders use this to initiate a connection and start to build strong levels of trust and rapport.

As with all fraud, victims experience a wide range of impacts in the aftermath of victimisation. While many believe these to be only financial, in reality they extend to a decline in both physical and emotional wellbeing, relationship breakdown, unemployment, homelessness, and in extreme cases, suicide. In the case of romance fraud, there is the additional trauma of grieving both the loss of the relationship and of any funds transferred. For many victims, the loss of the relationship can be harder to cope with than the monetary aspect, with victims experiencing deep feelings of betrayal and violation at the hands of their offender.

Sadly, there is also a large amount of victim blaming that exists with both romance fraud and fraud in general. Fraud is unique in that victims actively participate in the offence, through the transfer of money, albeit under false pretences. As a result, they are seen to be culpable for what occurs and are often blamed for their own circumstances. The stereotype of fraud victims as greedy, gullible and naïve persists, and presents as a barrier to disclosure as well as inhibiting their ability to report the incident and access any support services.

Given the magnitude of losses and impacts on romance fraud victims, there is an emerging body of scholarship that seeks to better understand the ways in which offenders are able to successfully target victims, the ways in which they are able to perpetrate their offences, and the impacts of victimisation on the individuals themselves. The following three articles each explore different aspects of romance fraud, to gain a more holistic understanding of this crime type.

Continue reading Three Paper Thursday: Broken Hearts and Empty Wallets

Security and Human Behaviour 2020

I’ll be liveblogging the workshop on security and human behaviour, which is online this year. My liveblogs will appear as followups to this post. This year my program co-chair is Alice Hutchings and we have invited a number of eminent criminologists to join us. Edited to add: here are the videos of the sessions.

Cybercrime is (often) boring

Much has been made in the cybersecurity literature of the transition of cybercrime to a service-based economy, with specialised services providing denial-of-service attacks, cash-out services, escrow, forum administration, botnet management, or ransomware configuration to less-skilled users. Despite this acknowledgement of the ‘industrialisation’ of much of the cybercrime economy, the picture of cybercrime painted by law enforcement and media reports is often one of ‘sophisticated’ attacks, highly skilled offenders, and massive payouts. In fact, as we argue in a recent paper accepted to the Workshop on the Economics of Information Security this year (and covered in KrebsOnSecurity last week), cybercrime-as-a-service relies on a great deal of tedious, low-income, and low-skilled manual administrative work.

Continue reading Cybercrime is (often) boring