Category Archives: Cybercrime

Cambridge Cybercrime Conference 2025 – Liveblog

The Cambridge Cybercrime Centre’s eighth one-day conference on cybercrime was held on Monday, 23rd June 2025, which marked 10 years of the Centre.

Similar to previous “liveblog” coverage of conferences and workshops on Light Blue Touchpaper, here is a “liveblog”-style overview of the talks at this year’s conference.

Sunoo Park — Legal Risks of Security Research

Sunoo discussed the legal risks security researchers face, from restrictive TOS clauses to adversarial scrutiny, noting that security research can be difficult to distinguish from malicious hacking and that researchers need to understand the risks. Sunoo highlighted particular US laws that create risk for researchers, sharing a guide they wrote highlighting these risks. The project grew from colleagues, as well as clients, receiving legal threats; it aims to enable informed decisions on how to seek advice, and also to nudge public discussion on law reform.

The CFAA was passed a long time ago, around the time of the WarGames film, and computer crime has changed a lot since then. The statute defines a computer to be pretty much any computer, and covers access that is unauthorized or exceeds authorized access. One early case was United States v. McDanel, in which the defendant found a bug in customer software and reported it to the customers; the prosecution’s theory was that informing customers of the security flaw caused harm because of the cost of fixing it, but the government later requested the case be overturned. More recently, there was a CFAA case involving a police database being accessed in exchange for a bribe.

Another law is the DMCA, which states that “no person shall circumvent a technological measure that effectively controls access to a work”, and this may apply to captchas, anti-bot measures, etc.

Sunoo is starting a new study looking at researchers’ lived experiences of legal risk under US/UK law. It can be hard for researchers to talk openly about these experiences, which leaves little evidence to counter such laws, and much of what does exist is anecdotal. Sunoo would like to hear from US/UK researchers about their experiences of legal risk in their research.

Alice Hutchings — Ten years of the Cambridge Cybercrime Centre

The Centre was established in 2015, to collect and share cybercrime data internationally. They collect lots of data at scale: forums, chat channels, extremist platforms, DDoS attacks, modded apps, defacements, spam, and more. They share datasets with academics, not for commercial purposes, through agreements to set out ethical and legal constraints. The aim was to help researchers with collecting data at scale, and overcome challenges with working on large datasets. They don’t just collect data, but they do their own research too, around crime types, offenders, places, and responses.

Session 1: Trust, Identity, and Communication in Cybercriminal Ecosystems

Roy Ricaldi — From trust to trade: Uncovering the trust-building mechanisms supporting cybercrime markets on Telegram

Roy is researching trust and cybercrime, and how trust is built on Telegram. Cybercrime markets rely on trust to function, and there is existing literature on this topic for forums. Forums have structured systems, such as reputation and escrow, whereas Telegram is more ephemeral, yet it is still used for trading. Roy asks how trust is established in this volatile, high-risk environment; economic theory states that without trust, markets can fail.

Roy starts by exploring the market segments found, looking at trust signals, and how frequently users are exposed to these trust systems. Roy notes chat channels can have significant history, and while trust signals exist, users may not easily find older ones. They built a snowballing and classification pipeline to collect over 1 million messages from 167 Telegram communities, and later developed a framework for measuring and simulating trust signals. Findings showed market segments and trust signals were highly thematic within communities. They used DeepseekV3 for classification, which detected trust signals and market segments with the highest accuracy, and found an uneven distribution of trust signals across market segments: for example, piracy content is free, so trust signals were low.
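
As a rough illustration of the snowballing step (not the authors’ actual pipeline), a breadth-first crawl over channel references might look like the sketch below; get_referenced_channels is a hypothetical helper standing in for whatever Telegram collection the project uses.

```python
from collections import deque

def snowball_channels(seed_channels, get_referenced_channels, max_channels=167):
    """Breadth-first snowball crawl: start from seed Telegram communities and
    follow references (forwards, invite links) to discover new ones.
    get_referenced_channels is a hypothetical helper returning the channels
    mentioned or forwarded from within a given community."""
    seen = set(seed_channels)
    frontier = deque(seed_channels)
    while frontier and len(seen) < max_channels:
        channel = frontier.popleft()
        for neighbour in get_referenced_channels(channel):
            if neighbour not in seen and len(seen) < max_channels:
                seen.add(neighbour)
                frontier.append(neighbour)
    return seen
```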

They find messages asking for the use of escrow, or asking others to “vouch” for sellers. Some of these communities have moderators who set rules around the types of messages allowed. After looking at the distribution, they ran a simulation to see how many signals users were exposed to, setting up profiles of market segments, communities visited, and messages read. They found 70% of users see 5 or fewer trust signals in their simulation, and all users see at least 1. Over time these do evolve, with digital infrastructure forming a larger peak. They note the importance of understanding how trust works on Telegram, to help find the markets that matter and can cause harm.
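
A minimal sketch of what such an exposure simulation could look like, assuming user profiles are given as the communities each user visits, and using a keyword list as a stand-in for the talk’s trust-signal classifier:

```python
import random

# Stand-in for the trust-signal classifier: a message counts as a trust
# signal if it mentions escrow, vouching, etc. (assumption, not the real model).
TRUST_KEYWORDS = ("escrow", "vouch", "middleman", "rep")

def is_trust_signal(message: str) -> bool:
    text = message.lower()
    return any(keyword in text for keyword in TRUST_KEYWORDS)

def simulate_exposure(users, communities, messages_per_visit=50):
    """For each simulated user, sample the communities they visit and the
    messages they read, and count how many trust signals they encounter.
    `users` maps a user to the communities they visit; `communities` maps a
    community to its list of messages."""
    exposure = {}
    for user, visited in users.items():
        seen = 0
        for community in visited:
            msgs = communities[community]
            sample = random.sample(msgs, min(messages_per_visit, len(msgs)))
            seen += sum(is_trust_signal(m) for m in sample)
        exposure[user] = seen
    low_exposure = sum(1 for n in exposure.values() if n <= 5) / len(exposure)
    return exposure, low_exposure  # fraction of users seeing 5 or fewer signals
```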

John McAlaney — Power, identity and group dynamics in hacking forums

John discussed work in progress around power structures and group dynamics in the CrimeBB dataset. He attended Defcon as a social psychologist, observing the interaction dynamics and how people see themselves within the large size of the conference.

Previous work on identity asked whether hacking forum members considered themselves to be a “hacker”, which led to discussions around the term and labelling. Other previous work looked at themes of what was discussed in forums, such as legality, honesty, skill acquisition, knowledge, and risk. Through interviews, they found people had contradictory ideas around trust. They note existing hierarchies of power within forums, and evidence of social psychological phenomena.

Within the existing research literature, John found a gap where theories had not necessarily been explored in the online forum setting. They ask whether groups form on hacking forums in the same way as on other online forums, how the structure of these groups differs, and whether the group dynamics are different.

He initially worked with a deductive approach to thematic analysis. “Themes do not emerge from thematic analysis”; rather, they are exploring what is currently discussed. He is not looking to generalise from the thematic analysis, but is looking into BERT next to see if they are missing any themes in the dataset.

He suggests the main impact will be to contribute back to the sociological literature, and also to try to improve threat detection.

Haitao Shi — Evaluating the impact of anonymity on emotional expression in drug-related discussions: a comparative study of the dark web and mainstream social media

Haitao looked at self-disclosure, emotional disclosure, and environmental influence on cybercrime forums. They ask how the different models of anonymity across chat channels and forums vary, and which different communication styles emerge. They identified drug-related channels and discussions for their analysis, and took steps to clean and check dataset quality. The project used BERTopic to embed messages for clustering, then plotted these to visually identify similar topics. To explore the topics further, Haitao used an emotion classifier to detect intent, finding high levels of disgust, anger, and anticipation in the dataset.
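
A rough approximation of the embedding-and-clustering step, using sentence-transformers and k-means in place of BERTopic’s internals (the emotion classifier would then be applied per cluster); the model name and parameters here are illustrative assumptions, not the talk’s actual configuration.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def cluster_messages(messages, n_topics=20):
    """Embed drug-related messages and group them into rough topic clusters,
    approximating the BERTopic step described in the talk."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    embeddings = model.encode(messages, show_progress_bar=False)
    labels = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit_predict(embeddings)
    return labels
```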

Session 2: Technical Threats and Exploitation Tactics

Taro Tsuchiya — Blockchain address poisoning

Taro introduces a scenario of sending rent, where the victim seems to make an error selecting a cryptocurrency address; this turns out to have been a poisoned address. Taro aims to identify address poisoning, see how prevalent it is, and measure the payoff. They identify attack attempts with an algorithm that matches transfers to similar (lookalike) addresses within a given time range.
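
A simplified sketch of that matching idea, assuming transfers are records with sender, recipient, and timestamp fields, and that a “lookalike” is an address sharing its displayed prefix and suffix with a legitimate counterparty; the paper’s actual algorithm may differ.

```python
def is_lookalike(addr_a: str, addr_b: str, prefix=4, suffix=4) -> bool:
    """Two distinct addresses are 'lookalike' if their displayed prefix and
    suffix match (wallets typically show only these characters)."""
    return (addr_a != addr_b
            and addr_a[:2 + prefix] == addr_b[:2 + prefix]  # include the '0x'
            and addr_a[-suffix:] == addr_b[-suffix:])

def find_poisoning_attempts(transfers, window_seconds=3600):
    """Flag transfers whose recipient is a lookalike of an address the same
    sender legitimately paid shortly before (a simplified version of the
    matching algorithm described in the talk)."""
    flagged = []
    by_sender = {}
    for t in sorted(transfers, key=lambda t: t["timestamp"]):
        history = by_sender.setdefault(t["sender"], [])
        for prev in history:
            recent = t["timestamp"] - prev["timestamp"] <= window_seconds
            if recent and is_lookalike(prev["recipient"], t["recipient"]):
                flagged.append((prev, t))
        history.append(t)
    return flagged
```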

They detect 270M attack transfers against 17M victims, estimating a loss of $84M USD. They found losses were much higher on Ethereum, and this lookalike attack is easily generalisable and scalable.

They bundled these into groups, considering two attempts to belong to the same attacker if they are launched in the same transaction, they use the same address to pay the transaction fees, or they use the same lookalike address. Clustering found “copying bots”, who copy other transactions for front-running. The attack groups identified are large but heterogeneous, and the attack itself is profitable for large groups; larger groups tend to win over smaller ones. Finally, they model lookalike address generation, finding one large group is using GPUs to generate these addresses.
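
Bundling attempts by shared transaction, fee-paying address, or lookalike address amounts to finding connected components; a minimal union-find sketch under those assumptions (field names such as tx_hash, fee_payer, and lookalike are hypothetical):

```python
class UnionFind:
    """Minimal disjoint-set structure for grouping attack attempts."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def group_attacks(attempts):
    """Merge attempts that share a transaction hash, a fee-paying address,
    or a lookalike address (the linking criteria described in the talk)."""
    uf = UnionFind()
    for a in attempts:
        for key in (("tx", a["tx_hash"]), ("fee", a["fee_payer"]), ("look", a["lookalike"])):
            uf.union(("attempt", a["id"]), key)
    groups = {}
    for a in attempts:
        groups.setdefault(uf.find(("attempt", a["id"])), []).append(a["id"])
    return list(groups.values())
```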

They give suggestions for mitigating these attacks: adding latency to address generation, disallowing zero-value transfers, and increasing wallet address lengths. They also want to alert users to the risk of this attack.

Marre Slikker — The human attack surface: understanding hacker techniques in exploiting human elements

Marre is looking at human factors in security, as humans are commonly the weakest link. Marre asks what hackers on underground forums discuss regarding the exploitation of human factors in cybercrime. They use CrimeBB data to analyse the topics discussed, identify the lexicon used, and provide a literature review of how these factors are conceptualised.

They create a bridge between academic human factors language (“demographics”) and hacker language (“target dumb boomers”), and use topic modelling to identify the distribution of words used in forum messages.

What were their results? A literature review found many inconsistencies in human factors research terminology. Following this, they asked cybersecurity experts about human factors, and created a list of 328 keywords to help filter the dataset. Topic modelling was then applied, but the results were quite superficial, with lots of noise and general chatter.
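
A bare-bones version of this keyword-filter-then-topic-model workflow, with a few illustrative stand-ins for the 328 expert-derived keywords and plain LDA in place of whatever topic model the project actually used:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative stand-ins for the expert-derived lexicon (assumption).
HUMAN_FACTOR_KEYWORDS = ["phishing", "social engineering", "pretext", "boomer", "victim"]

def filter_and_model(posts, n_topics=10):
    """Keep only posts mentioning a human-factors keyword, then fit a simple
    LDA topic model over them (a rough analogue of the described workflow)."""
    relevant = [p for p in posts
                if any(k in p.lower() for k in HUMAN_FACTOR_KEYWORDS)]
    vectorizer = CountVectorizer(stop_words="english", min_df=2)
    doc_term = vectorizer.fit_transform(relevant)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(doc_term)
    return lda, vectorizer, relevant
```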

Kieron Ivy Turk — Technical Tactics Targeting Tech-Abuse

Ivy discussed a project on personal item tracking devices, which have been misused for stalking, domestic abuse, and theft. Companies have developed anti-stalking features to try to mitigate these issues. They ran a study with the Assassins Guild, providing students with trackers to test the efficacy of these features. The study found nobody used the anti-stalking features, despite everyone in the study knowing there was a possibility they were being stalked. At the time of the study, the scanning apps only tended to detect a subset of tracker brands. Apple and Google have since created an RFC to try to standardise trackers and anti-stalking measures.

Ivy has also been working on IoT security to understand the associated risks. They present a HARMS model to help analyse IoT device security failings. Ivy ran a study to identify harms with IoT devices, asking participants to misuse these. They ask how do attackers discover abusive features? They found participants used and explored the UI to find features available to them. They suggest the idea of a “UI-bounded” adversary is limiting, and rather attackers are “functionality-enabled”.

Ivy asks how we can make technical improvements to IoT in future.

Session 3: Disruption and Resilience in Illicit Online Activities

Anh V. Vu — Assessing the aftermath: the effects of a global takedown against DDoS-for-hire services

Anh has been following DDoS takedowns by law enforcement. DDoS-for-hire services provide a platform for taking control of botnets to flood servers with fake traffic; little technical skill is needed, and attacks are cheap. These services publicly advertise statistics of the daily attacks they contribute to.

Law enforcement continues to take down DDoS infrastructure, focusing on domain takedowns. Visitor statistics following the takedowns recorded 20M visitors, and 34k messages were collected from DDoS support Telegram channels. They also have DDoS UDP amplification data, and collected self-reported DDoS attack data.

Domain takedowns showed that domains returned quickly: 52% returned after the first takedown, and after the second takedown all returned. Domain takedown now appears to have limited effect. Visitor statistics showed large booters operate a franchise business, offering API access to resellers.

Following the first takedown, activity and chat channel messages declined, but this had less impact in the second wave. Operators gave away free extensions to plans, and a few seemed to leave the market.

Their main takeaway is that the overall impact of the intervention is short-lived, and suppressing the supply side alone is not enough, as demand persists in the long run. He asks what can be done better for future interventions.

Dalya Manatova — Modeling organizational resilience: a network-based simulation for analyzing recovery and disruption of ransomware operations

Dalya studies the organisational dynamics and resilience of cybercrime, tracking the evolution and rebranding of ransomware operators. To carry out ransomware attacks, groups need infrastructure: selecting targets, executing attacks, negotiating ransoms, processing payments, supporting victims, and creating leak websites. They break this down further into a complex model showing the steps of a ransomware attack, and use it to model task durations, estimating how long it takes to complete an attack while the group is still learning. They then introduce infrastructure disruption and observe how the process changes, and also model the disruption of members: what happens if tasks are reassigned to others or a new person is hired?
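
A toy version of such a task-based simulation, with made-up stages and durations (assumptions, not the talk’s model), just to illustrate how disrupting one stage, or the member responsible for it, shifts end-to-end completion time:

```python
import random

# Hypothetical stages of a ransomware operation with rough mean durations (days).
STAGES = {
    "target_selection": 2.0,
    "execution": 3.0,
    "negotiation": 5.0,
    "payment_processing": 2.0,
    "leak_site_update": 1.0,
}

def simulate_operation(disrupted=None, delay_factor=3.0, runs=1000):
    """Monte Carlo estimate of total operation time; a disrupted stage takes
    delay_factor times longer on average (e.g. after an infrastructure
    takedown or losing the member who handled that stage)."""
    totals = []
    for _ in range(runs):
        total = 0.0
        for stage, mean in STAGES.items():
            scale = delay_factor if stage == disrupted else 1.0
            total += random.expovariate(1.0 / (mean * scale))
        totals.append(total)
    return sum(totals) / len(totals)

# Example: compare the baseline against disruption of payment processing.
baseline = simulate_operation()
with_disruption = simulate_operation(disrupted="payment_processing")
```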

Marco Wähner — The prevalence and use of conspiracy theories in anonymity networks

Marco first asks what a conspiracy theory is; these commonly involve right-wing extremism, antisemitism, and misinformation. There are many challenges in researching conspiracy theories, as the language is often indirect and coded, although this is not a new phenomenon.

What is the influence of environmental and structural factors on conspiracy theories in anonymised networks? Marco notes conspiracy theories can strengthen social ties and foster a sense of belonging, and may also serve ideological or social incentives.

Marco asks how we can identify these theories circulating in anonymised networks, and whether they are used to promote illicit activities or drive sales; this could then be used to formulate intervention strategies. They took a data-driven approach, looking at CrimeBB and ExtremeBB data to find conspiracies using dictionary keyword searches and topic modelling. Preliminary research found the prevalence of conspiracies was very low; it is a bit higher on ExtremeBB, but still rare.

They provide explanations for the low prevalence: keywords are indirect and can be out of context when searching, and conspiratorial communication is not always needed to sell products. They aim to strengthen the study design by coding a subsample to check for false positives and by using classical ML models. They find a dictionary approach may not be a good starting point, and conspiracies are not always used to sell products.

Join Our 3-Course Series on Cybersecurity Economics

On 2 October, TU Delft is starting a new online three-course series on cybersecurity economics. I am co-teaching this course with Michel van Eeten (TU Delft), Daniel Woods (University of Edinburgh), Simon Parkin (TU Delft), Rolf van Wegberg (TU Delft), Tyler Moore (Tulsa Uni) and Rainer Böhme (Innsbruck Uni). The course also features content from Ross Anderson (University of Cambridge), recorded before his passing. Ross was passionate about teaching, and was deeply involved in the design of this MOOC.

The first course on Foundation and Measurement provides you with foundational micro-economic concepts to explain the security behavior of the various actors involved in securing the organization – internally, like IT and business units, and externally, like suppliers, customers and regulators. Next, it equips you with a causal framework to understand how to measure the effectiveness of security controls, as well as what measurements are currently available.

The second course on Users and Attackers presents a wealth of insights on the individuals involved in security: from user behavior to the strategies of attackers. Contrary to popular opinion, users are not the weakest link. If you want to know why users do not follow company security policies, you need to look at the costs imposed on them. On the side of the attackers, there are also clear incentives at work. The course covers the latest insights on attacker behavior.

The third course on Solutions covers answers to overcome the incentive misalignment and information problems at the level of organizations and at the level of markets. Starting with the standard framework of risk management, the course unpacks how to identify solutions in risk mitigation and risk transfer and where risk acceptance might be more rational. Finally, we need to address market failures, since they end up undermining the security of firms and society at large.

Cambridge Cybercrime Conference 2024 – Liveblog

The Cambridge Cybercrime Centre’s seventh one-day conference on cybercrime was held on Monday, 10th June 2024.

Similar to previous “liveblog” coverage of conferences and workshops on Light Blue Touchpaper, here is a “liveblog”-style overview of the talks at this year’s conference.

L. Jean Camp – Global Cyber Resilience Using a Public Health Model of eCrime (Keynote)

Who gets phished? This still hasn’t changed much in 20 years. We still don’t know how people are targeted, or even if they are targeted. People need to identify security indicators, domain names, etc., and this is hard. Current practice with warnings does not provide what people need. While people can learn how to use bad interfaces, we can’t expect people to pay attention all the time and without interruption. Expertise alone is not adequate: LastPass devs were phished. She looked at phishing factors, and asked how good each population was at identifying phishing and legitimate websites, finding familiarity and gender did not make a significant difference for identifying phishing websites, but familiarity was important for identifying legitimate websites. Later, they asked participants about security expertise. We tend to write warnings for ourselves (security experts), rather than for end users. They also compared risk perception across populations. Overall, they found computer expertise (positive) and age (negative) were the primary factors in identifying phishing pages. How can we learn from public health to provide more effective warnings which work for the wider general population?

Gabriella Williams – Beyond Borders: Exploring Security, Privacy, Cultural, and Legal Challenges in Metaverse Sexual Harassment

Gabriella is a PhD researcher in digital identity and age assurance methods to mitigate virtual harms. The virtual reality environment (the metaverse) introduces new risks and harms, by creating a new environment with anonymity where people can be whoever they want to be. Gabriella asks whether sexual harassment is a crime in the metaverse. There is currently no legal framework, and jurisdictions vary online. The metaverse also has cultural issues, with standing close to someone, making unwanted contact, and inappropriate jokes: how can this be moderated? There are many issues with collecting metadata on social interactions and biometric data, and security issues with over-reliance on automation and threats to authentication and integrity. Their current research looks at the challenges of implementing age assurance, and how identities can be authenticated.

Bomin Keum – The Incel Paradox: Does Collective Self-Loathing Facilitate Radicalisation or Belonging?

What don’t we know and why don’t we know it? We have a hard time agreeing on what radicalisation is, but this is a process rather than instances of extremist violence. Online radicalisation is facilitated through anonymity, perceived strength in numbers, and too much information spread and absorbed quickly. Bomin considers the use of the Us vs Them framework: collectively constructed perception differentiating the in-group from the out-group. Incel communities show negativity within the group as well as out, which is different to other communities. The Us vs Them framework has “us” as self-directed victimhood with men deprived of their “right to sex” whereas the “them” refers to a perception of society giving “too much freedom to women”. What are the self and other narrative framings, and which topics are associated with self vs other narrative frames? Bomin compares 2019 and 2020 datasets around the start of the pandemic. Internal group themes have helplessness and victimisation, whereas outside has unfair advantages and shameful other. Collectively, there are narratives of community, violence, and vision. They note you can’t take discussions at face value, as the language used can be quite extreme and text-level analysis may not reflect intent. Also, there is some shifting from blame to mockery of others. Not all radical actors commit violence but can inform facilitators behind intensification. Applying theories to these communities can be questionable, due to the unique aspects of the communities, and needs further data-driven research to improve on theory.

Jason Nurse – Ransomware Harms and the Victim Experience

Supply chain issue with St. Thomas’ Hospital last week, where a supplier of hospitals was hit by ransomware, and a critical incident was declared in London. Focus in the media on the financial impact, but what are the other harms of this, on both individuals and society? Jason carried out a literature review, and ran workshops and interviews alongside harm modelling to explore effects. What do we know already from the literature, and what can we learn from individuals? Interviews were focused on people who were subject to a ransomware attack or had professional experience of supporting organisations affected by ransomware. This includes cyber insurance organisations, which are now a big player. Gathering qualitative data from interviews, and using thematic analysis. Findings show this is a serious risk for all organisations, including small businesses: “everything you relied on yesterday doesn’t work today”. Can also create reputational harm for organisations. Applying the idea of orders of harm: first-order are harms directly to the person or org, second-order are downstream orgs and individuals, and third-order are the economy and society. Implications include a loss of trust in law enforcement, reduced faith in public services, and normalisation of cybercrime. Other impacts include harms to staff: staff members having to deal with the situation, including overworking to resolve issues. Highlights potential correlations between burnout and cybersecurity issues. Next, Jason looks at how to model harms. They gather data on well publicised events and to establish relationships between harms. This finds many downstream harms: we can more deeply explore harms arising throughout society rather than just “the data was encrypted”.

Ethan Thomas – Operation Brombenzyl and Operation Cronos

DDoS for hire continues to be a threat, enabling easy attacks against infrastructure, and these are targeted by site take downs and arrests. Finding a new way to provide a longer lasting impact, disrupting the marketplace. Using splash pages to deter users, and also creating law enforcement-run DDoS for hire websites. Some of the disguised sites were “seized”, others were “outed” as NCA controlled, and some are still running. Second operation is Cronos, again using deception but applied to ransomware attacks. Finding broad deterrence messaging doesn’t always work well, now there is focus on showing victims cases where cybercriminals did not uphold their promises.

Luis Adan Saavedra del Toro – Sideloading of Modded Apps: User Choice, Security and Piracy

What are modded apps, and why do users use them? Android users have the capability of installing any app they download from the internet, outside of the Google Play Store. Third-party stores have ads and user review features. Modded apps have unlocked pro features, such as a modded Spotify app to bypass ads and other paid features. Modded gaming apps have free in-app purchases. Luis found over 400 modded Android app markets, and crawled the 13 most popular, creating the ModZoo dataset. Most of these modded apps are games, with lots of duplicates across markets. None of the markets had any payment infrastructure. They discovered apps with changed code had added additional permissions and advertising libraries. Some apps had had their Ad IDs changed. 9% of those with modded code were malicious. iOS has misconceptions around jailbreaking. iOSModZoo has ~30k apps. iOSZoo is a dataset of ~55k free App Store apps. Most iOS modded apps are pirated copies of paid apps.

Felipe Moreno-Vera and Daniel S. Menasché – Beneath the Cream: Unveiling Relevant Information Points from CrimeBB with Its Ground Truth Labels

Looking at exploits which are shared on underground forums. The team used three types of labels: post-type, intent, and crime-type, which they used to complement their approach to tracking keywords, their usage, and different vulnerability levels discussed. They create a classifier for threats, so they can identify what is being discussed. They use regex to identify CVEs, and a function to identify language. They note the labels used were only available for one site, and later use ChatGPT to create more labels for posts. They find ChatGPT improves on existing labels.
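
The CVE extraction and language check can be sketched as below, using the standard CVE identifier format and the langdetect package; the paper’s exact tooling is not stated, so treat this as an assumption.

```python
import re

from langdetect import detect  # pip install langdetect

# CVE identifiers follow the pattern CVE-YYYY-NNNN (4 or more digits).
CVE_PATTERN = re.compile(r"\bCVE-\d{4}-\d{4,7}\b", re.IGNORECASE)

def extract_cves(post: str):
    """Return all CVE identifiers mentioned in a forum post, upper-cased."""
    return [match.upper() for match in CVE_PATTERN.findall(post)]

def detect_language(post: str) -> str:
    """Best-effort language identification; very short or noisy posts may fail."""
    try:
        return detect(post)
    except Exception:  # langdetect raises on empty or undetectable text
        return "unknown"
```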

Jeremy D. Seideman, Shoufu Luo, and Sven Dietrich – The Guy In The Chair: Examining How Cybercriminals Use External Resources to Supplement Underground Forum Conversations

“Guy in the chair” is the support network that “connects the dots”. They looked at underground forum conversations to identify what this support network is. Do people post URLs, do they advertise things, do they talk about other communications? What is the wider context? Past literature shows that forums work best as a social network, forming communities. Their project examines offensive AI usage, presenting their data pipeline, which they use to clean data prior to using topic transfer models. Following this, they identified buckets of URLs. The majority of known links were other forums, code sharing, image hosting, and file sharing. Lots of the links had link rot. Future work will further explore the application of analysis methods used with archaeological count data to their dataset.
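
A simple sketch of the URL bucketing and link-rot check; the actual pipeline and the categories used in the paper are richer than this.

```python
import re
from collections import Counter
from urllib.parse import urlparse

import requests

URL_PATTERN = re.compile(r"https?://\S+")

def bucket_urls(posts):
    """Extract URLs from forum posts and count them by domain, a simple
    version of the 'bucketing' step described in the talk."""
    domains = Counter()
    urls = []
    for post in posts:
        for url in URL_PATTERN.findall(post):
            urls.append(url)
            domains[urlparse(url).netloc.lower()] += 1
    return urls, domains

def check_link_rot(urls, timeout=5):
    """Roughly estimate link rot by issuing HEAD requests; network errors and
    4xx/5xx responses count as dead links."""
    dead = 0
    for url in urls:
        try:
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            if resp.status_code >= 400:
                dead += 1
        except requests.RequestException:
            dead += 1
    return dead / len(urls) if urls else 0.0
```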

Anh V. Vu – Yet Another Diminishing Spark: Low-level Cyberattacks in the Israel-Gaza Conflict

Anh notes differing perspectives of cyberwar in the world media, with a strong focus on high-profile cyber attacks. However, what is happening with low-level cybercrime actors and the services supporting these attacks? They are using data from website defacement attacks and UDP amplification DDoS attacks, alongside collections of volunteer hacking discussions. They contrast the conflicts of Russia vs Ukraine and Israel vs. Gaza. Anh finds interest in low-level DDoS and defacement attacks dropped off quickly, although notes that these findings should not be confounded with state-sponsored cyber attacks.

Dalyapraz Manatova – Relationships Matter: Reconstructing the Organisational Structure of a Ransomware Group

Dalyapraz has been studying the dynamics of cybercrime networks, thinking about these as a socio-technical complex system, with technical, economic, and social factors. Existing literature shows that eCrime has “communities”, with admins and moderators. When these communities are disrupted, they often move to other places. Participants often have different pseudonyms depending on who they are communicating with, e.g. as an administrator or to trade. However, these communities are more like organisations, with roles, tasks, scale, and scope, following a similar structure to as-a-service (aaS) offerings.

Marilyne Ordekian – Investigating Wrench Attacks: Physical Attacks Targeting Cryptocurrency Users

Wrench attacks have been around since the start of Bitcoin, yet have received little academic attention. Marilyne gathered data on wrench attacks through Bitcoin Talk discussions and interviews. Incidents were reported across different areas, from 2011 to 2021. There were peaks of incidents, which coincided with bitcoin reaching an all-time high. Why? Potential reasons include financial gain, theft being easier than hacking, and the absence of account transfer limits. They found that 25% of these incidents occurred during in-person meet-ups. Are wrench attacks reported? No, they are underreported. They propose safety mechanisms for individuals, including not bragging, diversifying funds, and digital safety practices. Also, they suggest existing regulations could be strengthened, such as improved KYC verification to consider the risk of wrench attacks. System design changes could include redesigning apps to hide balance amounts.

Mariella Mischinger – Investigating and Comparing Discussion Topics in Multilingual Underground Forums

Grasping at straw

Britain’s National Crime Agency has spent the last five years trying to undermine encryption, saying it might stop them arresting hundreds of men every month for downloading indecent images of children. Now they complain that most of the men they do prosecute escape jail. Eight in ten men convicted of image offences escaped an immediate prison sentence, and the NCA’s Director General Graeme Biggar describes this as “striking”.

I agree, although the conclusions I draw are rather different. In Chatcontrol or Child Protection? I explained how the NCA and GCHQ divert police resources from tackling serious contact offences, such as child rape and child murder, to much less serious secondary offences around images of historical abuse and even synthetic images. The structural reasons are simple enough: they favour centralised policing over local efforts, and electronic surveillance over community work.

One winner is the NCA, which apparently now has 200 staff tracing people associated with alarms raised automatically by Big Tech’s content surveillance, while the losers include Britain’s 43 local police forces. If 80% of the people arrested as a result of Mr Biggar’s activities don’t even merit any jail time, then my conclusion is that the Treasury should cut his headcount by at least 160, and give each Chief Constable an extra 3-4 officers instead. Frontline cops agree that too much effort goes into image offences and not enough into the more serious contact crimes.

Mr Biggar argues that Facebook is wicked for turning on end-to-end encryption in Facebook Messenger, as he won’t be able to catch as many bad men in future. But if encryption stops him wasting police time, well done Zuck! Mr Biggar also wants Parliament to increase the penalties. But even though Onan was struck dead by God for spilling his seed upon the ground, I hope we can have more rational priorities for criminal law enforcement in the 21st century.

How hate sites evade the censor

On Tuesday we had a seminar from Liz Fong-Jones entitled “Reverse engineering hate” about how she, and a dozen colleagues, have been working to take down a hate speech forum called Kiwi Farms. We already published a measurement study of their campaign, which forced the site offline repeatedly in 2022. As a result of that paper, Liz contacted us and this week she told us the inside story.

The forum in question specialises in personal attacks, and many of their targets are transgender. Their tactics include doxxing their victims, trawling their online presence for material that is incriminating or can be misrepresented as such, putting doctored photos online, and making malicious complaints to victims’ employers and landlords. They describe this as “milking people for laughs”. After a transgender activist in Canada was swatted, about a dozen volunteers got together to try to take the site down. They did this by complaining to the site’s service providers and by civil litigation.

This case study is perhaps useful for the UK, where the recent Online Safety Bill empowers Ofcom to do just this – to use injunctions in the civil courts to take down unpleasant websites.

The Kiwi Farms operator has for many months resisted the activists by buying the services required to keep his website up, including his data centre floor space, his transit, his AS, his DNS service and his DDoS protection, through a multitude of changing shell companies. The current takedown mechanisms require a complainant to first contact the site operator; he publishes complaints, so his followers can heap abuse on them. The takedown crew then has to work up a chain of suppliers. Their processes are usually designed to stall complainants, so that getting through to a Tier 1 and getting them to block a link takes weeks rather than days. And this assumes that the takedown crew includes experienced sysadmins who can talk the language of the service providers, to whose technical people they often have direct access; without that, it would take months rather than weeks. The net effect is that it took a dozen volunteers thousands of hours over six months from October 22 to April 23 to get all the Tier 1s to drop KF, and over $100,000 in legal costs. If the bureaucrats at Ofcom are going to do this work for a living, without the skills and access of Liz and her team, it could be harder work than they think.

Liz’s seminar slides are here.

Hacktivism, in Ukraine and Gaza

People who write about cyber-conflict often talk of hacktivists and other civilian volunteers who contribute in various ways to a cause. Might the tools and techniques of cybercrime enable its practitioners to be effective auxiliaries in a real conflict? Might they fall foul of the laws of war, and become unlawful combatants?

We have now measured hacktivism in two wars – in Ukraine and Gaza – and found that its effects appear to be minor and transient in both cases.

In the case of Ukraine, hackers supporting Ukraine attacked Russian websites after the invasion, followed by Russian hackers returning the compliment. The tools they use, such as web defacement and DDoS, can be measured reasonably well using resources we have developed at the Cambridge Cybercrime Centre. The effects were largely trivial, expressing solidarity and sympathy rather than making any persistent contribution to the conflict. Their interest in the conflict dropped off rapidly.

In Gaza, we see the same pattern. After Hamas attacked Israel and Israel declared war, there was a surge of attacks that peaked after a few days, with most targets being strategically unimportant. In both cases, discussion on underground cybercrime forums tailed off after a week. The main difference is that the hacktivism against Israel is one-sided; supporters of Palestine have attacked Israeli websites, but the number of attacks on Palestinian websites has been trivial.

The Pre-play Attack in Real Life

Recently I was contacted by a Falklands veteran who was a victim of what appears to have been a classic pre-play attack; his story is told here.

Almost ten years ago, after we wrote a paper on the pre-play attack, we were contacted by a Scottish sailor who’d bought a drink in a bar in Las Ramblas in Barcelona for €33, and found the following morning that he’d been charged €33,000 instead. The bar had submitted ten transactions an hour apart for €3,300 each, and when we got the transaction logs it turned out that these transactions had been submitted through three different banks. What’s more, although the transactions came from the same terminal ID, they had different terminal characteristics. When the sailor’s lawyer pointed this out to Lloyds Bank, they grudgingly accepted that it had been technical fraud and refunded the money.

In the years since then, I’ve used this as a teaching example both in tutorial talks and in university lectures. A payment card user has no trustworthy user interface, so the PIN entry device can present any transaction, or series of transactions, for authentication, and the customer is none the wiser. The mere fact that a customer’s card authenticated a transaction does not imply that the customer mandated that payment.

Payment by phone should eventually fix this, but meantime the frauds continue. They’re particularly common in nightlife establishments, both here and overseas. In the first big British case, the Spearmint Rhino in Bournemouth had special conditions attached to its license for some time after a series of frauds; a second case affected a similar establishment in Soho; there have been others. Overseas, we’ve seen cases affecting UK cardholders in Poland and the Baltic states. The technical modus operandi can involve a tampered terminal, a man-in-the-middle device or an overlay SIM card.

By now, such attacks are very well-known and there really isn’t any excuse for banks pretending that they don’t exist. Yet, in this case, neither the first responder at Barclays nor the case handler at the Financial Ombudsman Service seemed to understand such frauds at all. Multiple transactions against one cardholder, coming via different merchant accounts, and with delay, should have raised multiple red flags. But the banks have gone back to sleep, repeating the old line that the card was used and the customer PIN was entered, so it must all be the customer’s fault. This is the line they took twenty years ago when chip and pin was first introduced, and indeed thirty years ago when we were suffering ATM fraud at scale from mag-strip copying. The banks have learned nothing, except perhaps that they can often get away with lying about the security of their systems. And the ombudsman continues to claim that it’s independent.

2023 Workshop on the Economics of Information Security

WEIS 2023, the 22nd Workshop on the Economics of Information Security, will be held in Geneva from July 5-7, with a theme of Digital Sovereignty. We now have a list of sixteen accepted papers; there will also be three invited speakers, ten posters, and ten challenges for a Digital Sovereignty Hack on July 7-8.

The deadline for early registration is June 10th, and we have discount hotel bookings reserved until then. As Geneva gets busy in summer, we suggest you reserve your room now!

Security economics course

Back in 2015 I helped record a course in security economics in a project driven by colleagues from Delft. This was launched as an EDX MOOC as well as becoming part of the Delft syllabus, and it has been used in many other courses worldwide. In Brussels, in December, a Ukrainian officer told me they use it in their cyber defence boot camp.

There’s been a lot of progress in security economics over the past seven years; see for example the liveblogs of the workshop on the economics of information security here. So it’s time to update the course, and we’ll be working on that between now and May.

If there are any topics you think we should cover, or any bugs you’d like to report, please get in touch!